Neuroscience

Articles and news from the latest research reports.

Posts tagged artificial intelligence

136 notes

Image: Eleven areas of the brain show differential activity levels in a Dartmouth study using functional MRI to measure how humans manipulate mental imagery. Credit: Alex Schlegel, Dartmouth College
Researchers discover how and where imagination occurs in human brains
New insights into ‘mental workspace’ may help advance artificial intelligence
Philosophers and scientists have long puzzled over where human imagination comes from. In other words, what makes humans able to create art, invent tools, think scientifically and perform other incredibly diverse behaviors?
The answer, Dartmouth researchers conclude in a new study, lies in a widespread neural network — the brain’s “mental workspace” — that consciously manipulates images, symbols, ideas and theories and gives humans the laser-like mental focus needed to solve complex problems and come up with new ideas.
Their findings, titled “Network structure and dynamics of the mental workspace,” appear the week of Sept. 16 in the Proceedings of the National Academy of Sciences.
"Our findings move us closer to understanding how the organization of our brains sets us apart from other species and provides such a rich internal playground for us to think freely and creatively," says lead author Alex Schlegel, a graduate student in the Department of Psychological and Brain Sciences. "Understanding these differences will give us insight into where human creativity comes from and possibly allow us to recreate those same creative processes in machines."
Scholars theorize that human imagination requires a widespread neural network in the brain, but evidence for such a “mental workspace” has been difficult to produce with techniques that mainly study brain activity in isolation. Dartmouth researchers addressed the issue by asking: How does the brain allow us to manipulate mental imagery? For instance, imagining a bumblebee with the head of a bull, a seemingly effortless task but one that requires the brain to construct a totally new image and make it appear in our mind’s eye.
In the study, 15 participants were asked to imagine specific abstract visual shapes and then to mentally combine them into new, more complex figures or to mentally dismantle them into their separate parts. Researchers measured the participants’ brain activity with functional MRI and found that a cortical and subcortical network spanning a large part of the brain was responsible for their imagery manipulations. The network closely resembles the “mental workspace” that scholars have theorized might be responsible for much of human conscious experience and for the flexible cognitive abilities that humans have evolved.

Filed under imagination artificial intelligence neuroimaging brain mapping neuroscience science

291 notes

Artificial Intelligence Is the Most Important Technology of the Future
Artificial Intelligence is a set of tools driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques conducting pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today and talk about potential future applications.
Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.
The key applications of Artificial Intelligence are in areas that involve more data than humans can handle on our own, but decisions simple enough that an AI can get somewhere with them: big data, and lots of little rote operations that add up to something useful. An example is image recognition. By doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take a picture of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
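To make those "rigorous, repetitive, low-level calculations on image features" concrete, here is a deliberately tiny sketch: reduce each image to a brightness histogram and score how much the histograms overlap. The pixel lists and bucket count are invented for illustration; real systems like Google Goggles use far richer features and models.

```python
# Toy feature-based image matching: brightness histograms plus
# histogram intersection. Everything here is illustrative only.

def histogram(pixels, buckets=4):
    """Count how many 0-255 brightness values fall into each bucket."""
    counts = [0] * buckets
    for p in pixels:
        counts[min(p * buckets // 256, buckets - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]  # normalize so any two images compare

def similarity(a, b):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(x, y) for x, y in zip(histogram(a), histogram(b)))

landmark = [30, 40, 200, 210, 220, 35]  # made-up "photo" as a flat pixel list
query    = [32, 38, 205, 215, 218, 40]
print(similarity(landmark, query))      # 1.0 here: the two histograms match
```

The rote character of the work is the point: nothing in the loop is clever, there is just a great deal of it when the images are real.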
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
As an example of how difficult it is to build a connectome without AI, consider the nematode C. elegans, the only organism whose connectome has been completed to date. Although electron microscopy was used to exhaustively image this worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece the data into a full wiring map. That is despite the worm having just about 7,000 connections between roughly 300 neurons. By comparison, the human brain is estimated to contain 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
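The scale gap is worth working out explicitly, using the commonly cited round numbers above (they are estimates, not precise counts):

```python
# How much larger is the human connectome problem than the worm's?
worm_connections  = 7_000      # C. elegans, roughly
human_connections = 100e12     # human brain, roughly 100 trillion

ratio = human_connections / worm_connections
print(f"{ratio:.1e}")          # about 1.4e+10: ten billion times larger
```

A decade of manual effort multiplied by ten billion is not a research program; it is why the field needs automated reconstruction.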
There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis for the hippocampus, the part of the brain responsible for forming memories, was demonstrated recently by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
These implants are built by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subfield of AI) and abundant computing power, this approach would never work.
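The fit-a-model-to-recordings idea can be sketched in miniature. This is not Berger's actual processor: the "circuit" below is an invented linear response standing in for real neural recordings, and the model is a single artificial neuron trained by gradient descent to reproduce the recorded input-output mapping.

```python
import random

def intact_circuit(x):
    """Stand-in for the healthy biology we record from (invented response)."""
    return 0.8 * x + 0.1

# "Recordings": input/output pairs sampled from the intact circuit.
recordings = [(x / 10, intact_circuit(x / 10)) for x in range(11)]

# One-neuron model, fitted by stochastic gradient descent on squared error.
w, b, lr = random.random(), 0.0, 0.1
for _ in range(2000):
    for x, y in recordings:
        err = (w * x + b) - y   # prediction error on one recording
        w -= lr * err * x       # gradient step for the weight
        b -= lr * err           # gradient step for the bias

print(round(w, 2), round(b, 2))  # converges near 0.8 and 0.1
```

The real problem is the same loop at vastly greater scale: many signals, nonlinear dynamics, and a model good enough to substitute for the tissue it was fitted to.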
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computerized glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, in any weather. To be useful to a driver, for instance, they would need to identify roads and landmarks faster and more reliably than any current technology allows. AR is not there yet, but probably will be within the next ten years. All of this falls under computer vision, a core part of AI.
Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, concerning the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypotheses and tested them. Though the context and the experiments were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
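The hypothesize-and-test loop such a robot automates can be caricatured in a few lines. The gene names, growth rates, and threshold below are all invented for illustration; in Adam's case the "experiment" was a real yeast-genetics assay run on lab robotics.

```python
# Toy robot-scientist loop: propose a hypothesis per candidate gene,
# run an automated experiment, keep only hypotheses the data supports.

def run_experiment(gene):
    """Stand-in for an automated assay (invented growth-rate data)."""
    return {"geneA": 1.0, "geneB": 0.4, "geneC": 1.0}[gene]

candidate_genes = ["geneA", "geneB", "geneC"]
confirmed = []
for gene in candidate_genes:
    hypothesis = f"knocking out {gene} slows growth"
    growth = run_experiment(gene)   # test the hypothesis experimentally
    if growth < 0.8:                # threshold chosen for the toy data
        confirmed.append(hypothesis)

print(confirmed)                    # only the geneB hypothesis survives
```

The hard part this sketch hides is exactly what the paragraph says: generating sensible hypotheses in the first place requires broad background knowledge, not just a loop over candidates.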
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Filed under artificial intelligence AI brain mapping cognitive prostheses technology robotics science

105 notes

Computer smart as a 4-year-old

Artificial and natural knowledge researchers at the University of Illinois at Chicago have IQ-tested one of the best available artificial intelligence systems to see how intelligent it really is.
Turns out, it’s about as smart as the average 4-year-old, they will report July 17 at the U.S. Artificial Intelligence Conference in Bellevue, Wash.
The UIC team put ConceptNet 4, an artificial intelligence system developed at M.I.T., through the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence test, a standard IQ assessment for young children.
They found ConceptNet 4 has the average IQ of a young child. But unlike most children, the machine’s scores were very uneven across different portions of the test.
“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study.
Sloan said ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities.
“But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,” he said.
One of the hardest problems in building an artificial intelligence, Sloan said, is devising a computer program that can make sound and prudent judgments based on a simple perception of the situation or facts, which is the dictionary definition of common sense.
Common sense has eluded AI engineers because it requires both a very large collection of facts and what Sloan calls implicit facts, things so obvious that we don’t know we know them. A computer may know the temperature at which water freezes, but it doesn’t know, as we do, that ice is cold.
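A toy fact store shows why those implicit facts matter. The triples below are invented, but ConceptNet works on broadly similar subject-relation-object assertions, and a system can only answer with what was written down:

```python
# Toy knowledge base of (subject, relation, object) triples.
# The explicit fact made it in; the "implicit fact" nobody bothers
# to state never did, so the system cannot use it.
facts = {
    ("water", "freezes_at", "0C"),
    ("ice", "is", "frozen water"),
    # missing: ("ice", "feels", "cold")
}

def knows(subject, relation, obj):
    return (subject, relation, obj) in facts

print(knows("water", "freezes_at", "0C"))  # True: an explicit fact
print(knows("ice", "feels", "cold"))       # False: the obvious fact is absent
```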
“All of us know a huge number of things,” said Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.” Life is a rich learning environment.
“We’re still very far from programs with common sense: AI that can answer comprehension questions with the skill of a child of 8,” said Sloan. He and his colleagues hope the study will help to focus attention on the “hard spots” in AI research.

Filed under ConceptNet 4 AI artificial intelligence neuroscience science
