Neuroscience

Articles and news from the latest research reports.

Posts tagged science

82 notes

Switching between habitual and goal-directed actions — a ‘2 in 1’ system in our brain

"Pressing the button of the lift at your workplace or apartment building is an automatic action – a habit. You don’t even really look at the different buttons; your hand is almost reaching out and pressing on its own. But what happens when you use the lift in a new place? In this case, your hand doesn’t know the way: you have to locate the buttons, find the right one, and only then can your hand press. Here, pushing the button is a goal-directed action." It is with this example that Rui Costa, principal investigator at the Champalimaud Neuroscience Programme (CNP), explains how critical it is in everyday life to be able to shift between habits and goal-directed actions quickly and accurately.

Unravelling the circuit that underlies this capacity, the capacity to “break habits”, was the goal of this study, carried out by Christina Gremel and Rui Costa at NIAAA, National Institutes of Health, USA, and the Champalimaud Foundation in Portugal, and published today (Date) in Nature Communications.

"We developed a task where mice would shift between making the same action in a goal-directed or habitual manner. We could then, for the first time, directly examine brain areas controlling the capacity to break habits," explains the study’s lead author Christina Gremel from NIAAA. Evidence from previous studies has shown that two neighbouring regions of the brain are necessary for these different functions – the dorsal medial striatum is necessary for goal-directed actions and the dorsal lateral striatum is necessary for habitual actions. What was not known, and this new study reveals, is that a third region, the orbital frontal cortex (OFC), is critical for shifting between these two types of actions. As explained by Rui Costa, "when neurons in the OFC were inhibited, the generation of goal-directed actions was disrupted, while activation of these neurons, by means of a technique called optogenetics, selectively increased goal-directed actions."

For Costa, the results of this study suggest “something quite extraordinary – the same neural circuits function in a dynamic way, enabling the learning of automatic and goal-directed actions in parallel.”

These results have important implications for understanding neuropsychiatric disorders where the balance between habits and goal-directed actions is disrupted, such as obsessive-compulsive disorder.

The neural bases of behaviour, and their connection to neuropsychiatric disorders, are at the core of ongoing work by neuroscientists and clinicians at the Champalimaud Foundation.

(Source: eurekalert.org)

Filed under goal-directed actions habitual actions decision making orbitofrontal cortex neuroscience science

142 notes

Practice Makes the Brain’s Motor Cortex More Efficient

Not only does practice make perfect, it also makes for more efficient generation of neuronal activity in the primary motor cortex, the area of the brain that plans and executes movement, according to researchers from the University of Pittsburgh School of Medicine. Their findings, published online today in Nature Neuroscience, showed that practice leads to decreased metabolic activity for internally generated movements, but not for visually guided motor tasks, and suggest the motor cortex is “plastic” and a potential site for the storage of motor skills.

image

The hand area of the primary motor cortex is known to be larger among professional pianists than among amateurs. This observation has suggested that extensive practice and the development of expert performance induce changes in the primary motor cortex, said senior investigator Peter L. Strick, Ph.D., Distinguished Professor and chair, Department of Neurobiology, Pitt School of Medicine.

Prior imaging studies have shown that markers of synaptic activity, meaning the input signals to neurons, decrease in the primary motor cortex as repeated actions become routine and an individual develops expertise at a motor skill. The researchers found that markers of synaptic activity also display a marked decrease in monkeys trained to perform sequences of movements that are guided from memory — an internally generated task — rather than from vision. They wondered whether the change in synaptic activity indicated that neuron firing also declined. To examine this issue they recorded neuron activity and sampled metabolic activity, a measure of synaptic activity, in the same animals.

All the monkeys were trained on two tasks and were rewarded when they reached out to touch an object in front of them. In the visually guided task, a visual target showed the monkeys where to reach and the end point was randomly switched from trial to trial. In the internally generated task the monkeys were trained to perform short sequences of movements without visual cues. They practiced the sequences until they achieved a level of skill comparable to an expert typist.

The researchers found neuron activity was comparable between monkeys that performed visually guided and internally generated tasks. However, metabolic activity was high for the visually guided task, but only modest during the internally generated task.

“This tells us that practicing a skilled movement and the development of expertise leads to more efficient generation of neuron activity in the primary motor cortex to produce the movement. The increase in efficiency could be created by a number of factors such as more effective synapses, greater synchrony in inputs and more finely tuned inputs,” Dr. Strick noted. “What is really important is that our results indicate that practice changes the primary motor cortex so that it can become an important substrate for the storage of motor skills. Thus, the motor cortex is adaptable, or plastic.”

(Source: upmc.com)

Filed under motor cortex neuronal activity synaptic activity motor skill practice neuroscience psychology science

111 notes

Breastfeeding may reduce Alzheimer’s risk

A new study suggests that mothers who breastfeed run a lower risk of developing Alzheimer’s, with longer periods of breastfeeding further reducing the risk.

Mothers who breastfeed their children may have a lower risk of developing Alzheimer’s Disease, with longer periods of breastfeeding also lowering the overall risk, a new study suggests.

The report, newly published in the Journal of Alzheimer’s Disease, suggests that the link may be to do with certain biological effects of breastfeeding. For example, breastfeeding restores insulin tolerance which is significantly reduced during pregnancy, and Alzheimer’s is characterised by insulin resistance in the brain.

Although they used data gathered from a very small group of just 81 British women, the researchers observed a highly significant and consistent correlation between breastfeeding and Alzheimer’s risk. They argue that this was so strong that any potential sampling error was unlikely.

At the same time, however, the connection was much less pronounced in women who already had a history of dementia in their family. The research team hope that the study – which was intended merely as a pilot – will stimulate further research looking at the relationship between female reproductive history and disease risk.

The findings may point towards new directions for fighting the global Alzheimer’s epidemic – especially in developing countries where cheap, preventative measures are desperately needed.

More broadly, the study opens up new lines of enquiry in understanding what makes someone susceptible to Alzheimer’s in the first place. It may also act as an incentive for women to breastfeed, rather than bottle-feed – something which is already known to have wider health benefits for both mother and child.

Dr Molly Fox, from the Department of Biological Anthropology at the University of Cambridge, who led the study, said: “Alzheimer’s is the world’s most common cognitive disorder and it already affects 35.6 million people. In the future, we expect it to spread most in low and middle-income countries. So it is vital that we develop low-cost, large-scale strategies to protect people against this devastating disease.”

Previous studies have already established that breastfeeding can reduce a mother’s risk of certain other diseases, and research has also shown that there may be a link between breastfeeding and a woman’s general cognitive decline later in life. Until now, however, little has been done to examine the impact of breastfeeding duration on Alzheimer’s risk.

Fox and her colleagues – Professor Carlo Berzuini and Professor Leslie Knapp – interviewed 81 British women aged between 70 and 100. These included both women with, and without, Alzheimer’s. In addition, the team also spoke to relatives, spouses and carers.

Through these interviews, the researchers collected information about the women’s reproductive history, their breastfeeding history, and their dementia status. They also gathered information about other factors that might account for their dementia, for example, a past stroke, or brain tumour.

Dementia status itself was measured using a standard rating scale called the Clinical Dementia Rating (CDR). The researchers also developed a method for estimating the age of Alzheimer’s sufferers at the onset of their disease, using the CDR as a basis and taking into account their age and existing, known patterns of Alzheimer’s progression. All of this information was then compared with the participants’ breastfeeding history.

Despite the small number of participants, the study revealed a number of clear links between breastfeeding and Alzheimer’s. These were not affected when the researchers took into account other potential variables such as age, education history, the age when the woman first gave birth, her age at menopause, or her smoking and drinking history.

The researchers observed three main trends:

  • Women who breastfed exhibited a reduced Alzheimer’s Disease risk compared with women who did not.
  • Longer breastfeeding history was significantly associated with a lower Alzheimer’s risk.
  • Women who had a higher ratio of total months pregnant during their life to total months breastfeeding had a higher Alzheimer’s risk.

The trends were, however, far less pronounced for women who had a parent or sibling with dementia. In these cases, the impact of breastfeeding on Alzheimer’s risk appeared to be significantly lower, compared with women whose families had no history of dementia.

The study argues that there may be a number of biological reasons for the connection between Alzheimer’s and breastfeeding, all of which require further investigation.

One theory is that breastfeeding deprives the body of the hormone progesterone, compensating for high levels of progesterone which are produced during pregnancy. Progesterone is known to desensitize the brain’s oestrogen receptors, and oestrogen may play a role in protecting the brain against Alzheimer’s.

Another possibility is that breastfeeding increases a woman’s glucose tolerance by restoring her insulin sensitivity after pregnancy. Pregnancy itself induces a natural state of insulin resistance. This is significant because Alzheimer’s is characterised by a resistance to insulin in the brain (and therefore glucose intolerance) to the extent that it is even sometimes referred to as “Type 3 diabetes”.

“Women who spent more time pregnant without a compensatory phase of breastfeeding therefore may have more impaired glucose tolerance, which is consistent with our observation that those women have an increased risk of Alzheimer’s disease,” Fox added.

Filed under breastfeeding alzheimer's disease progesterone dementia neuroscience science

209 notes

Are we there yet?

MIT researchers reveal how the brain keeps eyes on the prize.

“Are we there yet?”

As anyone who has traveled with young children knows, maintaining focus on distant goals can be a challenge. A new study from MIT suggests how the brain achieves this task, and indicates that the neurotransmitter dopamine may signal the value of long-term rewards. The findings may also explain why patients with Parkinson’s disease — in which dopamine signaling is impaired — often have difficulty in sustaining motivation to finish tasks.

The work is described this week in the journal Nature.

Previous studies have linked dopamine to rewards, and have shown that dopamine neurons show brief bursts of activity when animals receive an unexpected reward. These dopamine signals are believed to be important for reinforcement learning, the process by which an animal learns to perform actions that lead to reward.
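
The reward-prediction-error account sketched above can be illustrated with a toy model (an illustration, not part of the study; the reward value and learning rate here are arbitrary): a “dopamine-like” error signal equals received reward minus expected reward, and it shrinks as the animal learns to expect the reward.

```python
# Toy Rescorla-Wagner-style update: the prediction error (delta)
# mimics a dopamine burst that fades as the reward becomes expected.
def prediction_errors(n_trials, reward=1.0, alpha=0.3):
    value = 0.0              # current reward expectation
    errors = []
    for _ in range(n_trials):
        delta = reward - value   # large when the reward is unexpected
        value += alpha * delta   # update the expectation toward the reward
        errors.append(delta)
    return errors

errors = prediction_errors(10)
# The first trial's error is the largest; later errors approach zero
# as the reward becomes fully predicted.
```

In this sketch, an unexpected reward produces a large error (a big “burst”), and a fully predicted reward produces almost none, matching the idea that these signals drive learning only while outcomes remain surprising.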

Taking the long view

In most studies, that reward has been delivered within a few seconds. In real life, though, gratification is not always immediate: Animals must often travel in search of food, and must maintain motivation for a distant goal while also responding to more immediate cues. The same is true for humans: A driver on a long road trip must remain focused on reaching a final destination while also reacting to traffic, stopping for snacks, and entertaining children in the back seat.

The MIT team, led by Institute Professor Ann Graybiel — who is also an investigator at MIT’s McGovern Institute for Brain Research — decided to study how dopamine changes during a maze task approximating work for delayed gratification. The researchers trained rats to navigate a maze to reach a reward. During each trial a rat would hear a tone instructing it to turn either right or left at an intersection to find a chocolate milk reward.

Rather than simply measuring the activity of dopamine-containing neurons, the MIT researchers wanted to measure how much dopamine was released in the striatum, a brain structure known to be important in reinforcement learning. They teamed up with Paul Phillips of the University of Washington, who has developed a technology called fast-scan cyclic voltammetry (FSCV) in which tiny, implanted, carbon-fiber electrodes allow continuous measurements of dopamine concentration based on its electrochemical fingerprint.

“We adapted the FSCV method so that we could measure dopamine at up to four different sites in the brain simultaneously, as animals moved freely through the maze,” explains first author Mark Howe, a former graduate student with Graybiel who is now a postdoc in the Department of Neurobiology at Northwestern University. “Each probe measures the concentration of extracellular dopamine within a tiny volume of brain tissue, and probably reflects the activity of thousands of nerve terminals.”

Gradual increase in dopamine

From previous work, the researchers expected that they might see pulses of dopamine released at different times in the trial, “but in fact we found something much more surprising,” Graybiel says: The level of dopamine increased steadily throughout each trial, peaking as the animal approached its goal — as if in anticipation of a reward.

The rats’ behavior varied from trial to trial — some runs were faster than others, and sometimes the animals would stop briefly — but the dopamine signal did not vary with running speed or trial duration. Nor did it depend on the probability of getting a reward, something that had been suggested by previous studies.

“Instead, the dopamine signal seems to reflect how far away the rat is from its goal,” Graybiel explains. “The closer it gets, the stronger the signal becomes.” The researchers also found that the size of the signal was related to the size of the expected reward: When rats were trained to anticipate a larger gulp of chocolate milk, the dopamine signal rose more steeply to a higher final concentration.

In some trials the T-shaped maze was extended to a more complex shape, requiring animals to run further and to make extra turns before reaching a reward. During these trials, the dopamine signal ramped up more gradually, eventually reaching the same level as in the shorter maze. “It’s as if the animal were adjusting its expectations, knowing that it had further to go,” Graybiel says.
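
The pattern Graybiel describes, a ramp that scales with reward size but reaches the same final level regardless of maze length, is consistent with a signal proportional to the fraction of the path already covered, multiplied by the expected reward. A toy sketch of that relationship (my simplification, not the paper’s model):

```python
def dopamine_signal(position, goal_distance, reward_size):
    """Toy model: the signal tracks fractional progress toward the goal,
    scaled by the size of the expected reward."""
    return reward_size * (position / goal_distance)

short_maze = [dopamine_signal(p, 10, 1.0) for p in range(11)]
long_maze = [dopamine_signal(p, 20, 1.0) for p in range(21)]
# Both ramps end at the same level; the longer maze rises more
# gradually, as observed in the extended-maze trials.
big_reward = [dopamine_signal(p, 10, 2.0) for p in range(11)]
# A larger expected reward gives a steeper ramp to a higher peak.
```

The key property is that the final level depends only on the expected reward, while the slope adjusts to the distance remaining, just as the rats’ signals did.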

An ‘internal guidance system’

“This means that dopamine levels could be used to help an animal make choices on the way to the goal and to estimate the distance to the goal,” says Terrence Sejnowski of the Salk Institute, a computational neuroscientist who is familiar with the findings but who was not involved with the study. “This ‘internal guidance system’ could also be useful for humans, who also have to make choices along the way to what may be a distant goal.”

One question that Graybiel hopes to examine in future research is how the signal arises within the brain. Rats and other animals form cognitive maps of their spatial environment, with so-called “place cells” that are active when the animal is in a specific location. “As our rats run the maze repeatedly,” she says, “we suspect they learn to associate each point in the maze with its distance from the reward that they experienced on previous runs.”

As for the relevance of this research to humans, Graybiel says, “I’d be shocked if something similar were not happening in our own brains.” It’s known that Parkinson’s patients, in whom dopamine signaling is impaired, often appear to be apathetic, and have difficulty in sustaining motivation to complete a long task. “Maybe that’s because they can’t produce this slow ramping dopamine signal,” Graybiel says. 

Filed under dopamine parkinson's disease reinforcement learning place cells fast-scan cyclic voltammetry neuroscience science

86 notes

Putting the brakes on pain

Neuropathic pain — pain that results from a malfunction in the nervous system — is a daily reality for millions of Americans. Unlike normal pain, it doesn’t go away after the stimulus that provoked it ends, and it also behaves in a variety of other unusual and disturbing ways. Someone suffering from neuropathic pain might experience intense discomfort from a light touch, for example, or feel as though he or she were freezing in response to a slight change in temperature.

A major part of the answer to the problem of neuropathic pain, scientists believe, is found in spinal nerve cells that release a signaling chemical known as GABA. These GABA neurons act as a sort of brake on pain impulses; it’s thought that when they die or are disabled, the pain system goes out of control. If GABA neurons could be kept alive and healthy after peripheral nerve or tissue injury, it’s possible that neuropathic pain could be averted.

Now, University of Texas Medical Branch at Galveston researchers have found a way to, at least partially, accomplish this objective. The key, they determined, is stemming the biochemical assault by reactive oxygen species that are generated in the wake of nerve injury.

"GABA neurons are particularly susceptible to oxidative stress, and we hypothesized that reactive oxygen species contribute to neuropathic sensitization by promoting the loss of GABA neurons as well as hindering GABA functions," said UTMB professor Jin Mo Chung, senior author of a paper on the research now online in the journal Pain.

To test this hypothesis — and determine whether GABA neurons could be saved — the researchers conducted a series of experiments in mice that had been surgically altered to simulate the conditions of neuropathic pain. In one key experiment, mice treated with an antioxidant compound for a week after surgery were compared with untreated mice. The antioxidant mice showed less pain-associated behavior and were found to have far more GABA neurons than the untreated mice.

"So by giving the antioxidant we lowered the pain behavior, and when we look at the spinal cords we see the GABA neuron population is almost the same as normal," Chung said. "That suggested we prevented those neurons from dying, which is a big thing."

One complication, Chung noted, is a “moderate quantitative mismatch” between the behavioral data and the GABA-neuron counts. While the antioxidant mice displayed less pain behavior, their behavioral improvement wasn’t as substantial as their high number of GABA neurons would suggest. One possibility is that the surviving neurons were somehow impaired — a hypothesis supported by electrophysiological data.

Although no clinical trials are planned in the immediate future, Chung believes antioxidants have great potential as a treatment for neuropathic pain. “If this is true and it works in humans — well, any time you can salvage neurons, it’s a good thing,” he said. “Neuropathic pain is very difficult to treat, and I think this is a possibility, a good possibility.”

(Source: eurekalert.org)

Filed under neuropathic pain GABA neurons reactive oxygen species animal model oxidative stress neuroscience science

72 notes

Questions answered with the pupils of your eyes
Patients who are otherwise completely unable to communicate can answer yes or no questions within seconds with the help of a simple system—consisting of just a laptop and camera—that measures nothing but the size of their pupils. The tool, described and demonstrated in Current Biology, a Cell Press publication, on August 5 takes advantage of changes in pupil size that naturally occur when people do mental arithmetic. It requires no specialized equipment or training at all.
The new pupil response system might not only help those who are severely motor-impaired communicate, but might also be extended to assessing the mental state of patients whose state of consciousness is unclear, the researchers say.
"It is remarkable that a physiological system as simple as the pupil has such a rich repertoire of responses that it can be used for a task as complex as communication," says Wolfgang Einhäuser of Philipps-Universität Marburg in Germany.
The researchers asked healthy people to solve a math problem only when the correct answer to a yes or no question was shown to them on a screen. The mental load associated with solving that problem caused an automatic increase in pupil size, which the researchers showed they could measure and translate into an accurate answer to questions like “Are you 20 years old?”
They then tested out their pupil response algorithm on seven “typical” locked-in patients who had suffered brain damage following a stroke. In many cases, they were able to discern an answer based on pupil size alone.
"We find it remarkable that the system worked almost perfectly in all healthy observers and then could be transferred directly from them to the patients, with no need for training or parameter adjustment," Einhäuser says.
While the system could still use improvement in terms of speed and accuracy, those are technical hurdles Einhäuser is confident they can readily overcome. Their measures of pupil response could already make an important difference for those who need it most.
"For patients with altered state of consciousness—those who are in a coma or other unresponsive state—any communication is a big step forward," he says.

Filed under locked-in syndrome brain damage pupil size pupil response system neuroscience science

98 notes

Centers throughout the brain work together to make reading possible
A combination of brain scans and reading tests has revealed that several regions in the brain are responsible for allowing humans to read.
The findings open up the possibility that individuals who have difficulty reading may only need additional training for specific parts of the brain — targeted therapies that could more directly address their individual weaknesses.
“Reading is a complex task. No single part of the brain can do all the work,” said Qinghua He, postdoctoral research associate at the USC Brain and Creativity Institute, based at the USC Dornsife College of Letters, Arts and Sciences, and first author of a study on this research that was published in The Journal of Neuroscience on July 31.
The study looked at the correlation between reading ability and brain structure revealed by high-resolution magnetic resonance imaging (MRI) scans of more than 200 participants.
To control for external factors, the participants were about the same age and education level (college students); right-handed (lefties use the opposite hemisphere of their brain for reading); and all had about the same language skills (Chinese-speaking, with English as a second language for more than nine years). Their IQ, response speed and memory were also tested.
The study first collected data on seven different reading tests from a sample of more than 400 participants. These tests were designed to probe three aspects of reading ability: phonological decoding ability (the ability to sound out printed words); form-sound association (how well participants could make connections between a new word and a sound); and naming speed (how quickly participants were able to read out loud).
Each of these aspects, it turned out, was related to the gray matter volume — the number of neurons — in different parts of the brain.
The MRI analysis showed that phonological decoding ability was strongly connected with gray matter volume in the left superior parietal lobe (around the top/rear of the brain); form-sound association was strongly connected with the hippocampus and cerebellum; and naming speed lit up a variety of locations around the brain.
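The core analysis, correlating each behavioral score with regional gray matter volume across participants, can be sketched as follows. The data below are made up for illustration; the study used 233 scans and more elaborate statistics.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) data: phonological-decoding scores and gray-matter
# volume in the left superior parietal lobe for five participants.
decoding_scores = [12, 15, 9, 18, 14]
parietal_volume = [0.61, 0.68, 0.55, 0.74, 0.66]
print(f"r = {pearson(decoding_scores, parietal_volume):.2f}")
```

In the actual study this kind of brain-behavior correlation would be computed voxel-wise and corrected for multiple comparisons, but the underlying quantity is the same.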
“Our results strongly suggest that reading consists of unique capacities and is supported by distinct neural systems that are relatively independent of general cognitive abilities,” said Gui Xue, corresponding author of the study. Xue was formerly a research assistant professor at USC and now is a professor and director of the Center for Brain and Learning Sciences at Beijing Normal University.
“Although there is no doubt that reading has to build up existing neural systems due to the short history of written language in human evolution, years of reading experiences might have finely tuned the system to accommodate the specific requirement of a given written system,” Xue said.
He and Xue collaborated with Chunhui Chen and Qi Dong of Beijing Normal University; Chuansheng Chen of the University of California, Irvine; and Zhong-Lin Lu of Ohio State University.
One of the top features of this study was its unusually large sample size, according to the researchers. Typically, MRI studies test a relatively small sample of individuals — perhaps around 20 to 30 — because of the high cost of using the MRI machine. Testing a single individual can cost about $500, depending on the nature of the research.
The team had the good fortune of receiving access to Beijing Normal University’s new MRI center — the BNU Imaging Center for Brain Research — just before it opened to the public. With support from several grants, the researchers were able to conduct MRI tests on 233 individuals.
Next, the group will explore how to combine data from other measures, such as white matter, resting-state and task-based functional MRI, as well as more powerful machine-learning techniques, to assess individuals’ reading abilities more accurately.
“Research along this line will enable the early diagnosis of reading difficulties and the development of more targeted therapies,” Xue said.

Filed under reading brain scans brain structure MRI gray matter parietal lobe hippocampus cerebellum neuroscience science

182 notes

Study reveals potential role of ‘love hormone’ oxytocin in brain function
Findings of NYU Langone researchers may have relevance in autism-spectrum disorder
In a loud, crowded restaurant, having the ability to focus on the people and conversation at your own table is critical. Nerve cells in the brain face similar challenges in separating wanted messages from background chatter. A key element in this process appears to be oxytocin, typically known as the “love hormone” for its role in promoting social and parental bonding.
In a study appearing online August 4 in Nature, NYU Langone Medical Center researchers decipher how oxytocin, acting as a neurohormone in the brain, not only reduces background noise, but more importantly, increases the strength of desired signals. These findings may be relevant to autism, which affects one in 88 children in the United States.
“Oxytocin has a remarkable effect on the passage of information through the brain,” says Richard W. Tsien, DPhil, the Druckenmiller Professor of Neuroscience and director of the Neuroscience Institute at NYU Langone Medical Center. “It not only quiets background activity, but also increases the accuracy of stimulated impulse firing. Our experiments show how the activity of brain circuits can be sharpened, and hint at how this re-tuning of brain circuits might go awry in conditions like autism.”
Children and adults with autism-spectrum disorder (ASD) struggle with recognizing the emotions of others and are easily distracted by extraneous features of their environment. Previous studies have shown that children with autism have lower levels of oxytocin, and mutations in the oxytocin receptor gene predispose people to autism. Recent brain recordings from people with ASD show impairments in the transmission of even simple sensory signals.
The current study built upon 30-year-old results from researchers in Geneva, who showed that oxytocin acted in the hippocampus, a region of the brain involved in memory and cognition. The hormone stimulated nerve cells – called inhibitory interneurons – to release a chemical called GABA. This substance dampens the activity of the adjoining excitatory nerve cells, known as pyramidal cells.
“From the previous findings, we predicted that oxytocin would dampen brain circuits in all ways, quieting both background noise and wanted signals,” Dr. Tsien explains. “Instead, we found that oxytocin increased the reliability of stimulated impulses – good for brain function, but quite unexpected.”
To resolve this paradox, Dr. Tsien and his Stanford graduate student Scott Owen collaborated with Gord Fishell, PhD, the Julius Raynes Professor of Neuroscience and Physiology at NYU Langone Medical Center, and NYU graduate student Sebnem Tuncdemir. They identified the particular type of inhibitory interneurons responsible for the effects of oxytocin: “fast-spiking” inhibitory interneurons.
The mystery of how oxytocin drives these fast-spiking inhibitory cells to fire, yet also increases signaling to pyramidal neurons, was solved through studies with rodent models. The researchers found that continually activating the fast-spiking inhibitory neurons – good for lowering background noise – also causes their GABA-releasing synapses to fatigue. Accordingly, when a stimulus arrives, the tired synapses release less GABA and excitation of the pyramidal neuron is not dampened as much, so that excitation drives the pyramidal neuron’s firing more reliably.
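The mechanism described above can be captured in a toy model of short-term synaptic depression. This is an illustrative sketch with an assumed depletion rate, not the study’s actual model: tonic firing of the fast-spiking interneurons depletes releasable GABA, so when a stimulus arrives, the fatigued synapse inhibits the pyramidal cell less and the stimulus drives firing more reliably.

```python
def gaba_release(n_preceding_spikes, depletion=0.3, baseline=1.0):
    """Toy short-term depression: each preceding spike leaves less
    releasable GABA at the synapse (depletion rate is an assumption)."""
    return baseline * (1 - depletion) ** n_preceding_spikes

rested = gaba_release(0)     # quiet interneuron: full inhibition available
fatigued = gaba_release(5)   # tonically driven (as with oxytocin): weak inhibition

stimulus_drive = 1.0
print(f"net pyramidal excitation, rested synapse:   {stimulus_drive - rested:.2f}")
print(f"net pyramidal excitation, fatigued synapse: {stimulus_drive - fatigued:.2f}")
```

The same depletion that weakens stimulus-evoked inhibition is produced by the tonic firing that suppresses background noise, which is why the two benefits come “for the price of one.”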
“The stronger signal and muffled background noise arise from the same fundamental action of oxytocin and give two benefits for the price of one,” Dr. Fishell explains. “It’s too early to say how the lack of oxytocin signaling is involved in the wide diversity of autism-spectrum disorders, and the jury is still out about its possible therapeutic effects. But it is encouraging to find that a naturally occurring neurohormone can enhance brain circuits by dialing up wanted signals while quieting background noise.”

Filed under oxytocin brain function ASD inhibitory interneurons hippocampus neuroscience science

129 notes

Study Reveals Genes That Drive Brain Cancer

About 15 percent of glioblastoma patients could receive personalized treatment with drugs currently used in other cancers

A team of researchers at the Herbert Irving Comprehensive Cancer Center at Columbia University Medical Center has identified 18 new genes responsible for driving glioblastoma multiforme, the most common—and most aggressive—form of brain cancer in adults. The study was published August 5, 2013, in Nature Genetics.

“Cancers rely on driver genes to remain cancers, and driver genes are the best targets for therapy,” said Antonio Iavarone, MD, professor of pathology and neurology at Columbia University Medical Center and a principal author of the study.

“Once you know the driver in a particular tumor and you hit it, the cancer collapses. We think our study has identified the vast majority of drivers in glioblastoma, and therefore a list of the most important targets for glioblastoma drug development and the basis for personalized treatment of brain cancer.”

Personalized treatment could be a reality soon for about 15 percent of glioblastoma patients, said Anna Lasorella, MD, associate professor of pediatrics and of pathology & cell biology at CUMC.

“This study—together with our study from last year, Research May Lead to New Treatment for Type of Brain Cancer—shows that about 15 percent of glioblastomas are driven by genes that could be targeted with currently available FDA-approved drugs,” she said. “There is no reason why these patients couldn’t receive these drugs now in clinical trials.”

New Bioinformatics Technique Distinguishes Driver Genes from Other Mutations

In any single tumor, hundreds of genes may be mutated, but distinguishing the mutations that drive cancer from mutations that have no effect has been a longstanding problem for researchers.

An analysis of all gene mutations in nearly 140 brain tumors has uncovered most of the genes responsible for driving glioblastoma. The analysis found 18 new driver genes (labeled red), never before implicated in glioblastoma and correctly identified the 15 previously known driver genes (labeled blue). The graphs show mutated genes that are commonly found in varying numbers in glioblastoma (left), that frequently contain insertions (middle), and that frequently contain deletions (right). Genes represented by blue dots in the graphs were statistically most likely to be driver genes. Image: Raul Rabadan/Columbia University Medical Center.

The Columbia team used a combination of high throughput DNA sequencing and a new method of statistical analysis to generate a short list of driver candidates. The massive study of nearly 140 brain tumors sequenced the DNA and RNA of every gene in the tumors to identify all the mutations in each tumor. A statistical algorithm designed by co-author Raul Rabadan, PhD, assistant professor of biomedical informatics and systems biology, was then used to identify the mutations most likely to be driver mutations. The algorithm differs from other techniques to distinguish drivers from other mutations in that it considers not only how often the gene is mutated in different tumors, but also the manner in which it is mutated.

“If one copy of the gene in a tumor is mutated at a single point and the second copy is mutated in a different way, there’s a higher probability that the gene is a driver,” Dr. Iavarone said.
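The scoring idea Dr. Iavarone describes can be sketched as a simple heuristic. This is a hypothetical simplification, not Rabadan’s actual algorithm: a gene scores higher when it is mutated recurrently across tumors, with a bonus when the two copies within one tumor carry distinct mutations.

```python
def driver_score(tumors, gene):
    """Toy driver-gene score (illustrative heuristic, not the published
    algorithm): recurrence across tumors plus a bonus for tumors where
    the gene's two copies carry different mutations."""
    recurrence = sum(gene in t for t in tumors)
    biallelic = sum(gene in t and len(set(t[gene])) > 1 for t in tumors)
    return recurrence + 2 * biallelic

# Each tumor maps gene -> list of mutations observed on its copies.
# Gene names are real glioblastoma genes; the mutations are made up.
tumors = [
    {"LZTR1": ["p.R198G", "p.W437X"], "TP53": ["p.R175H"]},
    {"LZTR1": ["p.T288I"], "EGFR": ["vIII"]},
    {"TP53": ["p.R175H", "p.R175H"]},
]
for gene in ["LZTR1", "TP53", "EGFR"]:
    print(gene, driver_score(tumors, gene))
```

Here LZTR1 outscores the others because it is both recurrent and hit in two different ways within the same tumor, the pattern the quote singles out as most driver-like.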

The analysis identified 15 driver genes that had been previously identified in other studies—confirming the accuracy of the technique—and 18 new driver genes that had never been implicated in glioblastoma.

Significantly, some of the most important candidates among the 18 new genes, such as LZTR1 and delta catenin, were confirmed to be driver genes in laboratory studies involving cancer stem cells taken from human tumors and examined in culture, as well as after they had been implanted into mice.

A New Model for Personalized Cancer Treatment

Because patients’ tumors are powered by different driver genes, the researchers say that a complicated analysis will be needed for personalized glioblastoma treatment to become a reality. First, all the genes in a patient’s tumor must be sequenced and analyzed to identify its driver gene.

“In some tumors it’s obvious what the driver is; but in others, it’s harder to figure out,” said Dr. Iavarone.

Once the candidate driver is identified, it must be confirmed in laboratory tests with cancer stem cells isolated from the patient’s tumor.

About 15 percent of glioblastoma driver genes can be targeted with currently available drugs, suggesting that personalized treatment for some patients may be possible in the near future. Personalized therapy for glioblastoma patients could be achieved by isolating the most aggressive cells from the patient’s tumor and identifying the driver gene responsible for the tumor’s growth (different tumors will be driven by different genes). Drugs can then be tested on the isolated cells to find the most promising candidate. In this image, the gene mutation driving the malignant tumor has been replaced with the normal gene, transforming malignant cells back into normal brain cells. Image: Anna Lasorella.

“Cancer stem cells are the tumor’s most aggressive cells and the critical cellular targets for cancer therapies,” said Dr. Lasorella. “Drugs that prove successful in hitting driver genes in cancer stem cells and slowing cancer growth in cell culture and animal models would then be tried in the patient.”

Personalized Treatment Already Possible for Some Patients

For 85 percent of the known glioblastoma drivers, no drugs that target them have yet been approved.

But the Columbia team has found that for about 15 percent of patients, whose tumors are driven by certain gene fusions, FDA-approved drugs that target those drivers are already available.

The study found that half of these patients have tumors driven by a fusion between the gene EGFR and one of several other genes. The fusion makes EGFR—a growth factor already implicated in cancer—hyperactive; hyperactive EGFR drives tumor growth in these glioblastomas.

“When this gene fusion is present, tumors become addicted to it—they can’t live without it,” Dr. Iavarone said. “We think patients with this fusion might benefit from EGFR inhibitors that are already on the market. In our study, when we gave the inhibitors to mice with these human glioblastomas, tumor growth was strongly inhibited.”

Other patients have tumors that harbor a fusion of the genes FGFR (fibroblast growth factor receptor) and TACC (transforming acidic coiled-coil), first reported by the Columbia team last year. These patients may benefit from FGFR kinase inhibitors. Preliminary trials of these drugs (for treatment of other forms of cancer) have shown that they have a good safety profile, which should accelerate testing in patients with glioblastoma.

Filed under brain cancer glioblastoma brain tumor genes stem cells genetics neuroscience science

291 notes

Artificial Intelligence Is the Most Important Technology of the Future
Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that perform pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today and talk about potential future applications.
Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.
The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but which involves decisions simple enough that an AI can get somewhere with it. Big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the brain of this worm in the 1970s and 80s, it took more than a decade of work to piece this data into a full map of the worm’s brain, even though that brain contains just 7,000 connections between 300 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
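The scale gap in the paragraph above is worth working out. Using the figures quoted there, a naive linear extrapolation of the manual C. elegans effort shows why automated computer vision is indispensable:

```python
# Back-of-the-envelope scale comparison using the figures quoted above.
worm_neurons, worm_synapses = 300, 7_000
human_neurons, human_synapses = 100_000_000_000, 100_000_000_000_000

print(f"synapse ratio: {human_synapses / worm_synapses:.1e}")

# If piecing together ~7,000 connections by hand took on the order of a
# decade, the same manual approach scaled linearly would take roughly
# 10**11 years, which is why automation is the only viable route.
years_manual = 10 * human_synapses / worm_synapses
print(f"naive manual estimate: {years_manual:.1e} years")
```

Even if automation and parallelism cut this estimate by many orders of magnitude, the remaining effort is still enormous, which is the point of Seung’s automated reconstruction pipeline.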
There’s another closely related area that depends on AI to make progress; cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.
Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its’ own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that perform pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I'll bring you up to speed on what is happening in AI today and talk about potential future applications.

Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.

The key applications of Artificial Intelligence lie in any area that involves more data than humans can handle on our own, but decisions simple enough that an AI can get somewhere with them: big data, plus lots of little rote operations that add up to something useful. An example is image recognition. By doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take a picture of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
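To make the "lots of little rote operations" idea concrete, here is a toy sketch of recognition by nearest-histogram match, in Python. This is a deliberately simplified stand-in, not how Google Goggles actually works; the image data and landmark names are invented for illustration.

```python
# Toy image "recognition" by nearest-histogram match: many small,
# rote per-pixel operations that add up to a useful decision.
# Images here are plain lists of grayscale values (0-255); real
# systems use far richer features, but the repetitive flavor is the same.

def histogram(pixels, bins=4):
    """Count how many pixels fall into each intensity bin."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return counts

def distance(h1, h2):
    """Sum of absolute differences between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def recognize(query, labeled_images):
    """Return the label whose histogram is closest to the query's."""
    qh = histogram(query)
    best = min(labeled_images, key=lambda item: distance(qh, histogram(item[1])))
    return best[0]

# Hypothetical "database" of labeled landmark images.
landmarks = [
    ("dark tower", [10, 20, 30, 15, 25]),
    ("bright dome", [240, 250, 230, 245, 235]),
]
print(recognize([12, 18, 28, 20, 22], landmarks))  # prints "dark tower"
```

Each individual step is trivial bookkeeping; the usefulness comes entirely from doing enormous numbers of such steps over real feature data.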

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such task is brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software that automatically determines the shapes of neurons and locates synapses. Seung has developed a fundamentally new kind of computer vision to automate work on building connectomes, which detail the connections between all the neurons in a brain. Connectomes are a key step toward building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the nematode worm C. elegans, the only organism with a completed connectome to date. Although electron microscopy was used to exhaustively image this worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece the data into a full map. This is despite that nervous system containing just about 7,000 connections between roughly 300 neurons. By comparison, the human brain contains on the order of 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
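The scale gap in those figures is worth computing explicitly; using the numbers cited above:

```python
# How much bigger is the human mapping problem than the worm's?
# Figures are the round numbers cited in the text.
worm_neurons, worm_connections = 300, 7_000
human_neurons, human_connections = 100_000_000_000, 100_000_000_000_000

neuron_ratio = human_neurons // worm_neurons          # ~333 million times
connection_ratio = human_connections // worm_connections  # ~14 billion times

print(f"{neuron_ratio:,}x the neurons")
print(f"{connection_ratio:,}x the connections")
```

If the worm took over a decade of manual work, a brute-force manual approach to the human brain is off by roughly ten orders of magnitude, which is exactly why automated computer vision is the only plausible route.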

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a damaged part of the brain. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis for the hippocampus, the part of the brain responsible for forming memories, was recently demonstrated by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

These implants are built by carefully recording the brain’s neural signals and constructing a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subfield of AI) and abundant computing power, this approach would never work.
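Berger's actual processor is vastly more elaborate, but the core ingredient, an artificial neural network fitted to recorded input/output signals, can be sketched in miniature. The following is a hypothetical single-neuron toy (a classic perceptron), not Berger's design; the "recorded" signals are invented:

```python
# A single artificial "neuron" trained to mimic a recorded input->output
# mapping: the same fit-a-model-to-signals idea behind cognitive
# prostheses, vastly simplified.

def neuron(weights, bias, inputs):
    """Weighted sum of inputs passed through a hard threshold."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward the recorded outputs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Hypothetical "recorded" signals: the cell fires only when
# both of its inputs are active.
recordings = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(recordings)
print([neuron(weights, bias, x) for x, _ in recordings])  # prints [0, 0, 0, 1]
```

A real prosthesis fits a far richer model to thousands of channels of spiking data, but the workflow is the same: record, fit, then let the fitted model stand in for the tissue.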

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computerized glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, in any weather. To be useful to a driver, for instance, they would need to identify roads and landmarks faster and more reliably than any current technology allows. AR is not there yet, but it probably will be within the next ten years. All of this falls under advances in computer vision, a branch of AI.

Finally, let’s consider some recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, concerning the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypotheses and tested them. Though the context and the experiments were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in the life sciences and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to have a great deal of common-sense knowledge, as well as specialized knowledge of the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. In theory it could be built today, but it will probably be a few years before anything like it exists in practice. Once it does, it’s difficult to say what the scientific returns will be, but they could be substantial. We’ll just have to build it and find out.
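The hypothesize-experiment-revise loop at the heart of a robot scientist can be sketched as control flow. This is a hypothetical toy, nothing like Adam's actual yeast-genetics pipeline: the "experiment" is a hidden dose threshold the program must discover by proposing and testing hypotheses.

```python
# Skeleton of an automated hypothesize -> experiment -> revise loop,
# the control flow behind a robot scientist like Adam. Hypothetical toy:
# the "lab" hides a growth threshold the program must discover.

def run_experiment(dose):
    """Stand-in for lab equipment: growth is observed only at or above
    a threshold the 'scientist' does not know in advance."""
    HIDDEN_THRESHOLD = 7
    return dose >= HIDDEN_THRESHOLD

def robot_scientist(max_dose=16):
    """Find the smallest dose that produces growth, revising the
    hypothesis after each experiment (a binary search over doses)."""
    low, high = 0, max_dose
    while low < high:
        hypothesis = (low + high) // 2  # "growth starts at this dose"
        if run_experiment(hypothesis):
            high = hypothesis            # hypothesis held; try lower doses
        else:
            low = hypothesis + 1         # refuted; revise upward
    return low

print(robot_scientist())  # prints 7, the hidden threshold
```

Real robot scientists replace the toy threshold with messy biology and the binary search with richer hypothesis spaces, but the loop of propose, test, and revise without human intervention is the same.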

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting and the most relevant to futurist goals.
