Neuroscience

Articles and news from the latest research reports.

Meet London’s Babylab, where scientists experiment on babies’ brains
In the laboratories of the Henry Wellcome Building at Birkbeck, University of London, children’s squeaky toys lie scattered on the floor. Brightly coloured posters of animals are pasted on the walls and picture books are stacked on the low tables. This is the Babylab — a research centre that runs experiments on children aged one month to three years to understand how they learn, develop and think. “The way babies’ brains change is an amazing and mysterious process,” says the lab director, psychologist Mark Johnson. “The brain increases in size by three- to four-fold between birth and teenage years, but we don’t understand how that relates to its function.”
The Birkbeck neuroscientists are interested in finding out how babies recognise faces, how they learn to pay attention to some things and not others, how they perceive emotion and how their language develops. Studies published by the lab have shown that babies prefer to look at faces over objects. They have also found that differences in the dopamine-producing gene can affect babies’ attention span and that at six to eight months of age, there are detectable differences in the brain patterns of babies who were later diagnosed with autism.
The biggest obstacle is designing the right kinds of experiment. “There aren’t many methods for getting inside the mind of an infant or a toddler,” Johnson explains. Graduate students at the Babylab have teamed up with technology companies, using a €1.9 million (£1.7 million) grant from the European Union, to develop tools such as EEG head nets that record electrical brain activity, helmets that use light to measure blood flow in different parts of the brain, and eye-trackers that help study attention. Eventually, they want to create wireless systems so babies can react and play naturally during experiments. But despite the wires, “all our studies are geared towards making sure our babies are contented,” says Johnson. “If we want data, we need happy babies.”



Parkinson’s drug helps older people to make decisions

A drug widely used to treat Parkinson’s disease can help to reverse age-related impairments in decision-making in some older people, a study from researchers at the Wellcome Trust Centre for Neuroimaging has shown.

The study, published today in the journal Nature Neuroscience, also describes changes in the patterns of brain activity of adults in their seventies that help to explain why they are worse at making decisions than younger people.

Poorer decision-making is a natural part of the ageing process that stems from a decline in our brains’ ability to learn from our experiences. Part of the decision-making process involves learning to predict the likelihood of getting a reward from the choices that we make.

An area of the brain called the nucleus accumbens is responsible for interpreting the difference between the reward that we’re expecting to get from a decision and the reward that is actually received. These so-called ‘prediction errors’, signalled by a brain chemical called dopamine, help us to learn from our actions and modify our behaviour so that we make better choices the next time.

Dr Rumana Chowdhury, who led the study at the Wellcome Trust Centre for Neuroimaging at UCL, said: “We know that dopamine decline is part of the normal aging process so we wanted to see whether it had any effect on reward-based decision making. We found that when we treated older people who were particularly bad at making decisions with a drug that increases dopamine in the brain, their ability to learn from rewards improved to a level comparable to somebody in their twenties and enabled them to make better decisions.”

The team used a combination of behavioural testing and brain-imaging techniques to investigate the decision-making process in 32 healthy volunteers in their early seventies, compared with 22 volunteers in their mid-twenties. Older participants were tested on and off L-DOPA (levodopa), a drug that increases levels of dopamine in the brain and is widely used in the clinic to treat Parkinson’s.

The participants were asked to complete a behavioural learning task called the two-armed bandit, which mimics the decisions that gamblers make while playing slot machines. Players were shown two images and had to choose the one that they thought would give them the bigger reward. Their performance before and after drug treatment was assessed by the amount of money they won in the task.
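The prediction-error learning described above can be sketched in a few lines of code. The following toy simulation is an illustration only, not the study’s actual task software: it plays a two-armed bandit using the delta-rule update (value += learning rate × prediction error), with the learning rate standing in, loosely, for how effectively dopamine signals prediction errors. The reward probabilities and exploration rate are arbitrary choices for the sketch.

```python
import random

def run_bandit(learning_rate, trials=200, probs=(0.8, 0.2), seed=0):
    """Simulate a two-armed bandit played with prediction-error learning.

    Each trial: pick the arm with the higher learned value (with occasional
    exploration), observe a reward, and nudge that arm's value by
    learning_rate * prediction_error -- the delta-rule update that the
    article attributes to dopamine signalling.
    """
    rng = random.Random(seed)
    values = [0.0, 0.0]        # learned reward expectation for each arm
    total_reward = 0
    for _ in range(trials):
        if rng.random() < 0.1:             # explore 10% of the time
            choice = rng.randrange(2)
        else:                              # otherwise exploit the better arm
            choice = 0 if values[0] >= values[1] else 1
        reward = 1 if rng.random() < probs[choice] else 0
        prediction_error = reward - values[choice]   # the "dopamine" signal
        values[choice] += learning_rate * prediction_error
        total_reward += reward
    return total_reward

# A lower learning rate stands in for weaker dopamine signalling: the agent
# updates its expectations more slowly and typically earns less overall.
print(run_bandit(learning_rate=0.3))    # intact learner
print(run_bandit(learning_rate=0.02))   # slow learner
```

In this reading, a drug that boosts dopamine corresponds to restoring a usable learning rate, which is one intuition for why the worst performers gained the most from L-DOPA.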

"The older volunteers who were less able to predict the likelihood of a reward from their decisions, and so performed worst in the task, showed a significant improvement following drug treatment," Dr Chowdhury explains.

The team then used functional magnetic resonance imaging (fMRI) to record brain activity in the participants as they played the game, and measured connections between areas of the brain involved in reward prediction using a technique called diffusion tensor imaging (DTI).

The findings reveal that the older adults who performed best in the gambling game before drug treatment had greater integrity of their dopamine pathways. Older adults who performed poorly before drug treatment were not able to adequately signal reward expectation in the brain – this was corrected by L-DOPA and their performance improved on the drug.

Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, said: “This careful investigation into the subtle cognitive changes that take place as we age offers important insights into what may happen at both a functional and anatomical level in older people who have problems with making decisions. That the team were able to reverse these changes by manipulating dopamine levels offers the hope of therapeutic approaches that could allow older people to function more effectively in the wider community.”

(Source: eurekalert.org)


Unraveling the molecular roots of Down syndrome
Researchers discover that the extra chromosome inherited in Down syndrome impairs learning and memory because it leads to low levels of SNX27 protein in the brain.
What is it about the extra chromosome inherited in Down syndrome—chromosome 21—that alters brain and body development? Researchers have new evidence that points to a protein called sorting nexin 27, or SNX27. SNX27 production is inhibited by a molecule encoded on chromosome 21. The study, published March 24 in Nature Medicine, shows that SNX27 is reduced in human Down syndrome brains. The extra copy of chromosome 21 means a person with Down syndrome produces less SNX27 protein, which in turn disrupts brain function. What’s more, the researchers showed that restoring SNX27 in Down syndrome mice improves cognitive function and behavior.
“In the brain, SNX27 keeps certain receptors on the cell surface—receptors that are necessary for neurons to fire properly,” said Huaxi Xu, Ph.D., Sanford-Burnham professor and senior author of the study. “So, in Down syndrome, we believe lack of SNX27 is at least partly to blame for developmental and cognitive defects.”
SNX27’s role in brain function
Xu and colleagues started out working with mice that lack one copy of the snx27 gene. They noticed that the mice were mostly normal, but showed some significant defects in learning and memory. So the team dug deeper to determine why SNX27 would have that effect. They found that SNX27 helps keep glutamate receptors on the cell surface in neurons. Neurons need glutamate receptors in order to function correctly. With less SNX27, these mice had fewer active glutamate receptors and thus impaired learning and memory.
SNX27 levels are low in Down syndrome
Then the team got thinking about Down syndrome. The SNX27-deficient mice shared some characteristics with Down syndrome, so they took a look at human brains with the condition. This confirmed the clinical significance of their laboratory findings—humans with Down syndrome have significantly lower levels of SNX27.
Next, Xu and colleagues wondered how Down syndrome and low SNX27 are connected—could the extra chromosome 21 encode something that affects SNX27 levels? They suspected microRNAs, small pieces of genetic material that don’t code for protein, but instead influence the production of other genes. It turns out that chromosome 21 encodes one particular microRNA called miR-155. In human Down syndrome brains, the increase in miR-155 levels correlates almost perfectly with the decrease in SNX27.
Xu and his team concluded that, due to the extra chromosome 21 copy, the brains of people with Down syndrome produce extra miR-155, which by indirect means decreases SNX27 levels, in turn decreasing surface glutamate receptors. Through this mechanism, learning, memory, and behavior are impaired.
Restoring SNX27 function rescues Down syndrome mice
If people with Down syndrome simply have too much miR-155 or not enough SNX27, could that be fixed? The team explored this possibility. They used a noninfectious virus as a delivery vehicle to introduce human SNX27 into the brains of Down syndrome mice.
“Everything goes back to normal after SNX27 treatment. It’s amazing—first we see the glutamate receptors come back, then memory deficit is repaired in our Down syndrome mice,” said Xin Wang, a graduate student in Xu’s lab and first author of the study. “Gene therapy of this sort hasn’t really panned out in humans, however. So we’re now screening small molecules to look for some that might increase SNX27 production or function in the brain.”



DNA damage occurs as part of normal brain activity
Scientists at the Gladstone Institutes have discovered that a certain type of DNA damage long thought to be particularly detrimental to brain cells can actually be part of a regular, non-harmful process. The team further found that disruptions to this process occur in mouse models of Alzheimer’s disease—and identified two therapeutic strategies that reduce these disruptions.
Scientists have long known that DNA damage occurs in every cell, accumulating as we age. But a particular type of DNA damage, known as a double-strand break, or DSB, has long been considered a major force behind age-related illnesses such as Alzheimer’s. Today, researchers in the laboratory of Gladstone Senior Investigator Lennart Mucke, MD, report in Nature Neuroscience that DSBs in neuronal cells in the brain can also be part of normal brain functions such as learning—as long as the DSBs are tightly controlled and repaired in good time. Further, the accumulation of the amyloid-beta protein in the brain—widely thought to be a major cause of Alzheimer’s disease—increases the number of neurons with DSBs and delays their repair.
"It is both novel and intriguing team’s finding that the accumulation and repair of DSBs may be part of normal learning," said Fred H. Gage, PhD, of the Salk Institute who was not involved in this study. "Their discovery that the Alzheimer’s-like mice exhibited higher baseline DSBs, which weren’t repaired, increases these findings’ relevance and provides new understanding of this deadly disease’s underlying mechanisms."
In laboratory experiments, two groups of mice explored a new environment filled with unfamiliar sights, smells and textures. One group was genetically modified to simulate key aspects of Alzheimer’s, and the other was a healthy, control group. As the mice explored, their neurons became stimulated as they processed new information. After two hours, the mice were returned to their familiar, home environment.
The investigators then examined the neurons of the mice for markers of DSBs. The control group showed an increase in DSBs right after they explored the new environment—but after being returned to their home environment, DSB levels dropped.
"We were initially surprised to find neuronal DSBs in the brains of healthy mice," said Elsa Suberbielle, DVM, PhD, Gladstone postdoctoral fellow and the paper’s lead author. "But the close link between neuronal stimulation and DSBs, and the finding that these DSBs were repaired after the mice returned to their home environment, suggest that DSBs are an integral part of normal brain activity. We think that this damage-and-repair pattern might help the animals learn by facilitating rapid changes in the conversion of neuronal DNA into proteins that are involved in forming memories."
The group of mice modified to simulate Alzheimer’s had higher DSB levels at the start—levels that rose even higher during neuronal stimulation. In addition, the team noticed a substantial delay in the DNA-repair process.
To counteract the accumulation of DSBs, the team first used a therapeutic approach built on two recent studies—one of which was led by Dr. Mucke and his team—that showed the widely used anti-epileptic drug levetiracetam could improve neuronal communication and memory in both mouse models of Alzheimer’s and in humans in the disease’s earliest stages. The mice they treated with the FDA-approved drug had fewer DSBs. In their second strategy, they genetically modified mice to lack the brain protein called tau—another protein implicated in Alzheimer’s. This manipulation, which they had previously found to prevent abnormal brain activity, also prevented the excessive accumulation of DSBs.
The team’s findings suggest that restoring proper neuronal communication is important for staving off the effects of Alzheimer’s—perhaps by maintaining the delicate balance between DNA damage and repair.
"Currently, we have no effective treatments to slow, prevent or halt Alzheimer’s, from which more than 5 million people suffer in the United States alone," said Dr. Mucke, who directs neurological research at Gladstone and is a professor of neuroscience and neurology at the University of California, San Francisco, with which Gladstone is affiliated. "The need to decipher the causes of Alzheimer’s and to find better therapeutic solutions has never been more important—or urgent. Our results suggest that readily available drugs could help protect neurons against some of the damages inflicted by this illness. In the future, we will further explore these therapeutic strategies. We also hope to gain a deeper understanding of the role that DSBs play in learning and memory—and in the disruption of these important brain functions by Alzheimer’s disease."
(Image courtesy: Lulu Qian, Erik Winfree & Jehoshua Bruck | California Institute of Technology)



Farsighted engineer invents bionic eye to help the blind
For UCLA bioengineering professor Wentai Liu, more than two decades of visionary research burst into the headlines last month when the FDA approved what it called “the first bionic eye for the blind.”
The Argus II Retinal Prosthesis System — developed by a team of physicians and engineers from around the country — aids adults who have lost their eyesight due to retinitis pigmentosa (RP), age-related macular degeneration or other eye diseases that destroy the retina’s light-sensitive photoreceptors.
At the heart of the device is a tiny yet powerful computer chip, developed by Liu, that, when implanted in the retina, effectively sidesteps the damaged photoreceptors to “trick” the eye into seeing. The Argus II operates with a miniature video camera mounted on a pair of eyeglasses, which sends information about the images it detects to a microprocessor worn on the user’s waistband. The microprocessor wirelessly transmits electronic signals to the implanted chip, a fingernail-size grid of 60 electrodes. The chip stimulates the retina’s nerve cells with electrical impulses that travel up the optic nerve to the brain’s visual cortex. There, the brain assembles them into a composite image.
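To get a feel for the resolution involved, here is a toy sketch, written purely as an illustration: if the 60 electrodes were arranged as, say, a 6 × 10 grid (an assumption made here for the example, not a specification from the article), the camera image would have to be reduced to one stimulation value per electrode, for instance by average-pooling.

```python
def downsample_to_grid(image, rows=6, cols=10):
    """Average-pool a grayscale image (list of lists, values 0-255)
    down to a rows x cols grid -- one value per hypothetical 'electrode'.
    Assumes the image dimensions are divisible by rows and cols.
    """
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols          # block size feeding each electrode
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) // len(block))   # mean brightness of block
        grid.append(row)
    return grid

# A 60 x 100 synthetic scene: bright left half, dark right half.
image = [[200] * 50 + [20] * 50 for _ in range(60)]
grid = downsample_to_grid(image)
print(len(grid), len(grid[0]))  # 6 10
```

Sixty values is enough to recover coarse shapes and motion, which matches the reported experience of implant recipients: outlines and large letters, not fine detail.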
Recipients of the retinal implant can read oversized letters of the alphabet, discern objects and movement, and even see the outlines and some details of faces. And while the picture is far from perfect — the healthy human eye sees at a much higher resolution — it’s a breakthrough for people like the first patient to receive the implant in clinical trials, a man in his 70s who had been blinded at age 20 by RP. “It was the first time he’d seen light in a half-century,” said Liu, adding that “it feels good as the engineer” to have helped make this possible.
Liu joined the Artificial Retina Project in 1988 as a professor of computer and electrical engineering at North Carolina State University. The multidisciplinary research project was funded by the U.S. Department of Energy’s Office of Science, which anticipated widespread eyesight loss in America’s aging population. Leading the project was Duke University ophthalmologist and neurosurgeon Dr. Mark Humayun, now on the faculty at USC, who tapped Liu to engineer the artificial retina.
“I thought it was a great idea,” Liu said. “But I asked, ‘What can I do?’ because I didn’t know much about biology.” Humayun handed him a six-inch-thick medical manual on the retina. “The learning curve was very steep,” Liu recalled with a laugh.
However, Liu’s fellow engineers questioned his sanity. “I was working on integrated chip design and had just gotten tenure when I signed on to this project. They said, ‘You’re crazy!’ But I’m glad I made that choice, getting into this new field.”
How the bionic eye works



Neuronal Morphology Goes Digital: A Research Hub for Cellular and System Neuroscience
The importance of neuronal morphology in brain function has been recognized for over a century. The broad applicability of “digital reconstructions” of neuron morphology across neuroscience subdisciplines has stimulated the rapid development of numerous synergistic tools for data acquisition, anatomical analysis, three-dimensional rendering, electrophysiological simulation, growth models, and data sharing. Here we discuss the processes of histological labeling, microscopic imaging, and semiautomated tracing. Moreover, we provide an annotated compilation of currently available resources in this rich research “ecosystem” as a central reference for experimental and computational neuroscience.
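Digital reconstructions of this kind are commonly exchanged as plain-text SWC files (the format used by repositories such as NeuroMorpho.Org), in which each line describes one node of the neuron’s tree as “id type x y z radius parent”, with parent −1 marking the root. A minimal reader, written here as a sketch of the format rather than a reference implementation, might look like:

```python
def read_swc(text):
    """Parse SWC-format neuron morphology text.

    Each non-comment line is 'id type x y z radius parent'; a parent of
    -1 marks the root (typically the soma). Returns a dict mapping
    node id -> node fields.
    """
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comment lines
        i, t, x, y, z, r, parent = line.split()
        nodes[int(i)] = {
            "type": int(t),                       # e.g. 1 = soma, 3 = dendrite
            "xyz": (float(x), float(y), float(z)),
            "radius": float(r),
            "parent": int(parent),
        }
    return nodes

sample = """
# soma plus one short dendritic branch
1 1 0.0 0.0 0.0 5.0 -1
2 3 10.0 0.0 0.0 1.0 1
3 3 20.0 5.0 0.0 0.8 2
"""
nodes = read_swc(sample)
print(len(nodes), nodes[1]["parent"])  # 3 -1
```

Because the format is just a parent-pointer tree with coordinates and radii, the same structure feeds morphometric analysis, 3-D rendering, and compartmental simulation alike, which is what makes such reconstructions so broadly reusable.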

Filed under neurons neuronal activity neuronal function neuronal morphology neuronal reconstruction neuroscience science

394 notes

Links Between Physical And Emotional Pain Relief
We often regard relief as the dissipation of pain, discomfort or stress, but the specific emotion associated with the sense of relief is not fully understood. For this reason, a team of researchers undertook a study, published by the Association for Psychological Science, aimed at understanding more fully the psychological mechanisms that produce the feeling of relief.

To experts in the field, relief after the removal of pain is known as pain offset relief.

The team reports that the experience of relief, and the mechanisms behind it, are nearly identical in healthy individuals and in those with a history of self-harm. They argue that this similarity suggests pain offset relief is a natural mechanism for regulating our emotions. Prior to the laboratory portion of the experiment, the researchers assessed participants for emotion dysregulation and reactivity, self-injurious behavior, and various psychiatric disorders.

When an individual is experiencing pain or discomfort, the likelihood of experiencing a negative emotion increases significantly. The team wanted to learn whether pain offset relief produced more positive emotions or only alleviated negative ones.

Lead author Joseph Franklin and his colleagues used electrodes to measure participants' negative and positive emotions while the participants were subjected to loud noises. The loud noise was sometimes presented on its own; at other times, participants received a low- or high-intensity shock 3.5, 6 or 14 seconds before the noise.

Participants showed an increase in positive emotion combined with a decrease in negative emotion after pain offset. The greatest increase in positive emotion occurred almost simultaneously with the end of the high-intensity shocks, while the greatest decrease in negative emotion was associated with the end of a low-intensity shock.

The team has published its findings, which it says shed light on the emotional nature of pain offset relief, in the journals Psychological Science and Clinical Psychological Science. The researchers also suggest the study may help explain why some people seek the sensation of relief by engaging in self-harm.

It is important to note that the results do not support the hypothesis that heightened pain offset relief is a risk factor for self-harm. In fact, the team speculates that the biggest risk factors for nonsuicidal self-injury may concern how some people overcome the instinctive barriers that keep most people from harming themselves.
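The trial structure described in the study, a loud noise presented either alone or preceded by a low- or high-intensity shock at one of three offsets, can be sketched as a simple condition schedule. The function and parameter names below are hypothetical, for illustration only; they are not the authors' actual experimental code.

```python
# Hypothetical sketch of the shock-and-noise trial schedule: the noise is
# presented alone, or preceded by a low/high shock 3.5, 6, or 14 s earlier.
import itertools
import random

OFFSETS_S = (3.5, 6.0, 14.0)       # shock-to-noise intervals, in seconds
INTENSITIES = ("low", "high")      # shock intensity conditions

def build_trials(reps=2, seed=0):
    """Return a shuffled list of (condition, offset_seconds) tuples."""
    trials = [("noise_alone", None)] * reps
    for intensity, offset in itertools.product(INTENSITIES, OFFSETS_S):
        trials += [(intensity, offset)] * reps
    random.Random(seed).shuffle(trials)  # fixed seed for a reproducible order
    return trials

trials = build_trials()
print(len(trials))  # 2 noise-alone + 2 intensities * 3 offsets * 2 reps = 14
```

Randomizing the order of conditions, as sketched here, is what lets the measured emotion changes be attributed to pain offset rather than to trial position.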

Filed under relief pain offset relief negative emotions emotions psychology neuroscience science

109 notes

Nanotools for neuroscience and brain activity mapping
The ambitious and controversial Brain Activity Map (BAM) initiative, instituted by a small group of researchers last year, has been steadily gaining momentum. Earlier this week, a proof-of-principle zebrafish BAM was demonstrated with astounding clarity by a pair of researchers at the Howard Hughes Medical Institute.

Following on the heels of that work, an exhaustive 17-page compendium of current and soon-to-be-available brain-mapping tools was published yesterday in ACS Nano by a rapidly snowballing list of disciples.

The BAM roster has been carefully curated from the beginning, and its role in steering this diffuse effort should not be underestimated. With the ranks now swelling to 27, each contributor to the paper has, in word or in spirit, contributed notably to the 185 technologies it references. What we have here is not a research release; it is a textbook for the new neuroscience, and the choice of journal, though not publicly accessible, hints at a desire to draw even more nanoscale researchers into the effort.

Media attention has channeled formative criticism toward the effort in a way we have not seen before. The cautionary sentiments, at least, might be summarized by likening the BAM scientists to cavemen who have just discovered fire: sitting in the sand, they appear to be charting a course to the internal combustion engine while scribing on the ground with blunt bone instruments. The problem is that, having just fleshed out how the brain's wiring, the connectome, might be extracted, the community's elite have leapt ahead to the full activity map, or at least one for simpler animals.

The most extravagant technology proposed is undoubtedly the DNA ticker tape. It appears to have been developed initially, at least in part, by Northwestern University's Konrad Kording. Earlier BAM papers show, however, that George Church, of Human Genome Project fame, holds a patent that might cover some aspects of Kording's idea. In particular, Church seems responsible for the strikingly original concept of engineering DNA polymerases to produce predictable errors that would, in effect, record conditions within the cell onto DNA tapes. Fortunately Church, having entered neuroscience some time ago, is also a BAM founding father. His "nucleic acid memory device" could be the means by which the spike activity of each neuron is recorded.

Among the other wild exotica hinted at in the ACS Nano paper is the DNA barcode proposed by Anthony Zador of Cold Spring Harbor Laboratory. This approach would use a genetically modified rabies virus to infiltrate the nervous system and record every connection it crosses, web-crawler style. While Zador is not an author on this or the previous BAM papers, his techniques could not only deliver the connectome of a complex brain but potentially do so non-destructively. Furthermore, the barcode mechanism might be the ideal way to propagate the Kording-Church ticker-tape machinery from cell to cell, bundling topology and activity together.

Many of the neurotools mentioned in the ACS Nano paper are logical extensions of current technologies, just slightly smaller and a little higher in resolution. Recording cell activity with voltage-sensitive or calcium-imaging dyes, as was done in the zebrafish map, may or may not be the method of choice ten years from now. Other ideas, such as accessing neurons through fiber-optic probes threaded through the vasculature to the capillaries, were re-invigorated, as were entirely new sensors such as nanodiamond and nanogold devices.

Glaringly absent from the paper, however, is a clear consensus on what exactly is to be done with these tools. The zebrafish calcium map, for example, does not discriminate between neuron bodies, axons, dendrites, and synapses. The question of what level of detail new studies should aim for still needs to be asked. It is a tough question because an activity map, like the connectome that would couch it, is rewritten on scales beneath our direct perception: not only is it a moving target, its trajectory is largely unknown. A long-term project grounded in a set of technologies, as opposed to hypothesis-driven scientific inquiry, needs to balance fluidity with credibility.

Imagining what you would want to do if you were making a BAM of your own brain may be the best way to set the project's goals. In that case, the researchers may not go for the whole BAM right away, just the things they would want to know in enough detail to get answers in the least destructive way possible. If they plow through a series of animal studies generating terabytes of data but cannot then apply those methods to learn about our own brains, they will not have been successful. The priority, then, is the nondestructive BAM, focused on high-interest, highly accessible areas with the highest density of observables and the lowest observation risks. How to do this is the question of the next BAM installment.
Full Article
Filed under Brain Activity Map BAM brain mapping connectome neuroscience science

93 notes

Alterations in brain activity in children at risk of schizophrenia predate onset of symptoms
Research from the University of North Carolina has shown that children at risk of developing schizophrenia have brains that function differently from those not at risk.

Brain scans of children who have parents or siblings with the illness reveal neural circuitry that is hyperactivated, or stressed, by tasks that peers with no family history of the illness seem to handle with ease.

Because these differences in brain functioning appear before neuropsychiatric symptoms such as trouble focusing, paranoid beliefs, or hallucinations, the scientists believe the finding could point to early warning signs, or "vulnerability markers," for schizophrenia.

"The downside is saying that anyone with a first degree relative with schizophrenia is doomed. Instead, we want to use our findings to identify those individuals with differences in brain function that indicate they are particularly vulnerable, so we can intervene to minimize that risk," said senior study author Aysenil Belger, PhD, associate professor of psychiatry at the UNC School of Medicine.

The UNC study, published online on March 6, 2013, in the journal Psychiatry Research: Neuroimaging, is one of the first to look for alterations in brain activity associated with mental illness in individuals as young as nine years of age.

Individuals who have a first-degree family member with schizophrenia have an 8- to 12-fold increased risk of developing the disease. However, there is no way of knowing for certain who will develop schizophrenia until symptoms arise and a diagnosis is made. Some of the earliest signs are declines in verbal memory, IQ, and other mental functions, which researchers believe stem from an inefficiency in cortical processing, the brain's waning ability to tackle complex tasks.

In this study, Belger and her colleagues sought to identify what, if any, functional changes occur in the brains of adolescents at high risk of developing schizophrenia. She performed functional magnetic resonance imaging (fMRI) on 42 children and adolescents ages 9 to 18, half of whom had relatives with schizophrenia and half of whom did not. Each participant spent an hour and a half playing a game in which they had to identify a specific image, a simple circle, in a lineup of emotionally evocative images such as cute or scary animals. At the same time, the MRI scanner recorded changes in brain activity associated with each target-detection task.

Belger found that the circuitry involved in emotion and higher-order decision making was hyperactivated in individuals with a family history of schizophrenia, suggesting that the task was overtaxing these areas of the brain.

"This finding shows that these regions are not activating normally," she says. "We think that this hyperactivation eventually damages these specific areas in the brain to the point that they become hypoactivated in patients, meaning that when the brain is asked to go into high gear it no longer can."

Belger is currently exploring what role stress plays in the changing mental capacity of adolescents at high risk of developing schizophrenia. Though only a fraction of these individuals will be diagnosed with schizophrenia, Belger thinks it is important to pinpoint the most vulnerable people early in order to explore interventions that may stave off the illness.

"It may be as simple as understanding that people are different in how they cope with stress," says Belger. "Teaching strategies to handle stress could make these individuals less vulnerable to not just schizophrenia but also other neuropsychiatric disorders."

Filed under schizophrenia neuroimaging genetics fMRI brain neuroscience science

113 notes

Making Axons Branch and Grow to Help Nerve Regeneration After Injury

One molecule makes nerve cells grow longer. Another one makes them grow branches. These new experimental manipulations have taken researchers a step closer to understanding how nerve cells are repaired at their farthest reaches after injury. The research was recently published in the Journal of Neuroscience.

“If you injure a peripheral nerve, it will spontaneously regenerate, but it goes very slowly. We’re trying to speed that up,” said Dr. Jeffery Twiss, a professor and head of the biology department at Drexel University in the College of Arts and Sciences, who was senior author of the paper.

But, Twiss said, scientists still have a lot to learn about how nerve cells repair themselves. He and his colleagues are especially interested in how nerve cells are repaired in their longest-reaching sections, their axons. Axons can be up to a meter long in adult human nerve cells, extending away from the cell body toward neighboring nerve cells, with which they exchange signals. Restoring length to damaged axons is essential to restoring nerve function, but coordinating these repairs at a great distance from the cell's nucleus involves a mix of complex processes within each cell. To gain insight into these processes, the researchers have focused their work, including the present study, on repair proteins that are produced locally near the injury site in a nerve's axon.

Filed under axons nerve cells nerve function nerve regeneration proteins neuroscience science
