Neuroscience

Articles and news from the latest research reports.

74 notes

Fishing for memories

In our interaction with our environment we constantly refer to past experiences stored as memories to guide behavioral decisions. But how memories are formed, stored and then retrieved to assist decision-making remains a mystery. By observing whole-brain activity in live zebrafish, researchers from the RIKEN Brain Science Institute have visualized for the first time how information stored as long-term memory in the cerebral cortex is processed to guide behavioral choices.

The study, published today in the journal Neuron, was carried out by Dr. Tazu Aoki and Dr. Hitoshi Okamoto of the Laboratory for Developmental Gene Regulation, a pioneering group in the study of how the brain controls behavior in zebrafish.

The mammalian brain is too large to observe the whole neural circuit in action. But using a technique called calcium imaging, Aoki et al. were able to visualize for the first time the activity of the whole zebrafish brain during memory retrieval.

Calcium imaging takes advantage of the fact that calcium ions enter neurons upon neural activation. By introducing a calcium sensitive fluorescent substance in the neural tissue, it becomes possible to trace the calcium influx in neurons and thus visualize neural activity.
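As a rough illustration of how such recordings are quantified, fluorescence traces are commonly converted to a relative change, ΔF/F, against a resting baseline. The sketch below is a generic, simplified example of that calculation, not the analysis pipeline used in the study:

```python
# Illustrative sketch of quantifying a calcium-imaging trace as dF/F
# (relative fluorescence change). Generic example, not the study's pipeline.

def delta_f_over_f(trace, baseline_frames=10):
    """Return dF/F for a fluorescence time series.

    The baseline F0 is the mean of the first `baseline_frames` samples,
    a common (simplified) choice for resting fluorescence.
    """
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Example: flat baseline followed by a calcium transient.
trace = [100.0] * 10 + [150.0, 140.0, 120.0, 105.0]
dff = delta_f_over_f(trace)
print(max(dff))  # peak response of 0.5, i.e. a 50% fluorescence increase
```

In practice the baseline is estimated more carefully (e.g. from a running percentile), but the ratio above is the quantity behind most published calcium traces.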

The researchers trained transgenic zebrafish expressing a calcium sensitive protein to avoid a mild electric shock using a red LED as cue. By observing the zebrafish brain activity upon presentation of the red LED they were able to visualize the process of remembering the learned avoidance behavior.

They observed spot-like neural activity in the dorsal part of the fish telencephalon, the region corresponding to the mammalian cerebral cortex, when the red LED was presented 24 hours after the training session. No such activity was observed when the cue was presented 30 minutes after training.

In another experiment, Aoki et al. showed that if this region of the brain is removed, the fish can still learn the avoidance behavior and remember it short-term, but cannot form any long-term memory of it.

“This indicates that short-term and long-term memories are formed and stored in different parts of the brain. We think that short-term memories must be transferred to the cortical region to be consolidated into long-term memories,” explains Dr. Aoki.

The team then tested whether memories for the best behavioral choices can be modified by new learning. The fish were trained to learn two opposite avoidance behaviors, each associated with a different LED color, blue or red, as a cue. They found that presentation of the different cues activated different groups of neurons in the telencephalon, indicating that different behavioral programs are stored and retrieved by distinct populations of neurons.

“Using calcium imaging on zebrafish, we were able to visualize an on-going process of memory consolidation for the first time. This approach opens new avenues for research into memory using zebrafish as model organism,” concludes Dr. Okamoto.

Filed under zebrafish brain activity neural activity memory formation LTM calcium ions neuroscience science

120 notes

Fast and painless way to better mental arithmetic? Yes, there might actually be a way

In the future, if you want to improve your ability to manipulate numbers in your head, you might just plug yourself in. So say researchers reporting in the Cell Press journal Current Biology on May 16 on studies of a harmless form of brain stimulation applied to a brain area known to be important for math ability.

"With just five days of cognitive training and noninvasive, painless brain stimulation, we were able to bring about long-lasting improvements in cognitive and brain functions," says Roi Cohen Kadosh of the University of Oxford.

Remarkably, the improvements persisted for six months after training. No one knows exactly how this relatively new method of stimulation, called transcranial random noise stimulation (TRNS), works, but the researchers say the evidence suggests it allows the brain to work more efficiently by making neurons fire more synchronously.

Cohen Kadosh and his colleagues had shown previously that another form of brain stimulation could make people better at learning and processing new numbers. But, he says, TRNS is even less perceptible to those receiving it. TRNS also has the potential to help even more people. That’s because it has been shown to improve mental arithmetic—the ability to add, subtract, or multiply a string of numbers in your head, for example—not just new number learning. Mental arithmetic is a more complex and challenging task, which more than 20 percent of people struggle with.

Ultimately, Cohen Kadosh says, with better integration of neuroscience and education, this line of study could help people reach their cognitive potential in math and beyond. It might also be of particular help to those suffering from neurodegenerative illness, stroke, or learning difficulties.

"Maths is a highly complex cognitive faculty that is based on a myriad of different abilities," Cohen Kadosh says. "If we can enhance mathematics, therefore, there is a good chance that we will be able to enhance simpler cognitive functions."

Filed under brain stimulation cognitive functioning mental arithmetic learning difficulties neuroscience science

91 notes

Brain Makes Call on Which Ear Is Used for Cell Phone

If you’re a left-brain thinker, chances are you use your right hand to hold your cell phone up to your right ear, according to a newly published study from Henry Ford Hospital in Detroit.

The study – to appear online in JAMA Otolaryngology-Head & Neck Surgery – shows a strong correlation between brain dominance and the ear used to listen to a cell phone. More than 70% of participants held their cell phone up to the ear on the same side as their dominant hand, the study finds.

Left-brain dominant people – who account for about 95% of the population and have their speech and language center located on the left side of the brain – are more likely to use their right hand for writing and other everyday tasks.

Likewise, the Henry Ford study reveals most left-brain dominant people also use the phone in their right ear, despite there being no perceived difference in their hearing in the left or right ear. And, right-brain dominant people are more likely to use their left hand to hold the phone in their left ear.

“Our findings have several implications, especially for mapping the language center of the brain,” says Michael Seidman, M.D., FACS, director of the division of otologic and neurotologic surgery in the Department of Otolaryngology-Head and Neck Surgery at Henry Ford.

“By establishing a correlation between cerebral dominance and sidedness of cell phone use, it may be possible to develop a less-invasive, lower-cost option to establish the side of the brain where speech and language occurs rather than the Wada test, a procedure that injects an anesthetic into the carotid artery to put part of the brain to sleep in order to map activity.”

He notes that the study also may offer additional evidence that cell phone use and tumors of the brain, head and neck are not necessarily linked.

Since nearly 80% of people use the cell phone in their right ear, he says, a strong link would mean far more people being diagnosed with cancer on the right side of the brain, head and neck, the dominant side for cell phone use. It’s likely, he says, that any tumor development is instead “dose-dependent,” tied to the amount of cell phone use.

The study began with the simple observation that most people use their right hand to hold a cell phone to their right ear. This practice, Dr. Seidman says, is illogical since it is challenging to listen on the phone with the right ear and take notes with the right hand.

To determine if there is an association between sidedness of cell phone use and auditory or language hemispheric dominance, the Henry Ford team developed an online survey using modifications of the Edinburgh Handedness protocol, a tool used for more than 40 years to assess handedness and predict cerebral dominance.
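For illustration, the Edinburgh inventory is conventionally scored as a laterality quotient, LQ = 100 × (R − L)/(R + L), where R and L count right- and left-hand preferences across the survey items. The sketch below shows that scoring in its simplest form; the Henry Ford team used a modified version of the protocol, so this is a generic example rather than their exact instrument:

```python
# Sketch of the laterality quotient behind the Edinburgh Handedness
# Inventory: LQ = 100 * (R - L) / (R + L). Simplified illustration, not the
# modified questionnaire used in this study.

def laterality_quotient(right_count, left_count):
    """Return LQ in [-100, 100]; positive values indicate right-handedness."""
    total = right_count + left_count
    if total == 0:
        raise ValueError("no hand-preference responses")
    return 100.0 * (right_count - left_count) / total

print(laterality_quotient(9, 1))  # 80.0 -> strongly right-handed
print(laterality_quotient(2, 8))  # -60.0 -> left-handed
```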

The survey included questions about which hand was used for tasks such as writing; time spent talking on a cell phone; whether the right or left ear was used to listen to phone conversations; and whether respondents had been diagnosed with a brain or head and neck tumor.

It was distributed to 5,000 individuals who were either members of an online otology group or patients undergoing Wada testing and MRI for noninvasive localization of brain function.

On average, respondents’ cell phone usage was 540 minutes per month. The majority of respondents (90%) were right handed, 9% were left handed and 1% were ambidextrous.

Among those who are right handed, 68% reported that they hold the phone to their right ear, while 25% used the left ear and 7% used both right and left ears. For those who are left handed, 72% said they used their left ear for cell phone conversations, while 23% used their right ear and 5% had no preference.
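As a rough consistency check, the match rate between dominant hand and preferred ear can be recomputed from the quoted percentages, restricted to respondents who reported a single preferred ear (ambidextrous respondents omitted for simplicity):

```python
# Back-of-the-envelope check of the reported figures: among respondents who
# reported a single preferred ear, what fraction used the ear on the same
# side as their dominant hand? Percentages are those quoted in the article.

right_handed, left_handed = 0.90, 0.09   # share of respondents
rh_same, rh_other = 0.68, 0.25           # right-handers: right ear vs left ear
lh_same, lh_other = 0.72, 0.23           # left-handers: left ear vs right ear

matches = right_handed * rh_same + left_handed * lh_same
with_preference = (right_handed * (rh_same + rh_other)
                   + left_handed * (lh_same + lh_other))

print(round(matches / with_preference, 3))  # ~0.734, i.e. over 70% match
```

This is consistent with the study's headline figure that more than 70% of participants held the phone to the ear on their dominant-hand side.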

The study also revealed that having a hearing difference can impact ear preference for cell phone use.

In all, the study found a correlation between brain dominance and the laterality of cell phone use, with a significantly higher probability of using the ear on the same side as the dominant hand.

Studies are underway to look at tumor registry banks of patients with head, neck and brain cancer to evaluate cell phone usage. Controversy still exists around a potential association of cell phone use and tumors. Until this is fully understood, Dr. Seidman advises using hands-free modes for calls rather than holding a phone up to the side of the head.

(Original publication: “Study Examines Relationship Between Hemispheric Dominance and Cell Phone Use” JAMA Otolaryngology-Head & Neck Surgery, 2013; Michael D. Seidman et al.)

Filed under brain dominance cell phone language hemispheric dominance neuroscience science

132 notes

‘Brainbow,’ version 2.0: Researchers refine breakthrough system for producing images of brain, nervous system

The breakthrough technique that allowed scientists to obtain one-of-a-kind, colorful images of the myriad connections in the brain and nervous system is about to get a significant upgrade.

A group of Harvard researchers, led by Joshua Sanes, the Jeff C. Tarr Professor of Molecular and Cellular Biology and Paul J. Finnegan Family Director, Center for Brain Science, and Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and Santiago Ramón y Cajal Professor of Arts and Sciences, has made a host of technical improvements in the “Brainbow” imaging technique. Their work is described in a May 5 paper in Nature Methods.

First described in 2007, the system combines three fluorescent proteins — one red, one blue, and one green — to label different cells with as many as 90 colors. By studying the resulting images, researchers were able to begin to understand how the millions of neurons in the brain are connected.
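A back-of-the-envelope way to see where such large color counts come from: if each of n tandem copies of the construct in a cell independently settles on one of the three proteins, the cell's hue is determined by its (red, green, blue) count vector, and the number of distinct vectors is C(n + 2, 2). This is generic combinatorics for illustration, not the construct design described in the paper:

```python
# Why mixing three fluorescent proteins yields so many hues: each of
# n independent gene copies expresses one of three colors, so a cell's
# hue is a count vector (r, g, b) with r + g + b = n. The number of such
# vectors is the "stars and bars" count C(n + 2, 2).
from math import comb

def distinct_colors(n_copies):
    """Number of distinct (red, green, blue) count combinations."""
    return comb(n_copies + 2, 2)

for n in (4, 8, 12):
    print(n, distinct_colors(n))
# 4 copies -> 15 hues, 8 -> 45, 12 -> 91 distinct combinations
```

In practice not all combinations are optically distinguishable, which is why the paper's figure of "as many as 90 colors" is an upper-end estimate rather than a guarantee.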

“‘Brainbow’ generated beautiful images of a kind we had never been able to obtain before, but it was difficult in some ways,” said Sanes, who also serves as director of the Center for Brain Science.

“These modifications aim to overcome some of the more problematic features of the original genetic constructs,” Lichtman said. “Lead author Dawen Cai, a research associate in our labs, worked hard and creatively to find ways to make the ‘Brainbow’ colors brighter, more variable, and useable in situations where the original gene constructs were hard to implement. Our first look at these animals suggests that these improvements are fantastic.”

Among the challenges faced by researchers using the original method, Sanes said, was the chance that certain colored proteins would bleach out faster than others.

“If one color bleaches faster than the others, you start with a ‘Brainbow,’ but by the time you’re done imaging, you might just have a ‘blue-bow,’ because the red and yellow bleach too fast,” he said.

Sanes said that some colors also were too dim, causing problems in the imaging process, while in other cases the protein didn’t fill the whole neuron evenly enough, or there was an overabundance of a certain color in an image.

“What we decided to do was to make the next generation of ‘Brainbow,’” Sanes said. “We systematically set out to look at these problems. We looked at a whole range of fluorescent proteins to find the ones that were brightest and wouldn’t bleach as much, and we developed new transgenic methods to avoid the predominance of a particular color.”

The researchers also explored new ways to create “Brainbow” images, including using viruses to introduce fluorescent proteins into cells.

The advantage of the new technique, Sanes said, is that it offers researchers the chance to target certain parts of the brain and better understand how neurons radiate out to connect with other brain regions. Ultimately, he said, he hopes that other researchers are able to apply the techniques outlined in the paper in the same way that they expanded on the first “Brainbow” method.

“People adapted the method to study a number of interesting questions in other tissues to examine cellular relationships and cell lineages in kidney and skin cells,” he said. “It was also used to examine the nervous system in animals like zebrafish and C. elegans. With these new tools, I think we’ve taken the next step.”

Filed under brainbow neurons brain imaging nervous system fluorescent proteins neuroscience science

87 notes

Physicist’s tool has potential for brain mapping

A new tool being developed by UT Arlington assistant professor of physics could help scientists map and track the interactions between neurons inside different areas of the brain.

The journal Optics Letters recently published a paper by Samarendra Mohanty on the development of a fiber-optic, two-photon, optogenetic stimulator and its use on human cells in a laboratory. The tiny tool builds on Mohanty’s previous discovery that near-infrared light can be used to stimulate a light-sensitive protein introduced into living cells and neurons in the brain. This new method could show how different parts of the brain react when a linked area is stimulated.

The technology would be useful in the BRAIN mapping initiative recently championed by President Barack Obama, Mohanty said. BRAIN stands for Brain Research Through Advancing Innovative Neurotechnologies and will include $100 million in government investments in research.

“Scientists have spent a lot of time looking at the physical connections between different regions of the brain. But that information is not sufficient unless we examine how those connections function,” Mohanty said. “That’s where two-photon optogenetics comes into play. This is a tool not only to control the neuronal activity but to understand how the brain works.”

The two-photon optogenetic stimulation described in the Optics Letters paper involves introducing the gene for ChR2, a protein that responds to light, into a sample of excitable cells. A fiber-optic infrared beam of light can then be used to precisely excite the neurons in a tissue circuit.

In the brain, researchers would then observe responses in the excited area as well as other parts of the neural circuit. In living subjects, scientists would also observe the behavioral outcome, Mohanty said. 

Optogenetic stimulation avoids damage to living tissue by using light to stimulate neurons instead of electric pulses used in past research. Mohanty’s method of using low-energy near-infrared light also enables more precision and a deeper focus than the blue or green light beams often used in optogenetic stimulation, the paper said.

Using fiber optics to deliver the two-photon optogenetic beam is another advance. Previous methods required bulky microscopes or complex scanning beams. Mohanty’s group is collaborating with UT Arlington Department of Psychology assistant professor Linda Perrotti to apply this technology in living animals.

“Dr. Mohanty’s innovations continue to be recognized because of the great potential they hold,” said Pamela Jansma, dean of the UT Arlington College of Science. “Hopefully, his work will one day provide researchers in other fields the tools they need to examine how the human body works and why normal processes sometimes fail.”

(Image: Shutterstock)

Filed under brain mapping neurons optogenetic stimulator optogenetics neuroscience science

98 notes

Gene Involved in Neurodegeneration Keeps Clock Running

Northwestern University scientists have shown that a gene involved in neurodegenerative disease also plays a critical role in the proper function of the circadian clock.

In a study of the common fruit fly, the researchers found the gene, called Ataxin-2, keeps the clock responsible for sleeping and waking on a 24-hour rhythm. Without the gene, the rhythm of the fruit fly’s sleep-wake cycle is disturbed, making waking up on a regular schedule difficult for the fly.

The discovery is particularly interesting because mutations in the human Ataxin-2 gene are known to cause a rare disorder called spinocerebellar ataxia (SCA) and also contribute to amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease. People with SCA suffer from sleep abnormalities before other symptoms of the disease appear.

This study linking the Ataxin-2 gene with abnormalities in the sleep-wake cycle could help pinpoint what is causing these neurodegenerative diseases as well as provide a deeper understanding of the human sleep-wake cycle.

The findings will be published May 17 in the journal Science. Ravi Allada, M.D., professor of neurobiology in the Weinberg College of Arts and Sciences, and Chunghun Lim, a postdoctoral fellow in his lab, are authors of the paper.

Period (per) is a well-studied gene in fruit flies that encodes a protein, called PER, which regulates circadian rhythm. Allada and Lim discovered that Ataxin-2 helps activate translation of PER RNA into PER protein, a key step in making the circadian clock run properly.

“It’s possible that Ataxin-2’s function as an activator of protein translation may be central to understanding how, when you mutate the gene and disrupt its function, it may be causing or contributing to diseases such as ALS or spinocerebellar ataxia,” Allada said.

The fruit fly Drosophila melanogaster is a model organism for scientists studying the sleep-wake cycle because the fly’s genes are highly conserved with the genes of humans.

“I like to say that flies sleep similarly to humans, except flies don’t use pillows,” said Allada, who also is associate director for Northwestern’s Center for Sleep and Circadian Biology. The biological timing mechanism for all animals comes from a common ancestor hundreds of millions of years ago.

Ataxin-2 is the second gene in a little more than two years that Northwestern researchers have identified as a core gear of the circadian clock, and the two genes play similar roles.

Allada, Lim and colleagues in 2011 reported their discovery of a gene, which they dubbed “twenty-four,” that plays a role in translating the PER protein, keeping the sleep-wake cycle on a 24-hour rhythm.

Allada and Lim wanted to better understand how twenty-four works, so they looked at proteins that associate with twenty-four. They found the twenty-four protein sticking to ATAXIN-2 and decided to investigate further. In their experiments, reported in Science, Allada and Lim discovered the Ataxin-2 and twenty-four genes appear to be partners in PER protein translation.

“We’ve really started to define a pathway that regulates the circadian clock and seems to be especially important in a specific group of neurons that governs the fly’s morning wake-up,” Allada said. “We saw that the molecular and behavioral consequences of losing Ataxin-2 are nearly the same as losing twenty-four.”

As is the case in a mutation of the twenty-four gene, when the Ataxin-2 gene is not present, very little PER protein is found in the circadian pacemaker neurons of the brain, and the fly’s sleep-wake rhythm is disturbed.

Gene Involved in Neurodegeneration Keeps Clock Running

Northwestern University scientists have shown a gene involved in neurodegenerative disease also plays a critical role in the proper function of the circadian clock.

In a study of the common fruit fly, the researchers found the gene, called Ataxin-2, keeps the clock responsible for sleeping and waking on a 24-hour rhythm. Without the gene, the rhythm of the fruit fly’s sleep-wake cycle is disturbed, making waking up on a regular schedule difficult for the fly.

The discovery is particularly interesting because mutations in the human Ataxin-2 gene are known to cause a rare disorder called spinocerebellar ataxia (SCA) and also contribute to amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease. People with SCA suffer from sleep abnormalities before other symptoms of the disease appear.

This study linking the Ataxin-2 gene with abnormalities in the sleep-wake cycle could help pinpoint what is causing these neurodegenerative diseases as well as provide a deeper understanding of the human sleep-wake cycle.

The findings will be published May 17 in the journal Science. Ravi Allada, M.D., professor of neurobiology in the Weinberg College of Arts and Sciences, and Chunghun Lim, a postdoctoral fellow in his lab, are authors of the paper.

Period (per) is a well-studied gene in fruit flies that encodes a protein, called PER, which regulates circadian rhythm. Allada and Lim discovered that Ataxin-2 helps activate translation of PER RNA into PER protein, a key step in making the circadian clock run properly.

“It’s possible that Ataxin-2’s function as an activator of protein translation may be central to understanding how, when you mutate the gene and disrupt its function, it may be causing or contributing to diseases such as ALS or spinocerebellar ataxia,” Allada said.

The fruit fly Drosophila melanogaster is a model organism for scientists studying the sleep-wake cycle because the fly’s genes are highly conserved with the genes of humans.

“I like to say that flies sleep similarly to humans, except flies don’t use pillows,” said Allada, who also is associate director for Northwestern’s Center for Sleep and Circadian Biology. The biological timing mechanism for all animals comes from a common ancestor hundreds of millions of years ago.

Ataxin-2 is the second gene in a little more than two years that Northwestern researchers have identified as a core gear of the circadian clock, and the two genes play similar roles.

Allada, Lim and colleagues in 2011 reported their discovery of a gene, which they dubbed “twenty-four,” that plays a role in translating the PER protein, keeping the sleep-wake cycle on a 24-hour rhythm.

Allada and Lim wanted to better understand how twenty-four works, so they looked at proteins that associate with twenty-four. They found the twenty-four protein sticking to ATAXIN-2 and decided to investigate further. In their experiments, reported in Science, Allada and Lim discovered the Ataxin-2 and twenty-four genes appear to be partners in PER protein translation.

“We’ve really started to define a pathway that regulates the circadian clock and seems to be especially important in a specific group of neurons that governs the fly’s morning wake-up,” Allada said. “We saw that the molecular and behavioral consequences of losing Ataxin-2 are nearly the same as losing twenty-four.”

As is the case in a mutation of the twenty-four gene, when the Ataxin-2 gene is not present, very little PER protein is found in the circadian pacemaker neurons of the brain, and the fly’s sleep-wake rhythm is disturbed.

Filed under neurodegenerative diseases circadian clock fruit flies sleep-wake cycle genes neuroscience science

5,999 notes

yaleuniversity:

Yale researchers used light to probe the actions of the neurotransmitter GABA on single synapses along the branches of a neuron.
This photo shows a mouse cortical neuron in red, with dendritic branches that are studded with synaptic spines. Surrounding the neuron are inhibitory axons or fibers (in blue) that are genetically engineered to release GABA when activated by light, a technique known as optogenetics. Learn more  →


71 notes

Repeat Brain Injury Raises Soldiers’ Suicide Risk
People in the military who suffer more than one mild traumatic brain injury face a significantly higher risk of suicide, according to research by the National Center for Veterans Studies at the University of Utah.
A survey of 161 military personnel who were stationed in Iraq and evaluated for a possible traumatic brain injury – also known as TBI – showed that the risk for suicidal thoughts or behaviors increased not only in the short term, as measured during the past 12 months, but during the individual’s lifetime.
The risk of suicidal thoughts increased significantly with the number of TBIs, even when controlling for other psychological factors, the researchers say in a paper published online Wednesday, May 15 in JAMA Psychiatry, a specialty journal of the American Medical Association.
“Up to now, no one has been able to say if multiple TBIs, which are common among combat veterans, are associated with higher suicide risk or not,” says the study’s lead author, Craig J. Bryan, assistant professor of psychology at the University of Utah and associate director of the National Center for Veterans Studies. “This study suggests they are, and it provides valuable information for professionals treating wounded combat servicemen and women to help manage the risk of suicide.”
Results showed that one in five patients (21.7 percent) who had ever sustained more than one TBI reported suicidal ideation – thoughts about or preoccupation with suicide – at any time in the past. For patients who had received one TBI, 6.9 percent reported having suicidal thoughts, and zero percent for those with no TBIs. In evaluating the lifetime risk, patients were asked if they had ever experienced suicidal thoughts and behaviors up to the point they were assessed.
The increases were similar for suicidal thoughts during the previous year rather than at any time: 12 percent of those with multiple TBIs had entertained suicidal ideas during the past year, compared with 3.4 percent with one TBI and zero percent for no TBIs.
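As a quick sanity check on the numbers above, the jump from one to multiple TBIs can be expressed as a rate ratio. This is pure arithmetic on the reported percentages; no patient-level data is assumed:

```python
# Reported lifetime suicidal-ideation rates by lifetime TBI count (percent)
rates = {"0 TBIs": 0.0, "1 TBI": 6.9, "2+ TBIs": 21.7}

# Relative increase from a single TBI to multiple TBIs
ratio = rates["2+ TBIs"] / rates["1 TBI"]
print(f"Multiple TBIs carry about {ratio:.1f}x the ideation rate of a single TBI")
```

By the same arithmetic, the past-year rates (12 percent vs. 3.4 percent) give a ratio of about 3.5, so the two time windows tell a consistent story.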
In this study, suicidal ideation was used as the indicator of suicide risk because too few patients reported a suicide plan or a suicide attempt to support statistically valid conclusions.
Researchers found that multiple TBIs also were associated with a significant increase in other psychological symptoms already tied to single traumatic head injuries, including depression, post-traumatic stress disorder or PTSD, and the severity of the concussive symptoms. However, only the increase in depression severity predicted an increased suicide risk.
“That head injury and resulting psychological effects increase the risk of suicide is not new,” says Bryan. “But knowing that repetitive TBIs may make patients even more vulnerable provides new insight for attending to military personnel over the long-term, particularly when they are experiencing added emotional distress in their lives.”
How the Study was Conducted
During a six-month period in 2009, 161 patients who received a suspected brain injury while on duty in Iraq were referred to an outpatient TBI clinic at a combat support hospital there. Patients were predominantly male, with an average age of 27 and 6.5 years of military service.
Diagnosis of traumatic brain injury was made by a clinical psychologist specifically trained in the assessment, diagnosis and management of the condition. Only patients with mild or no TBI completed all assessments; patients with moderate to severe TBI were immediately evacuated from Iraq.
TBI was confirmed if at least one clinical sign newly appeared or worsened following the injury: loss of consciousness or memory, altered mental state, neurological decline or brain damage.
Patients were divided into three groups based on the total number of TBIs during their entire lives – zero, one, or two or more – the most recent of which was typically sustained within the days immediately preceding their evaluation and inclusion in the study.
Each individual was also given surveys as part of his or her evaluation and treatment. Using standard evaluation tools, patients were surveyed about their symptoms of depression, PTSD and concussions, and their suicidal thoughts and behaviors.
“An important feature of the study is that by being on the ground in Iraq, we were able to compile a unique data set on active military personnel and head injury,” Bryan says. “We collected data on a large number of service members within two days of impact.”
At the same time, because the results of this study are based on a single clinical sample – active military in a war zone within days of the injury – the researchers note that caution is advised before assuming that the results from this particular group will apply to every other group. Studies with larger sample sizes and conducted over longer periods of time will be needed.
Why TBI is of Concern for Military Personnel
As defined by the Centers for Disease Control and Prevention, a traumatic brain injury is caused by a bump, blow or jolt to the head, or a penetrating head injury that disrupts the normal function of the brain. Effects can be mild to severe. The majority of TBIs that occur each year are concussions or other mild forms.
TBI is considered a “signature injury” of the Iraq and Afghanistan conflicts and is of particular concern because of the frequency of concussive injuries from explosions and other combat-related incidents. Estimated prevalence of TBI for those deployed in these two countries ranges from 8 percent to 20 percent, according to a 2008 study.
In addition, according to studies by the RAND Corp., suicide is the second-leading cause of death among U.S. military personnel, and the rate has risen steadily since the conflicts began in Iraq and Afghanistan. Prevalence of PTSD, depression and substance abuse have risen as well, especially among those in combat, and each has been shown to increase risk for suicidal behaviors.
“Being aware of the number of a patient’s head injuries and the interrelation with depression and other psychological symptoms may help us better understand, and thus moderate, the risk of suicide over time,” Bryan says. “Ultimately, we would like to know why people do not kill themselves. Despite facing similar issues and circumstances, some people recover. Understanding that is the real goal.”


Filed under TBI brain injury head trauma PTSD suicide suicidal behavior neuroscience science

173 notes

Researchers develop novel Brain Training Device to reconnect the brain and paralyzed limb after stroke
The world’s first Brain Training Device has given new hope to stroke survivors. Developed by researchers at The Hong Kong Polytechnic University (PolyU)’s Interdisciplinary Division of Biomedical Engineering (BME), the novel device detects brainwaves and uses them to control the movement of paralyzed limbs, or even to drive a robotic hand, through a sophisticated algorithm.
The research was led by Prof. Raymond Tong Kai-yu, Professor of PolyU’s Interdisciplinary Division of Biomedical Engineering, who is also the Principal Investigator of the award-winning Exoskeleton Hand Robotic Training Device or the “Hand of Hope”. His team members include the BME research team (Newmen Ho, Xiaoling Hu, Ching-hang Fong, Xinxin Lou, Lawrence Chong and Nathan Lam) and the Industrial Centre team of PolyU (Robert Tam, Bun Yu, Shu-to Ng and Peter Pang).
The latest breakthrough “Brain Training Device” can be coupled with the “Hand of Hope” to achieve a higher degree of recovery for stroke patients. While effective motor recovery after stroke depends on an early rehabilitation program and intensive voluntary practice of the paretic limbs, current rehabilitation products have not used brainwaves to guide stroke survivors in identifying voluntary intention and relearning how to reconnect with their paralyzed limbs.
Prof. Raymond Tong and his team therefore developed the Brain Training Device with a new coherence algorithm for hand function training. The algorithm measures the frequency coherence between surface electroencephalography (EEG, brainwaves) and electromyography (EMG, muscle activity) to identify voluntary intention and the connection between brain and muscle.
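The release does not publish the team’s algorithm, but the general idea of EEG–EMG frequency coherence can be sketched with a standard Welch-style magnitude-squared coherence estimate. Everything below (the `msc` helper, signal frequencies, segment length) is illustrative, not taken from the paper:

```python
import numpy as np

def msc(x, y, fs, nperseg=256):
    """Welch-averaged magnitude-squared coherence between two signals."""
    step, win = nperseg // 2, np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        Pxx = Pxx + (X * np.conj(X)).real   # EEG auto-spectrum
        Pyy = Pyy + (Y * np.conj(Y)).real   # EMG auto-spectrum
        Pxy = Pxy + X * np.conj(Y)          # cross-spectrum
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Pxy) ** 2 / (Pxx * Pyy)

# Synthetic demo: a shared 20 Hz "voluntary intention" component buried in noise
rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 10, 1.0 / fs)
drive = np.sin(2 * np.pi * 20 * t)
eeg = drive + rng.normal(0, 1, t.size)        # noisy cortical recording
emg = 0.5 * drive + rng.normal(0, 1, t.size)  # noisy muscle recording
f, C = msc(eeg, emg, fs)
print(f"coherence near 20 Hz: {C[np.argmin(np.abs(f - 20))]:.2f}")
```

Coherence is high at the shared 20 Hz component and near zero elsewhere; a signature of that kind is what lets brain and muscle activity be linked to a common voluntary intention.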
"The Brain Training Device is able to guide the stroke patients to relearn the reconnection between the brain and the limb, with a new design on the EEG headset and the EMG forearm brace to transmit data for controlling a hand robotic system interfaced by a telecare software platform using iPad app." Prof. Raymond Tong explained.
The patented Brain Training System, which resembles a cyclist’s helmet and reads brainwaves, also has new features that locate the most informative EEG electrode positions for each individual stroke patient and reduce the number of electrodes needed, cutting both the system cost and the preparation time for brain training, Prof. Tong added.
To find a minimal set of electrodes to control the device with accuracy higher than 90%, five chronic stroke patients were recruited to be trained for 20 sessions in the study. The researchers found that, in general, 32 electrodes are needed to maintain accuracy higher than 90%.
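The release does not say how the electrode set was chosen; one common approach to this kind of channel reduction is greedy forward selection against a target accuracy. The sketch below is purely illustrative: the `accuracy` function is a toy stand-in, not the study’s classifier, which would be evaluated on recorded EEG data:

```python
import random

random.seed(1)

# Toy stand-in for per-channel informativeness (the real study would
# evaluate classifier accuracy on recorded data for each channel subset).
contrib = {ch: random.uniform(0.5, 4.0) for ch in range(64)}

def accuracy(channels):
    # Saturating toy model: more informative channels push accuracy toward 1.
    score = sum(contrib[c] for c in channels)
    return 1.0 - 0.9 * (0.97 ** score)

def greedy_select(target=0.90, pool=range(64)):
    """Add the single best remaining channel until the target accuracy is met."""
    chosen = []
    while accuracy(chosen) < target:
        best = max(set(pool) - set(chosen), key=lambda c: accuracy(chosen + [c]))
        chosen.append(best)
    return chosen

selected = greedy_select()
print(f"{len(selected)} channels reach {accuracy(selected):.3f} accuracy")
```

The design choice matters for usability: every channel dropped is one less electrode to gel and position, which is exactly the setup-time saving the PolyU team cites.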
The high accuracy and the small number of channels required make the Brain Training Device a viable tool for assistive use and rehabilitation training. The system will be made portable and easy to use in both hospital and home settings.
PolyU researchers have already filed patents for the Brain Training Device in both the United States and China. The project is funded by the HKSAR Government’s Innovation and Technology Fund (ITF). The findings on the brain-control algorithm were published as the cover story of IEEE Transactions on Neural Systems and Rehabilitation Engineering (December 2011).


Filed under brain training device stroke patients rehabilitation robotics neuroscience science

200 notes

Brain rewires itself after damage or injury
When the brain’s primary “learning center” is damaged, complex new neural circuits arise to compensate for the lost function, say life scientists from UCLA and Australia who have pinpointed the regions of the brain involved in creating those alternate pathways — often far from the damaged site.
The research, conducted by UCLA’s Michael Fanselow and Moriel Zelikowsky in collaboration with Bryce Vissel, a group leader of the neuroscience research program at Sydney’s Garvan Institute of Medical Research, appears this week in the early online edition of the journal Proceedings of the National Academy of Sciences.
The researchers found that parts of the prefrontal cortex take over when the hippocampus, the brain’s key center of learning and memory formation, is disabled. Their breakthrough discovery, the first demonstration of such neural-circuit plasticity, could potentially help scientists develop new treatments for Alzheimer’s disease, stroke and other conditions involving damage to the brain.
For the study, Fanselow and Zelikowsky conducted laboratory experiments with rats showing that the rodents were able to learn new tasks even after damage to the hippocampus. While the rats needed more training than they would have normally, they nonetheless learned from their experiences — a surprising finding.
"I expect that the brain probably has to be trained through experience," said Fanselow, a professor of psychology and member of the UCLA Brain Research Institute, who was the study’s senior author. "In this case, we gave animals a problem to solve."
After discovering the rats could, in fact, learn to solve problems, Zelikowsky, a graduate student in Fanselow’s laboratory, traveled to Australia, where she worked with Vissel to analyze the anatomy of the changes that had taken place in the rats’ brains. Their analysis identified significant functional changes in two specific regions of the prefrontal cortex.
"Interestingly, previous studies had shown that these prefrontal cortex regions also light up in the brains of Alzheimer’s patients, suggesting that similar compensatory circuits develop in people," Vissel said. "While it’s probable that the brains of Alzheimer’s sufferers are already compensating for damage, this discovery has significant potential for extending that compensation and improving the lives of many."
The hippocampus, a seahorse-shaped structure where memories are formed in the brain, plays critical roles in processing, storing and recalling information. The hippocampus is highly susceptible to damage through stroke or lack of oxygen and is critically involved in Alzheimer’s disease, Fanselow said.
"Until now, we’ve been trying to figure out how to stimulate repair within the hippocampus," he said. "Now we can see other structures stepping in and whole new brain circuits coming into being."
Zelikowsky said she found it interesting that sub-regions in the prefrontal cortex compensated in different ways, with one sub-region — the infralimbic cortex — silencing its activity and another sub-region — the prelimbic cortex — increasing its activity.
"If we’re going to harness this kind of plasticity to help stroke victims or people with Alzheimer’s," she said, "we first have to understand exactly how to differentially enhance and silence function, either behaviorally or pharmacologically. It’s clearly important not to enhance all areas. The brain works by silencing and activating different populations of neurons. To form memories, you have to filter out what’s important and what’s not."
Complex behavior always involves multiple parts of the brain communicating with one another, with one region’s message affecting how another region responds, Fanselow noted. The molecular changes underlying that communication produce our memories, feelings and actions.
"The brain is heavily interconnected — you can get from any neuron in the brain to any other neuron via about six synaptic connections," he said. "So there are many alternate pathways the brain can use, but it normally doesn’t use them unless it’s forced to. Once we understand how the brain makes these decisions, then we’re in a position to encourage pathways to take over when they need to, especially in the case of brain damage.
"Behavior creates molecular changes in the brain; if we know the molecular changes we want to bring about, then we can try to facilitate those changes to occur through behavior and drug therapy," he added. I think that’s the best alternative we have. Future treatments are not going to be all behavioral or all pharmacological, but a combination of both."

Brain rewires itself after damage or injury

When the brain’s primary “learning center” is damaged, complex new neural circuits arise to compensate for the lost function, say life scientists from UCLA and Australia who have pinpointed the regions of the brain involved in creating those alternate pathways — often far from the damaged site.

The research, conducted by UCLA’s Michael Fanselow and Moriel Zelikowsky in collaboration with Bryce Vissel, a group leader of the neuroscience research program at Sydney’s Garvan Institute of Medical Research, appears this week in the early online edition of the journal Proceedings of the National Academy of Sciences.

The researchers found that parts of the prefrontal cortex take over when the hippocampus, the brain’s key center of learning and memory formation, is disabled. Their breakthrough discovery, the first demonstration of such neural-circuit plasticity, could potentially help scientists develop new treatments for Alzheimer’s disease, stroke and other conditions involving damage to the brain.

For the study, Fanselow and Zelikowsky conducted laboratory experiments with rats showing that the rodents were able to learn new tasks even after damage to the hippocampus. While the rats needed more training than they would have normally, they nonetheless learned from their experiences — a surprising finding.

"I expect that the brain probably has to be trained through experience," said Fanselow, a professor of psychology and member of the UCLA Brain Research Institute, who was the study’s senior author. "In this case, we gave animals a problem to solve."

After discovering the rats could, in fact, learn to solve problems, Zelikowsky, a graduate student in Fanselow’s laboratory, traveled to Australia, where she worked with Vissel to analyze the anatomy of the changes that had taken place in the rats’ brains. Their analysis identified significant functional changes in two specific regions of the prefrontal cortex.

"Interestingly, previous studies had shown that these prefrontal cortex regions also light up in the brains of Alzheimer’s patients, suggesting that similar compensatory circuits develop in people," Vissel said. "While it’s probable that the brains of Alzheimer’s sufferers are already compensating for damage, this discovery has significant potential for extending that compensation and improving the lives of many."

The hippocampus, a seahorse-shaped structure in the brain where memories are formed, plays critical roles in processing, storing and recalling information. The hippocampus is highly susceptible to damage through stroke or lack of oxygen and is critically involved in Alzheimer’s disease, Fanselow said.

"Until now, we’ve been trying to figure out how to stimulate repair within the hippocampus," he said. "Now we can see other structures stepping in and whole new brain circuits coming into being."

Zelikowsky said she found it interesting that sub-regions in the prefrontal cortex compensated in different ways, with one sub-region — the infralimbic cortex — silencing its activity and another sub-region — the prelimbic cortex — increasing its activity.

"If we’re going to harness this kind of plasticity to help stroke victims or people with Alzheimer’s," she said, "we first have to understand exactly how to differentially enhance and silence function, either behaviorally or pharmacologically. It’s clearly important not to enhance all areas. The brain works by silencing and activating different populations of neurons. To form memories, you have to filter out what’s important and what’s not."

Complex behavior always involves multiple parts of the brain communicating with one another, with one region’s message affecting how another region will respond, Fanselow noted. These molecular changes produce our memories, feelings and actions.

"The brain is heavily interconnected — you can get from any neuron in the brain to any other neuron via about six synaptic connections," he said. "So there are many alternate pathways the brain can use, but it normally doesn’t use them unless it’s forced to. Once we understand how the brain makes these decisions, then we’re in a position to encourage pathways to take over when they need to, especially in the case of brain damage.

"Behavior creates molecular changes in the brain; if we know the molecular changes we want to bring about, then we can try to facilitate those changes to occur through behavior and drug therapy," he added. "I think that’s the best alternative we have. Future treatments are not going to be all behavioral or all pharmacological, but a combination of both."
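Fanselow’s "six synaptic connections" remark describes what network scientists call small-world connectivity: a handful of long-range shortcuts keeps path lengths short even in a very large network. As an illustration only (this sketch is not from the study, and all names and parameters are hypothetical), a minimal Watts-Strogatz-style construction in Python shows the effect — random rewiring of a ring lattice collapses the average hop count to single digits:

```python
import random
from collections import deque

def small_world_graph(n=1000, k=10, p=0.1, seed=42):
    """Ring lattice of n nodes, each linked to its k nearest
    neighbours, with each edge rewired to a random target with
    probability p (a Watts-Strogatz-style construction)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    # Rewire: replace some local edges with long-range shortcuts.
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj, samples=50, seed=0):
    """Mean BFS shortest-path length from a sample of start nodes."""
    rng = random.Random(seed)
    total, count = 0, 0
    for s in rng.sample(list(adj), samples):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

graph = small_world_graph()
print(avg_path_length(graph))  # a single-digit number of hops
```

The analogy to the study is loose but apt: the shortcuts exist in the lattice all along, yet the shortest routes only "matter" when the usual local paths are unavailable — much as the alternate prefrontal circuits are recruited only when the hippocampus is disabled.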

