Neuroscience

Articles and news from the latest research reports.

66 notes

‘Brain waves’ challenge area-specific view of brain activity
Our understanding of brain activity has traditionally been linked to brain areas – when we speak, the speech area of the brain is active. New research by an international team of psychologists led by David Alexander and Cees van Leeuwen (Laboratory for Perceptual Dynamics) shows that this view may be overly rigid. The entire cortex, not just the area responsible for a certain function, is activated when a given task is initiated. Furthermore, activity occurs in a pattern: waves of activity roll from one side of the brain to the other.
The brain can be studied on various scales, researcher David Alexander explains: “You have the neurons, the circuits between the neurons, the Brodmann areas – brain areas that correspond to a certain function – and the entire cortex. Traditionally, scientists looked at local activity when studying brain activity, for example, activity in the Brodmann areas. To do this, you take EEGs (electroencephalograms) to measure the brain’s electrical activity while a subject performs a task, and then you try to trace that activity back to one or more brain areas.”
Activity waves
In this study, the psychologists explore uncharted territory: “We are examining the activity in the cerebral cortex as a whole. The brain is a non-stop, always-active system. When we perceive something, the information does not end up in a specific part of our brain. Rather, it is added to the brain’s existing activity. If we measure the electrochemical activity of the whole cortex, we find wave-like patterns. This shows that brain activity is not local but rather that activity constantly moves from one part of the brain to another. The local activity in the Brodmann areas only appears when you average over many such waves.”
Each activity wave in the cerebral cortex is unique. “When someone repeats the same action, such as drumming their fingers, the motor centre in the brain is stimulated. But with each individual action, you still get a different wave across the cortex as a whole. Perhaps the person was more engaged in the action the first time than he was the second time, or perhaps he had something else on his mind or had a different intention for the action. The direction of the waves is also meaningful. It is already clear, for example, that activity waves related to orienting move differently in children – more prominently from back to front – than in adults. With further research, we hope to unravel what these different wave trajectories mean.”

Filed under brain brain activity activity waves EEG cerebral cortex neuroscience psychology science

69 notes

Researchers image most of vertebrate brain at single-cell level
Misha Ahrens and Philipp Keller, researchers with the Howard Hughes Medical Institute, have succeeded in making a near real-time video of most of a zebrafish’s brain showing individual neurons firing. To create the video, as the team reports in their paper published in the journal Nature Methods, the two developed a type of modified light-sheet microscopy and used it on genetically modified fish.
To create the video, the researchers turned to zebrafish in their larval state—their brains are transparent and small. To make firing neurons visible, they genetically altered the fish, giving them a protein that glows in response to changes in calcium ion levels, which occur when nerve cells fire. Next, they used a microscope that projects a sheet of light through the fish’s brain, allowing the firing neurons to be detected. The system recorded images every 1.3 seconds. The final step was stitching the images together to create a video. The result is nothing short of breathtaking—looking like something out of a science fiction movie’s special effects department.
The video marks the first visual capture of most of a living vertebrate brain at the level of individual neurons, working in near real-time, and it offers striking evidence of the complexity of the brain—even one with as few as 100,000 neurons. The researchers say their video shows approximately 80 percent of the zebrafish’s brain as it operates, though what all that firing activity represents is still unknown.
The researchers are careful to point out that what they’ve accomplished does not portend the creation of a video of a human brain in action—our brains are much larger, have billions more neurons and perhaps more importantly, are not transparent and are covered by a thick skull. Instead they suggest that studying a simpler brain in action might help to explain how biological neural networks actually work, perhaps leading to theories that can be generalized over larger animals.
But before that can happen, the procedure the team has developed needs to be improved: neurons can fire hundreds of times per second, so much of the firing is missed in the current video. Capturing at a faster rate, however, would generate nearly unmanageable amounts of data; at the current rate, just one hour of capture creates a terabyte. Thus a new way to store and process the data must be developed.
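A quick back-of-the-envelope check makes the scale of the data problem concrete, using only the figures quoted above (one frame every 1.3 seconds, roughly a terabyte per hour of capture):

```python
# Sanity-check the reported data figures: one frame every 1.3 s and
# ~1 TB per hour of capture imply frames in the hundreds of megabytes.
FRAME_INTERVAL_S = 1.3
CAPTURE_S = 3600          # one hour
DATA_PER_HOUR_B = 1e12    # ~1 TB, as reported

frames_per_hour = CAPTURE_S / FRAME_INTERVAL_S
bytes_per_frame = DATA_PER_HOUR_B / frames_per_hour

print(round(frames_per_hour))        # ~2769 frames per hour
print(round(bytes_per_frame / 1e6))  # ~361 MB per frame
```

Even at this slow frame rate the per-frame volume is enormous, which is why capturing fast enough to catch every spike would quickly outrun any ordinary storage pipeline.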

Filed under zebrafish neuronal activity nerve cells neurons brain function neuroscience science

51 notes

Fetal exposure to antiepileptic drug valproate impairs cognitive development

The effects of antiepileptic drugs during pregnancy have long been a concern of clinicians and women of childbearing age whose seizures can only be controlled by medications. In 1999, a study called the Neurodevelopmental Effects of Antiepileptic Drugs (NEAD) began following the children of women who were taking a single antiepileptic agent during pregnancy. The drugs included carbamazepine, lamotrigine, phenytoin or valproate.

Recently released final data from NEAD show that at age 6, IQ is 7 to 10 points lower in children exposed in utero to the antiepileptic drug valproate (Depakote) than in children exposed to the other medications. The children exposed to valproate also did poorly on measures of verbal and memory abilities and of non-verbal and executive functions. The results were reported online in Lancet Neurology on January 23, 2013.

"Data published at ages 3 and 4.5 showed similar results in cognitive impairment," says lead study author Kimford Meador, MD, professor of neurology at Emory University School of Medicine. "Age 6 IQ was our primary outcome goal because it is standardized and predictive of school performance."

The NEAD study is the largest prospective study examining the cognitive effects of fetal antiepileptic drug exposure. The researchers monitored women through pregnancy and followed their children, performing cognitive testing at ages 2, 3, 4.5 and, finally, 6. In addition to the effect on cognitive function, earlier data from NEAD showed an increase in the risk of anatomical birth defects.

Valproate is an anticonvulsant used in the treatment of epilepsy, migraines and bipolar disorder, and is particularly effective in the treatment of primary generalized seizures.  Except for a small number of women who only respond to valproate, there are alternative medications.

"These findings consistently show a substantial loss of developmental abilities for these children," says Meador. "Women of childbearing age who have epilepsy should talk with their doctors about their options, and possibly test the safer medications prior to pregnancy to find out if they work."

In order to avoid seizures with potentially serious consequences, Meador emphasizes that women who are already pregnant and taking valproate should not stop without consulting their physicians.

"For a woman who has significant seizures, the risk from the seizure itself is worse than the risk of taking the drugs," he points out.  "The number one reason for miscarriage late in pregnancy for women with epilepsy is trauma resulting from a seizure."

Meador will co-lead a follow-up study with Page Pennell, MD, from Harvard. The new study funded by the National Institutes of Health is called Maternal Outcomes and Neurodevelopmental Effects of Antiepileptic Drugs (MONEAD), and will investigate the risks of these same drugs to both the mother and the child. The study will be conducted at 19 sites, enrolling 350 women with epilepsy during pregnancy. An additional 100 women with epilepsy who are not pregnant, and 100 healthy pregnant women will serve as controls.

(Source: news.emory.edu)

Filed under antiepileptic drugs cognitive impairment drug exposure pregnancy neuroscience science

34 notes

Transistor in the fly antenna

Highly developed antennae containing different types of olfactory receptors allow insects to use minute amounts of odours for orientation towards resources such as food, oviposition sites or mates. Scientists at the Max Planck Institute for Chemical Ecology in Jena, Germany, have now used mutant flies to provide the first experimental proof that the extremely sensitive olfactory system of fruit flies is based on self-regulation of odorant receptors. Fruit flies can detect a few thousand odour molecules per millilitre of air, whereas humans need hundreds of millions. Even a number of molecules below the response threshold is sufficient to amplify the sensitivity of the receptors, and the binding of further molecules shortly afterwards triggers the opening of an ion channel that controls the fly’s reaction and flight behaviour. In other words, a below-threshold odour stimulus increases the sensitivity of the receptor, and if a second odour pulse arrives within a certain time span, a neural response is elicited.

It is amazing how many fruit flies (Drosophila melanogaster) find their way to a rotting apple. It is known that insects are able to detect the slightest concentrations of odour molecules, especially pheromones, but also “food signals”.

Dieter Wicher, Shannon Olsson, Bill Hansson and their colleagues at the Max Planck Institute for Chemical Ecology were looking for answers to the question why insects can trace odour molecules so easily and at such low concentrations in comparison to other animals. They focused their attention on odorant receptor proteins in the antenna, the insects’ nose. These insect proteins are pretty young from an evolutionary perspective and their molecular constituents may be the basis for the insects’ highly sensitive sense of smell.

Insect odorant receptors form a receptor system that consists of the actual receptor protein and an ion channel. After binding of an odour molecule, receptor protein and ion channel trigger the neural electrical response. This mechanism was recently described in the receptor system Or22a-Orco. Apart from functioning as so-called ionotropic receptors, which enable ion flow through membranes after binding of odour molecules, odorant receptors also elicit intracellular signals. These stimulate the formation of cyclic adenosine monophosphate (cyclic AMP or cAMP), which activates an ion flow through the co-receptor Orco. The role and relevance of this weak and slow electrical current, however, was until now unclear.

Merid N. Getahun, a PhD student from Ethiopia, and his colleagues have conducted numerous experiments on Drosophila olfactory neurons. They injected tiny amounts of compounds that stimulate, inhibit or imitate cAMP formation directly into the sensory hairs housing olfactory sensory neurons on the fly antenna. The researchers tested the flies’ responses to ethyl butyrate, which has a fruity odour similar to pineapple, and measured activity in the sensory neurons by using glass microelectrodes. As a control, they used genetically modified fruit flies where the co-receptor Orco had been inactivated. “The fact that these mutants are no more able to respond to cAMP or the inhibition/activation of the involved key enzymes, such as protein kinase C and phospholipase C, shows that the highly sensitive olfactory system in insects is regulated intracellularly by their own odorant receptors,” says Dieter Wicher, the leader of the research group.

The combination of odorant receptor and co-receptor Orco can be compared to a transistor, Wicher continues: A weak basic current is sufficient to release the main electric current that activates the neuron. The process can also be seen as a short-term memory situated in the insect nose. A very weak stimulus does not elicit a response when it first occurs, but if it reoccurs within a certain time span it will release the electrical response according to the principle “one time is no time, but two is a bunch.”
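The transistor analogy can be made concrete with a toy priming model. This is a sketch under invented parameters (window length, gain boost and threshold below are all hypothetical), not the authors' biophysical model: a weak pulse alone does nothing, but it briefly raises receptor gain, so a second weak pulse arriving inside that window crosses threshold and fires the neuron.

```python
# Toy "transistor" model of receptor sensitization (all constants are
# invented for illustration, not measured values from the study).
SENSITIZE_WINDOW = 2.0   # seconds the sensitivity boost lasts
GAIN_BOOST = 3.0         # gain multiplier after a priming pulse
THRESHOLD = 1.0          # response threshold of the neuron

def responds(pulse_times, amplitude=0.5):
    """Return True if any sub-threshold odour pulse elicits a response."""
    last_pulse = None
    for t in pulse_times:
        primed = last_pulse is not None and t - last_pulse <= SENSITIZE_WINDOW
        gain = GAIN_BOOST if primed else 1.0
        if amplitude * gain >= THRESHOLD:
            return True
        last_pulse = t
    return False

print(responds([0.0]))       # False: a single weak pulse is ignored
print(responds([0.0, 1.0]))  # True: second pulse inside the window fires
print(responds([0.0, 5.0]))  # False: sensitization has already decayed
```

The middle case is the "one time is no time, but two is a bunch" behaviour: the first pulse acts like a transistor's weak base current, gating whether the second pulse releases the main response.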

Filed under fruit flies olfactory system ion channels odor stimulation receptors neuroscience science

103 notes

Stem Cell Research Could Expand Clinical Use of Regenerative Human Cells 
Research led by a biology professor in the School of Science at IUPUI has uncovered a method to produce retinal cells from regenerative human stem cells without the use of animal products, proteins or other foreign substances, which historically have limited the application of stem cells to treat disease and other human developmental disorders.
The study of human induced pluripotent stem cells (hiPSCs) has been pursued vigorously since they were first discovered in 2007 due to their ability to be manipulated into specific cell types. Scientists believe these cells hold considerable potential for cell replacement, disease modeling and pharmacological testing. However, clinical applications have been hindered by the fact that, to date, the cells have required animal products and proteins to grow and differentiate.
A research team led by Jason S. Meyer, Ph.D., assistant professor of biology, successfully differentiated hiPSCs in a lab environment—completely through chemical methods—to form neural retinal cell types (including photoreceptors and retinal ganglion cells). Tests have shown the cells function and grow just as efficiently as those cells produced through traditional methods.
“Not only were we able to develop these (hiPSC) cells into retinal cells, but we were able to do so in a system devoid of any animal cells and proteins,” Meyer said. “Since these kinds of stem cells can be generated from a patient’s own cells, there will be nothing the body will recognize as foreign.”
In addition, this research should allow scientists to better reproduce these cells because they know exactly what components were included to spur growth and minimize or eliminate any variations, Meyer said. Furthermore, the cells function in a very similar fashion to human embryonic stem cells, but without controversial or immune rejection issues because they are derived from individual patients.
“This method could have a considerable impact on the treatment of retinal diseases such as age-related macular degeneration and forms of blindness with hereditary factors,” Meyer said. “We hope this will help us understand what goes wrong when diseases arise and that we can use this method as a platform for the development of new treatments or drug therapies.”
“We’re talking about bringing stem cells a significant step closer to clinical use,” Meyer added.
The research will be published in the April edition of Stem Cells Translational Medicine.

Filed under embryonic stem cells stem cells retinal ganglion cells hiPSCs retinal diseases medicine neuroscience science

42 notes

Peptides helping researchers in search for Parkinson’s disease treatment
Australian researchers have taken the first step in using bioactive peptides as the building blocks to help ‘build a new brain’ to treat degenerative brain disease.
Deakin University biomedical scientist Dr Richard Williams is working in a team with Dr David Nisbet from the Australian National University and Dr Clare Parish at the Florey Neuroscience Institute to develop a way to repair the damaged parts of the brain that cause Parkinson’s disease.
Parkinson’s disease develops when the brain cells (neurons) that produce the chemical dopamine die or are damaged. Dopamine is a chemical messenger that helps the brain transmit the signals that control muscles and movement. When these cells die or are damaged, the result is the shaking and muscle stiffness that are among the common symptoms of the disease.
"We are looking at a way of helping the brain to regenerate the dead or damaged cells that transport dopamine throughout the body," Dr Williams said. "Peptides help the body heal itself, providing many positive benefits for health, particularly in regenerative medicine; this is why the sports people were using them to recover more quickly in the current doping scandal."
Peptides are both the building blocks and the messengers of the body; the team has used them to mimic the normal brain environment and provide the chemical signals needed to help the brain function.
"Peptides stick together like Lego blocks, so in the first stage of the project we have been able to make a three dimensional material or tissue scaffold that provides the networks cells need to grow; but the peptides also carry instructions in the form of chemical signals which tell the cells to grow into new neurons," Dr Williams explained.
"Importantly, this material has the same consistency as the brain, does not cause chronic inflammation and is non-toxic to the body.
"Our aim is to use this scaffold material to support the patient’s own stem cells that could be turned into dopamine neurons and implanted back into the brain. We expect that when implanted the material and stem cells would be accepted by the brain as normal tissue and grow to replace the damaged or dead cells."
While the research is not yet complete, Dr Williams is excited by the possibilities this work offers to the treatment of degenerative conditions.
"It is no secret that we are living longer, and with this we are seeing an increase in many conditions that come about because of ageing, such as Parkinson’s. By developing biomaterials like the ones we are working on, it could be possible to help the body regenerate and provide an improved quality of life to the older members of our community," he said.
"This work can also be adapted to other parts of the body which struggle to repair themselves, such as new cartilage for joints, muscle and heart cells, bones and teeth. Ultimately, it will be like taking your car to the garage to have new parts fitted to replace the worn out ones."
The results of the first stage of this Australian Research Council funded project will be published in the international journal Soft Matter.

Filed under parkinson's disease degenerative diseases peptides brain cells dopamine neuroscience science

62 notes

Wireless, implanted sensor broadens range of brain research
A compact, self-contained sensor recorded and transmitted brain activity data wirelessly for more than a year in early stage animal tests, according to a study funded by the National Institutes of Health. In addition to allowing for more natural studies of brain activity in moving subjects, this implantable device represents a potential major step toward cord-free control of advanced prosthetics that move with the power of thought. The report is in the April 2013 issue of the Journal of Neural Engineering.
“For people who have sustained paralysis or limb amputation, rehabilitation can be slow and frustrating because they have to learn a new way of doing things that the rest of us do without actively thinking about it,” said Grace Peng, Ph.D., who oversees the Rehabilitation Engineering Program of the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH. “Brain-computer interfaces harness existing brain circuitry, which may offer a more intuitive rehab experience, and ultimately, a better quality of life for people who have already faced serious challenges.”
Recent advances in brain-computer interfaces (BCI) have shown that it is possible for a person to control a robotic arm through implanted brain sensors linked to powerful external computers. However, such devices have relied on wired connections, which pose infection risks and restrict movement, or were wireless but had very limited computing power.
Building on this line of research, David Borton, Ph.D., and Ming Yin, Ph.D., of Brown University, Providence, R.I., and colleagues surmounted several major barriers in developing their sensor. To be fully implantable within the brain, the device needed to be very small and completely sealed off to protect the delicate machinery inside the device and the even more delicate tissue surrounding it. At the same time, it had to be powerful enough to convert the brain’s subtle electrical activity into digital signals that could be used by a computer, and then boost those signals to a level that could be detected by a wireless receiver located some distance outside the body. Like all cordless machines, the device had to be rechargeable, but in the case of an implanted brain sensor, recharging must also be done wirelessly.
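To see why the power and bandwidth constraints are so demanding, it helps to roughly size the wireless link such a sensor needs. The article gives no channel counts or sample rates; the figures below are assumptions, chosen only because intracortical arrays commonly have on the order of a hundred electrodes sampled at tens of kilohertz:

```python
# Rough sizing of the raw neural data rate an implanted sensor must
# digitize and transmit. All three figures are assumptions for
# illustration, not values reported in the study.
N_CHANNELS = 96          # assumed electrode count
SAMPLE_RATE_HZ = 20_000  # assumed sampling rate per channel
BITS_PER_SAMPLE = 12     # assumed ADC resolution

raw_mbps = N_CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e6
print(round(raw_mbps, 1))  # ~23.0 Mbit/s of raw neural data
```

Sustaining tens of megabits per second from a sealed, battery-powered implant, while keeping the electronics cool enough for surrounding tissue, is what forces the low-power design choices the researchers describe.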
The researchers consulted with brain surgeons on the shape and size of the sensor, which they built out of titanium, commonly used in joint replacements and other medical implants. They also fitted the device with a window made of sapphire, which electromagnetic signals pass through more easily than other materials, to assist with wireless transmission and inductive charging, a method of recharging also used in electronic toothbrushes. Inside, the device was densely packed with the electronics specifically designed to function on low power to reduce the amount of heat generated by the device and to extend the time it could work on battery power.
Testing the device in animal models — two pigs and two rhesus macaques — the researchers were able to receive and record data from the implanted sensors in real time over a broadband wireless connection. The sensors could transmit signals more than three feet and have continued to perform for over a year with little degradation in quality or performance.
The ability to remotely record brain activity data as an animal interacts naturally with its environment may help inform studies on muscle control and the movement-related brain circuits, the researchers say. While testing of the current devices continues, the researchers plan to refine the sensor for better heat management and data transmission, with use in human medical care as the goal.
“Clinical applications may include thought-controlled prostheses for severely neurologically impaired patients, wireless access to motorized wheelchairs or other assistive technologies, and diagnostic monitoring such as in epilepsy, where patients currently are tethered to the bedside during assessment,” said Borton.

Wireless, implanted sensor broadens range of brain research

A compact, self-contained sensor recorded and transmitted brain activity data wirelessly for more than a year in early-stage animal tests, according to a study funded by the National Institutes of Health. In addition to allowing for more natural studies of brain activity in moving subjects, this implantable device represents a potential major step toward cord-free control of advanced prosthetics that move with the power of thought. The report is in the April 2013 issue of the Journal of Neural Engineering.

“For people who have sustained paralysis or limb amputation, rehabilitation can be slow and frustrating because they have to learn a new way of doing things that the rest of us do without actively thinking about it,” said Grace Peng, Ph.D., who oversees the Rehabilitation Engineering Program of the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH. “Brain-computer interfaces harness existing brain circuitry, which may offer a more intuitive rehab experience, and ultimately, a better quality of life for people who have already faced serious challenges.”

Recent advances in brain-computer interfaces (BCIs) have shown that a person can control a robotic arm through implanted brain sensors linked to powerful external computers. Until now, however, such devices have either relied on wired connections, which pose infection risks and restrict movement, or been wireless but with very limited computing power.

Building on this line of research, David Borton, Ph.D., and Ming Yin, Ph.D., of Brown University, Providence, R.I., and colleagues surmounted several major barriers in developing their sensor. To be fully implantable within the brain, the device needed to be very small and completely sealed off to protect the delicate machinery inside the device and the even more delicate tissue surrounding it. At the same time, it had to be powerful enough to convert the brain’s subtle electrical activity into digital signals that could be used by a computer, and then boost those signals to a level that could be detected by a wireless receiver located some distance outside the body. Like all cordless machines, the device had to be rechargeable, but in the case of an implanted brain sensor, recharging must also be done wirelessly.
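The signal chain described above (pick up faint electrical activity, convert it to digital form, and package it for a wireless receiver) can be sketched in a few lines of code. All numbers below, including the gain, ADC resolution, and packet size, are invented for illustration and are not the actual design of the Brown device:

```python
# Toy sketch of an implant-style signal chain: amplify -> digitize -> packetize.
# Every number here (gain, ADC resolution, packet size) is invented for
# illustration and is not taken from the study.

GAIN = 1000     # amplify microvolt-scale neural signals up to ADC range
ADC_BITS = 12   # resolution of a hypothetical analog-to-digital converter
V_REF = 3.3     # ADC reference voltage, in volts

def amplify(samples_uv):
    """Convert raw samples in microvolts to amplified voltages."""
    return [s * 1e-6 * GAIN for s in samples_uv]

def digitize(samples_v):
    """Quantize amplified voltages to ADC codes, clamped to the valid range."""
    max_code = (1 << ADC_BITS) - 1
    return [max(0, min(max_code, round(v / V_REF * max_code))) for v in samples_v]

def packetize(codes, payload_len=4):
    """Group ADC codes into fixed-size payloads for radio transmission."""
    return [codes[i:i + payload_len] for i in range(0, len(codes), payload_len)]

raw_uv = [50, -20, 300, 1200, 80, 10, 5, 700]   # fake neural samples, microvolts
packets = packetize(digitize(amplify(raw_uv)))
```

A real implant performs these steps in dedicated low-power hardware rather than software, which is part of why heat and battery management were central design concerns.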

The researchers consulted with brain surgeons on the shape and size of the sensor, which they built out of titanium, a metal commonly used in joint replacements and other medical implants. They also fitted the device with a sapphire window, which electromagnetic signals pass through more easily than most materials, to assist with wireless transmission and inductive charging, a recharging method also used in electric toothbrushes. Inside, the device was densely packed with electronics designed to run on low power, both to reduce the heat the device generates and to extend the time it can work on battery power.

Testing the device in animal models — two pigs and two rhesus macaques — the researchers were able to receive and record data from the implanted sensors in real time over a broadband wireless connection. The sensors could transmit signals more than three feet and have continued to perform for over a year with little degradation in quality or performance.

The ability to remotely record brain activity data as an animal interacts naturally with its environment may help inform studies of muscle control and movement-related brain circuits, the researchers say. While testing of the current devices continues, the researchers plan to refine the sensor for better heat management and data transmission, with use in human medical care as the goal.

“Clinical applications may include thought-controlled prostheses for severely neurologically impaired patients, wireless access to motorized wheelchairs or other assistive technologies, and diagnostic monitoring such as in epilepsy, where patients currently are tethered to the bedside during assessment,” said Borton.

Filed under brain activity implants prosthetics limb amputation BCI animal model neuroscience science


Face of the future rears its head

Meet Zoe: a digital talking head which can express human emotions on demand with “unprecedented realism” and could herald a new era of human-computer interaction.

A virtual “talking head” which can express a full range of human emotions and could be used as a digital personal assistant, or to replace texting with “face messaging”, has been developed by researchers.

The lifelike face can display emotions such as happiness, anger, and fear, and changes its voice to suit any feeling the user wants it to simulate. Users can type in any message, specifying the requisite emotion as well, and the face recites the text. According to its designers, it is the most expressive controllable avatar ever created, replicating human emotions with unprecedented realism.

The system, called “Zoe”, is the result of a collaboration between researchers at Toshiba’s Cambridge Research Lab and the University of Cambridge’s Department of Engineering. Students have already spotted a striking resemblance between the disembodied head and Holly, the ship’s computer in the British sci-fi comedy, Red Dwarf.

Appropriately enough, the face is actually that of Zoe Lister, an actress perhaps best-known as Zoe Carpenter in the Channel 4 series, Hollyoaks. To recreate her face and voice, researchers spent several days recording Zoe’s speech and facial expressions. The result is a system that is light enough to work in mobile technology, and could be used as a personal assistant in smartphones, or to “face message” friends.

The framework behind “Zoe” is also a template that, before long, could enable people to upload their own faces and voices, but in a matter of seconds rather than days. That means that in the future, users will be able to customise and personalise their own, emotionally realistic, digital assistants.

If this can be developed, then a user could, for example, text the message “I’m going to be late” and ask it to set the emotion to “frustrated”. Their friend would then receive a “face message” that looked like the sender, repeating the message in a frustrated way.
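To make the "face message" idea concrete, here is one hypothetical way such a request could be structured in code. The field names and emotion list are invented; Toshiba and Cambridge have not published a public interface for Zoe:

```python
# Hypothetical sketch of a "face message": text plus a requested emotion.
# The field names and emotion list are invented for illustration; the Zoe
# system's real interface has not been published.

SUPPORTED_EMOTIONS = {"happy", "sad", "angry", "afraid", "neutral", "frustrated"}

def make_face_message(sender, text, emotion="neutral"):
    """Bundle text and a requested emotion into a message for an avatar renderer."""
    if emotion not in SUPPORTED_EMOTIONS:
        raise ValueError(f"unsupported emotion: {emotion}")
    return {"sender": sender, "text": text, "emotion": emotion}

# The article's example: a late-running friend sends a frustrated face message.
msg = make_face_message("alice", "I'm going to be late", emotion="frustrated")
```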

The team who created Zoe are currently looking for applications, and are also working with a school for autistic and deaf children, where the technology could be used to help pupils to “read” emotions and lip-read. Ultimately, the system could have multiple uses – including in gaming, in audio-visual books, as a means of delivering online lectures, and in other user interfaces.

“This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being,” Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge, said.

Filed under human-computer interaction talking head emotions emotional combinations technology neuroscience science


Study indicates reverse impulses clear useless information, prime brain for learning

When the mind is at rest, the electrical signals by which brain cells communicate appear to travel in reverse, wiping out unimportant information in the process, but sensitizing the cells for future sensory learning, according to a study of rats conducted by researchers at the National Institutes of Health.

The finding has implications not only for studies seeking to help people learn more efficiently, but also for attempts to understand and treat post-traumatic stress disorder—in which the mind has difficulty moving beyond a disturbing experience.

During waking hours, brain cells, or neurons, communicate via high-speed electrical signals that travel the length of the cell. These communications are the foundation for learning. As learning progresses, these signals travel across groups of neurons with increasing rapidity, forming circuits that work together to recall a memory.

It was previously known that, during sleep, these impulses were reversed, arising from waves of electrical activity originating deep within the brain. In the current study, the researchers found that these reverse signals weakened circuits formed during waking hours, apparently so that unimportant information could be erased from the brain. But the reverse signals also appeared to prime the brain to relearn at least some of the forgotten information. If the animals encountered the same information upon awakening, the circuits re-formed much more rapidly than when they originally encountered the information.

"The brain doesn’t store all the information it encounters, so there must be a mechanism for discarding what isn’t important," said senior author R. Douglas Fields, Ph.D., head of the Section on Nervous System Development and Plasticity at the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute where the research was conducted. "These reverse brain signals appear to be the mechanism by which the brain clears itself of unimportant information."

Their findings appear in the Proceedings of the National Academy of Sciences.

The researchers studied the activity of rats’ brain cells from the hippocampus, a tube-like structure deep in the brain. The hippocampus relays information to and from many other regions of the brain. It plays an important role in memory, orientation, and navigation.

The classic understanding of brain cell activity is that electrical signals travel from dendrites—antenna-like projections at one end of the cell—through the cell body. From the cell body, they then travel the length of the axon, a single long projection at the other end of the cell. This electrical signal stimulates the release of chemicals at the end of the axon, which bind to dendrites on adjacent cells, stimulating these recipient cells to fire electrical signals, and so on. When groups of cells repeatedly fire in this way, the electrical signals increase in intensity.

Dr. Bukalo, a researcher on the study, and her team examined electrical signals that traveled in reverse—from the cell’s axon, to the cell body, and out its many dendrites. This reverse firing happens during sleep and at rest, and appears to reset the cell, the researchers found.

After first stimulating the cells with reverse electrical impulses, the researchers next stimulated the dendrites again with electrical impulses traveling in the forward direction. In response, the neurons generated a stronger signal, with the connections appearing to strengthen with repeated electrical stimulation.

This pattern appears to underlie the formation of new memories. A connection that is reset but never stimulated again may simply fade from use over time, Dr. Bukalo explained. But when a cell is stimulated again, it fires a stronger signal and may be more easily synchronized to the reinforced signals of other brain cells, all of which act in concert over time.
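The reset-and-relearn dynamic the researchers describe can be caricatured in a toy model: forward stimulation strengthens a connection, a reverse impulse resets it to baseline while leaving it "primed", and relearning then proceeds faster. The numbers below are arbitrary and the model is a deliberate oversimplification:

```python
# Toy caricature of "reset and relearn": forward stimulation strengthens a
# synaptic weight; a reverse impulse resets it to baseline but leaves the
# synapse primed to learn faster. All numbers are arbitrary.

BASELINE, THRESHOLD = 1.0, 2.0

def steps_to_threshold(weight, rate):
    """Count forward stimulations needed to push the weight past threshold."""
    steps = 0
    while weight < THRESHOLD:
        weight *= rate
        steps += 1
    return steps

# Initial learning: a naive synapse potentiates slowly.
naive_steps = steps_to_threshold(BASELINE, rate=1.1)

# After a reverse impulse at rest: the weight is back at baseline, but the
# primed synapse responds more strongly to each repeat stimulation.
relearn_steps = steps_to_threshold(BASELINE, rate=1.3)
```

In this sketch the primed synapse crosses the threshold in far fewer stimulations than the naive one, mirroring the finding that circuits re-formed much more rapidly when the animals re-encountered the same information.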

Filed under brain cells PTSD memory learning hippocampus memory formation neuroscience science


Brain-mapping increases understanding of alcohol’s effects on college freshmen

A research team that includes several Penn State scientists has completed a first-of-its-kind longitudinal pilot study aimed at better understanding how the neural processes that underlie responses to alcohol-related cues change during students’ first year of college.

Anecdotal evidence abounds of the negative social and physical effects of the dramatic increase in alcohol use that often accompanies students’ first year of college. The behavioral changes that accompany those effects point to underlying changes in the brain. Yet in contrast to alcohol’s many other effects, its effect on the brain’s continuing development from adolescence into early adulthood — a period that includes the transition from high school to college — is not well understood.

Penn State psychology graduate student Adriene Beltz, with a team of additional researchers, investigated the changes that occurred to alcohol-related neural processes in the brains of a small group of first-year students.

Using functional magnetic resonance imaging (fMRI) and a data analysis technique known as effective connectivity mapping, the researchers collected and analyzed data from 11 students, who participated in a series of three fMRI sessions beginning just before the start of classes and concluding part-way through the second semester.

"We wanted to know if and how brain responses to alcohol cues — pictures of alcoholic beverages in this case — changed across the first year of college," said Beltz, "and how these potential changes related to alcohol use. Moreover, we wanted our analysis approach to take advantage of the richness of fMRI data."

Analysis of the data collected from the study participants revealed signs in their brains’ emotion processing networks of habituation to alcohol-related stimuli, and noticeable alterations in their cognitive control networks.

Recent studies have indicated that young adults’ cognitive development continues into the mid-20s, particularly in the brain regions responsible for decision-making and judgment — the sort of cognitive “fine tuning” that potentially shapes who we are (and will be) as much as any other stage of our development.

Other recent studies suggest that binge drinking during late adolescence may damage the brain in ways that could last into adulthood.

Beltz’s study indicates that connections among brain regions involved in emotion processing and cognitive control may change with increased exposure to alcohol and alcohol-related cues. Those connections also may influence other parts of the brain, such as those still-developing regions responsible for students’ decision-making and judgment abilities.

"The brain is a complex network," Beltz said. "We know that connections among different brain regions are important for behavior, and we know that many of these connections are still developing into early adulthood. Thus, alcohol could have far-reaching consequences on a maturing brain, directly influencing some brain regions and indirectly influencing others by disrupting neural connectivity."

While in an fMRI scanner at the Penn State Social, Life and Engineering Sciences Imaging Center, students participating in the study completed a task: responding as quickly as possible, by pressing a button on a grip device, to an image of either an alcoholic beverage or a non-alcoholic beverage when both types of images were displayed consecutively on a screen. From the resulting data, effective connectivity maps were created for each individual and for the group.
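As a simplified stand-in for the kind of analysis involved, the sketch below scores the coupling between two regions' activity traces with a Pearson correlation. This is only illustrative: the study used effective connectivity mapping, a directed technique, whereas plain correlation is undirected, and the time series below are invented:

```python
# Simplified stand-in for mapping coupling between brain regions: score two
# activity time series with a Pearson correlation. (The study itself used
# effective connectivity mapping, a *directed* technique; plain correlation is
# undirected and is shown only to illustrate quantifying region-to-region
# coupling. The time series below are invented.)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

region_a = [0.1, 0.4, 0.35, 0.8, 0.55, 0.9]    # fake BOLD-like trace
region_b = [0.12, 0.38, 0.4, 0.75, 0.6, 0.85]  # tracks region_a closely
region_c = [0.9, 0.2, 0.7, 0.1, 0.6, 0.3]      # unrelated fluctuation

strong = pearson(region_a, region_b)
weak = pearson(region_a, region_c)
```

A full connectivity map repeats this kind of pairwise scoring across many regions and, in directed methods, additionally models which region drives which.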

Examining the final maps, the researchers found that brain regions involved in emotion-processing showed less connectivity when the students responded to alcohol cues than when they responded to non-alcohol cues, and that brain regions involved in cognitive control showed the most connectivity during the first semester of college. The findings suggest that the students needed to heavily recruit brain regions involved in cognitive control in order to overcome the alcohol-associated stimuli when instructed to respond to the non-alcohol cues.

"Connectivity among brain regions implicated in cognitive control spiked from the summer before college to the first semester of college," said Beltz. "This was particularly interesting because the spike coincided with increases in the participants’ alcohol use and increases in their exposure to alcohol cues in the college environment. From the first semester to the second semester, levels of alcohol use and cue exposure remained steady, but connectivity among cognitive control brain regions decreased. From this, we concluded that changes in alcohol use and cue exposure — not absolute levels — were reflected by the underlying neural processes."

Although the immediate implications of the pilot study for first-year students are fairly clear, there are still a number of unanswered questions related to alcohol’s longer-term effects on development, for college students after their first year and for those same individuals later in life.

To begin exploring those potential long-term effects, Beltz has planned a follow-up study to track a larger number of participants over a greater length of time.

Filed under alcohol brain mapping effective connectivity mapping fMRI brain responses neuroscience science
