Neuroscience

Articles and news from the latest research reports.

Reduced brain volume in kids with low birth-weight tied to academic struggles
An analysis of recent magnetic resonance imaging (MRI) data from 97 adolescents who were part of a study of very low birth weight babies born from 1982 to 1986 in a Cleveland neonatal intensive care unit has tied smaller brain volumes to poor academic achievement.
More than half of the babies who weighed less than 1.66 pounds and more than 30 percent of those less than 3.31 pounds at birth later had academic deficits. (Less than 1.66 pounds is considered extremely low birth weight; less than 3.31 pounds is labeled very low birth weight.) Lower birth weight was associated with smaller brain volumes in some of these children, and smaller brain volume, in turn, was tied to academic deficits.
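The two cutoffs in that parenthetical can be sketched as a simple classifier. This is purely illustrative: the function name is ours, and the gram equivalents (roughly 750 g and 1,500 g) are our conversions, not figures from the study.

```python
def birth_weight_category(weight_lb: float) -> str:
    """Classify a birth weight using the cutoffs cited in the article
    (1.66 lb is roughly 750 g; 3.31 lb is roughly 1,500 g)."""
    if weight_lb < 1.66:
        return "extremely low birth weight"
    if weight_lb < 3.31:
        return "very low birth weight"
    return "above very low birth weight"

print(birth_weight_category(1.5))   # extremely low birth weight
print(birth_weight_category(2.9))   # very low birth weight
```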
Researchers also found that 65.6 percent of very low birth weight and 41.2 percent of extremely low birth weight children showed academic achievement similar to that of their normal birth weight peers.
The research team — led by Caron A.C. Clark, a scientist in the Department of Psychology and Child and Family Center at the University of Oregon — detected an overall reduced volume of mid-brain structures, the caudate and corpus callosum, which are involved in connectivity, executive attention and motor control.
The findings, based on logistic regression analyses of the MRIs done approximately five years ago, were published in the May issue of the journal Neuropsychology. The longitudinal study was originally launched in the 1980s with a grant from the National Institute of Child Health and Human Development (National Institutes of Health, grant HD 26554) to H. Gerry Taylor of Case Western Reserve University, who was the senior author and principal investigator on the new paper.
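Logistic regression, the method named above, models the probability of a binary outcome (academic deficit or none) as a function of a predictor such as brain volume. A minimal self-contained sketch on synthetic data follows; it is in no way the study's actual analysis, and every variable name and number is invented for illustration.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins (illustration only): brain volume in arbitrary
# units, with deficits made more likely at smaller volumes.
data = []
for _ in range(200):
    volume = random.gauss(100.0, 10.0)
    p_true = 1.0 / (1.0 + math.exp(0.3 * (volume - 95.0)))
    data.append((volume - 100.0, 1 if random.random() < p_true else 0))

# Fit P(deficit) = sigmoid(a + b * centered_volume) by gradient descent
# on the usual log-loss.
a, b = 0.0, 0.0
for _ in range(2000):
    ga = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(a + b * x)))
        ga += p - y
        gb += (p - y) * x
    a -= 0.05 * ga / len(data)
    b -= 0.05 * gb / len(data)

# A negative slope b means lower volume -> higher odds of a deficit.
print(b < 0)  # True
```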
"Our new study shows that pre-term births do not necessarily mean academic difficulties are ahead," Clark said. "We had this group of children that did have academic difficulties, but there were a lot of kids in this data set who didn’t and, in fact, displayed the same trajectories as their normal birth-weight peers."
Academic progress of the 201 original participants had been assessed early in their school years, again four years later and then annually until they were almost 17 years old. “We had the opportunity to explore this very rich data set,” Clark said. “There are very few studies that follow this population of children over time, where their trajectories of growth at school are tracked. We were interested in seeing how development unfolds over time.”
The findings, Clark added, provide new insights but also raise questions, such as why some low-birth-weight babies develop normally while others do not. “It is very difficult to pick up which kids will need the most intensive interventions really early, which we know can be really important.”
The findings also provide a snapshot of children of very low birth weight who were born in a NICU 30 years ago. Since then, technologies and care have improved, she said, meaning that underweight babies born prematurely today might have an advantage over those followed in the study. However, she added, improving NICUs are also allowing ever smaller babies to survive.
Clark is now exploring these findings for early warning clues that might help drive informed interventions. “Pre-term birth does mean that you are much more likely to experience brain abnormalities that seem to put you at risk for these outcomes,” she said. “They seem to be a pretty strong predictor of poor cognitive development as children age. We really need to find ways to prevent these brain abnormalities and subsequent academic difficulties in these kids who are born so small.”

Filed under brain volume cognitive development low birth weight corpus callosum learning neuroimaging psychology neuroscience science


Motor neurons like this one, found in the crab Cancer borealis, underlie the walking, swimming, breathing, flying and other rhythmic behaviors found in most creatures, including humans.
Eve Marder wins 2013 Gruber Neuroscience Prize
Award recognizes ‘the best neuroscience research being done anywhere’
The Gruber Foundation today awarded its 2013 neuroscience prize to Eve Marder ’69, a pioneering researcher who has dedicated her career to understanding the nervous system’s basic functions. The Victor and Gwendolyn Beinfield Professor of Neuroscience at Brandeis, Marder studies a relatively simple network of some 30 large neurons found in the gut of lobsters and crabs — a small yet elegant window into humans’ unfathomably rich nervous system, home to billions of neurons and trillions of interconnections.
The $500,000 prize recognizes and rewards “the best [neuroscience] work being done anywhere in the world,” according to the Gruber Foundation website. 
"Eve Marder has made a number of remarkable and groundbreaking discoveries that have fundamentally changed our understanding of how neural circuits operate and produce behavior," says Carol Barnes, chair of the selection advisory board to the Neuroscience Prize. "She has also been an exceptional leader outside the laboratory, working tirelessly to bring people together to improve scientific research, policy, and education."
Marder’s singular contributions to neuroscience through her use of crustaceans — in a field heavily dominated by scientists using vertebrate model organisms, chiefly rodents — have helped define how we think about neurons and their astounding capabilities. 
Despite not practicing “consensus” science — Marder avoids the well-trodden path of established modes of inquiry, such as working in vertebrates — she has received numerous accolades, including election to the National Academy of Sciences and to the helm of the Society for Neuroscience, both in 2007.
“I’m a maverick within a conservative framework — I obey carefully the rules of scientific rigor and discipline,” says Marder, who began her freshman year at Brandeis thinking she would major in politics. By her senior year, enthralled with the emerging field of neuroscience, she applied to graduate school while some of her friends made their plans to join the counterculture.
As a graduate student at the University of California, San Diego, in the early 1970s, Marder began studying the stomatogastric nervous system of the West Coast spiny lobster, Panulirus interruptus. The stomatogastric nervous system, which controls the motion of the gut, is an example of a central pattern generator. These circuits generate organized and repetitive motor patterns that also underlie walking, swimming, flying, breathing and many other rhythmic behaviors that creatures from earthworms to humans take for granted. 
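A central pattern generator can be sketched in a few lines as a half-center oscillator: two units that adapt and mutually inhibit each other settle into alternating bursts. The model and parameters below are a generic textbook choice (a Matsuoka-style oscillator), not a model of the stomatogastric circuit itself.

```python
def half_center_cpg(steps=6000, dt=0.005):
    """Two units with self-adaptation and mutual inhibition: the basic
    motif behind alternating rhythmic motor output. Parameter values are
    a common illustrative set, not measurements from any real circuit."""
    tau_r, tau_a = 0.25, 0.5    # membrane and adaptation time constants
    beta, w, u = 2.5, 2.5, 1.0  # adaptation gain, inhibition weight, drive
    x1, x2, v1, v2 = 0.1, 0.0, 0.0, 0.0  # slight asymmetry to break the tie
    out1, out2 = [], []
    for _ in range(steps):
        y1, y2 = max(x1, 0.0), max(x2, 0.0)  # rectified firing outputs
        x1 += dt * (u - x1 - beta * v1 - w * y2) / tau_r
        x2 += dt * (u - x2 - beta * v2 - w * y1) / tau_r
        v1 += dt * (y1 - v1) / tau_a
        v2 += dt * (y2 - v2) / tau_a
        out1.append(y1)
        out2.append(y2)
    return out1, out2

y1, y2 = half_center_cpg()
# After the transient, the two units burst in anti-phase: the difference
# of their outputs swings both positive and negative.
half = len(y1) // 2
diff = [p - q for p, q in zip(y1[half:], y2[half:])]
print(max(diff) > 0.05 and min(diff) < -0.05)  # True
```

The mutual-inhibition fixed points can be checked by hand: neither unit can permanently silence the other with these weights, so the only equilibrium is the unstable symmetric one and the system falls into a limit cycle.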
The big questions Marder has asked throughout her career all aim at understanding the fundamental nature of neuronal circuit operation. In a Brandeis lab staffed by post-docs, graduate students and undergraduates, she’s helped advance basic tenets of neuroscience while continuing to refine several related lines of inquiry. 
Early in Marder’s Brandeis career, her lab demonstrated that neuromodulatory substances such as dopamine, serotonin and neuropeptides can alter circuit performance so that the same group of neurons can produce a variety of behaviors. Her research has helped reshape the way scientists think about conditions like depression, now believed to stem from imbalances in neuromodulation. 
Later, her lab studied how neurons and networks maintain stable network performance despite the ongoing turnover of the membrane proteins that give neurons their characteristic electrical properties. Most recently, her lab is studying animal-to-animal variability in neuronal properties. How much variability in circuit function is there between animals even as they respond similarly to changes in hormones or temperature?
“I’m always looking for the things we can study more effectively than someone working in a large nervous system,” explains Marder. “I don’t want to work on problems that someone else can do better.”
Awarded by a distinguished panel of experts following an international nomination process, the Gruber Foundation neuroscience prize is a humbling honor, Marder says. It is also recognition that great science requires both intellectual risk-taking and persistence. 
Marder plans to celebrate, just not over a fancy lobster dinner. She gave up eating crustaceans long ago.

Filed under nervous system crustaceans neural circuits vertebrate model Gruber Neuroscience Prize neuroscience science


New Scientific Analysis Shines a Light on Ötzi the Iceman’s Dark Secrets
Protein investigation supports brain injury theory and opens up new research possibilities for mummies
After decoding the Iceman’s genetic make-up, a research team from the European Academy of Bolzano/Bozen (EURAC), Saarland University, Kiel University and other partners has now made another major breakthrough in mummy research: using just a pinhead-sized sample of brain tissue from the world-famous glacier corpse, the team was able to extract and analyse proteins to further support the theory that Ötzi suffered some form of brain damage in the final moments of his life.
Two dark coloured areas at the back of the Iceman’s cerebrum were first mentioned back in 2007 during a discussion about the fracture to his skull. Scientists surmised from a CAT scan of his brain that he had received a blow to the forehead during the deadly attack on him, causing his brain to knock against the back of his head and creating dark spots from the bruising. Until now, this hypothesis had remained unexplored.
In 2010, with the help of computer-controlled endoscopy, two samples of brain tissue the size of a pinhead were extracted from the glacier mummy. This procedure was carried out via two tiny (previously existing) access holes and was thus minimally invasive. Microbiologist Frank Maixner (EURAC, Institute for Mummies and the Iceman) and his fellow scientist Andreas Tholey (Institute for Experimental Medicine, Kiel University) conducted two parallel, independent studies on the tiny bundles of cells. Tholey’s team provided the latest technology used in the study of complex protein mixtures known as “proteomes”. The various analyses were coordinated by Frank Maixner and Andreas Keller.
The protein research revealed a surprising amount of information. Scientists were able to identify numerous brain proteins, as well as proteins from blood cells. Microscopic investigation also confirmed the presence of astonishingly well-preserved neural cell structures and clotted blood cells. On the one hand, this led the scientists to conclude that the recovered samples did indeed come from brain tissue in remarkably good condition (the proteins contained amino acid sequence features specific to Ötzi). On the other hand, these blood clots in a corpse almost devoid of blood provided further evidence that Ötzi’s brain had possibly suffered bruising shortly before his death. Whether this was due to a blow to the forehead or a fall after being injured by the arrow remains unclear.
The discoveries represent a major breakthrough for the scientists. The research team emphasised that “the use of new protein-analysis methods has enabled us to pioneer this type of protein investigation on the soft tissue of a mummified human, extracting from the tiniest sample a vast quantity of data which in the future may well answer many further questions.” While many DNA samples from mummies are difficult or impossible to analyse because of natural biological decay, one can often still find proteins in tissue samples which allow a closer analysis and provide valuable information, explained Andreas Tholey: “Proteins are the decisive players in tissues and cells, and they conduct most of the processes which take place in cells. Identification of the proteins is therefore key to understanding the functional potential of a particular tissue. DNA is always constant, regardless of from where it originates in the body, whereas proteins provide precise information about what is happening in specific regions within the body.” Protein analysis of mummified tissue makes an especially valuable contribution to DNA research, Maixner added: “Investigating mummified tissue can be very frustrating. The samples are often damaged or contaminated and do not necessarily yield results, even after several attempts and using a variety of investigative methods. When you think that we have succeeded in identifying actual tissue changes in a human who lived over 5,000 years ago, you can begin to understand how pleased we are as scientists that we persisted with our research after many unsuccessful attempts. It has definitely proved worthwhile!”
The results of this joint study are published in the renowned journal “Cellular and Molecular Life Sciences”. Along with a sample taken from the Iceman’s stomach content, more than a dozen tissue samples from less well preserved mummies from all over the world will be subjected to this new protein-based research method, which should provide insights that were not previously possible.

Filed under Ötzi tyrolean iceman brain tissue proteins brain damage proteomes neuroscience science


3-D map of blood vessels in cerebral cortex holds surprises
Blood vessels within a sensory area of the mammalian brain loop and connect in unexpected ways, a new map has revealed.
The study, published June 9 in the early online edition of Nature Neuroscience, describes vascular architecture within a well-known region of the cerebral cortex and explores what that structure means for functional imaging of the brain and the onset of a kind of dementia.
David Kleinfeld, professor of physics and neurobiology at the University of California, San Diego, and colleagues mapped blood vessels in an area of the mouse brain that receives sensory signals from the whiskers.
The organization of neural cells in this brain region is well understood, as is the pattern of blood vessels that plunge from the surface of the brain and return from the depths, but the network in between was uncharted. Yet these tiny arterioles and venules deliver oxygen and nutrients to energy-hungry brain cells and carry away wastes.
The team traced this fine network by filling the vessels with a fluorescent gel. Then, using an automated system developed by co-author Philbert Tsai, which removes thin layers of tissue with a laser while capturing a series of images, they reconstructed the three-dimensional network of tiny vessels.
The project focused on a region of the cerebral cortex in which the nerve cells are so well known that they can be traced to individual whiskers. These neurons cluster in “barrels,” one per whisker, a pattern of organization seen in other sensory areas as well.
The scientists expected each whisker barrel to match up with its own blood supply, but that was not the case. The blood vessels don’t line up with the functional structure of the neurons they feed.
"This was a surprise, because the blood vessels develop in tandem with neural tissue," Kleinfeld said. Instead, microvessels beneath the surface loop and connect in patterns that don’t obviously correspond to the barrels.
To search for patterns, they turned to a branch of mathematics called graph theory, which describes systems as interconnected nodes. Using this approach, no hidden subunits emerged, demonstrating that the mesh indeed forms a continuous network they call the “angiome.”
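The basic graph-theoretic check behind that conclusion, whether a mesh of junctions forms one connected piece or splits into subunits, can be sketched with a breadth-first search. The grid below is a toy stand-in, not the paper's data or analysis.

```python
from collections import deque

def connected_components(adjacency):
    """BFS over an undirected graph given as {node: set(neighbors)}.
    Returns a list of connected components; a vascular mesh with no
    hidden subunits is a single component."""
    seen, components = set(), []
    for start in adjacency:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nb in adjacency[node]:
                if nb not in seen:
                    seen.add(nb)
                    comp.add(nb)
                    queue.append(nb)
        components.append(comp)
    return components

# Toy "mesh": a 3x3 grid of vessel junctions with looping connections.
grid = {(r, c): set() for r in range(3) for c in range(3)}
for r in range(3):
    for c in range(3):
        for dr, dc in ((1, 0), (0, 1)):
            if (r + dr, c + dc) in grid:
                grid[(r, c)].add((r + dr, c + dc))
                grid[(r + dr, c + dc)].add((r, c))

print(len(connected_components(grid)))  # 1: one continuous network
```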
The vascular maps traced in this study raise the question of what we’re actually seeing in a widely used kind of brain imaging called functional MRI, which in one form measures brain activity by recording changes in oxygen levels in the blood. The idea is that activity will locally deplete oxygen. So they wiggled whiskers on individual mice and found that optical signals associated with depleted oxygen centered on the barrels, where electrical recordings confirmed neural activity. Thus brain mapping does not depend on a modular arrangement of blood vessels.
The researchers also went a step further to calculate patterns of blood flow based on the diameters and connections of the vessels and asked how this would change if a feeder arteriole were blocked. The map allowed them to identify “perfusion domains,” which predict the volumes of lesions that result when a clot occludes a vessel. Critically, they were able to build a physical model of how these lesions form, as may occur in cases of human dementia.
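The occlusion calculation can likewise be sketched as a reachability question on a directed graph: block a feeder vessel and see which junctions lose their supply. The mini-network below is invented for illustration (the real angiome has many thousands of vessels), and this is a toy analogue of a "perfusion domain", not the researchers' flow model.

```python
def reachable(edges, sources):
    """Nodes reachable from any source in a directed vessel graph,
    given edges as {node: list of downstream nodes}."""
    seen, stack = set(sources), list(sources)
    while stack:
        node = stack.pop()
        for nb in edges.get(node, []):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

# Hypothetical mini-network: two surface arterioles (A1, A2) feeding
# capillary junctions c1..c5. All names are invented for illustration.
vessels = {
    "A1": ["c1", "c2"],
    "A2": ["c2", "c3"],
    "c1": ["c4"],
    "c2": ["c4", "c5"],
    "c3": ["c5"],
}

before = reachable(vessels, ["A1", "A2"])
# Occlude arteriole A1 entirely, as a clot at its root would.
blocked = {k: v for k, v in vessels.items() if k != "A1"}
after = reachable(blocked, ["A2"])
print(sorted(before - after))  # ['A1', 'c1']: the territory lost to the clot
```

Because c2 is also fed by A2, only c1 (and the blocked arteriole itself) loses supply; junctions with redundant feeders survive the occlusion, which is the kind of structure a perfusion-domain map makes explicit.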
(Image: Andreas Weil)

3-D map of blood vessels in cerebral cortex holds suprises

Blood vessels within a sensory area of the mammalian brain loop and connect in unexpected ways, a new map has revealed.

The study, published June 9 in the early online edition of Nature Neuroscience, describes vascular architecture within a well-known region of the cerebral cortex and explores what that structure means for functional imaging of the brain and the onset of a kind of dementia.

David Kleinfeld, professor of physics and neurobiology at the University of California, San Diego, and colleagues mapped blood vessels in an area of the mouse brain that receives sensory signals from the whiskers.

The organization of neural cells in this brain region is well-understood, as was a pattern of blood vessels that plunge from the surface of the brain and return from the depths, but the network in between was uncharted. Yet these tiny arterioles and venules deliver oxygen and nutrients to energy-hungry brain cells and carry away wastes.

The team traced this fine network by filling the vessels with a fluorescent gel. Then, using an automated system, developed by co-author Philbert Tsai, that removes thin layers of tissue with a laser while capturing a series of images to reconstructed the three-dimensional network of tiny vessels.

The project focused on a region of the cerebral cortex in which the nerve cells are so well known that they can be traced to individual whiskers. These neurons cluster in “barrels,” one per whisker, a pattern of organization seen in other sensory areas as well.

The scientists expected each whisker barrel to match up with its own blood supply, but that was not the case. The blood vessels don’t line up with the functional structure of the neurons they feed.

"This was a surprise, because the blood vessels develop in tandem with neural tissue," Kleinfeld said. Instead, microvessels beneath the surface loop and connect in patterns that don’t obviously correspond to the barrels.

To search for patterns, they turned to a branch of mathematics called graph theory, which describes systems as interconnected nodes. Using this approach, no hidden subunits emerged, demonstrating that the mesh indeed forms a continous network they call the “angiome.”

The vascular maps traced in this study raise a question of what we’re actually seeing in a widely used kind of brain imaging called functional MRI, which in one form measures brain activity by recording changes in oxygen levels in the blood. The idea is that activity will locally deplete oxygen. So they wiggled whiskers on individual mice and found that optical signals associated with depleted oxygen centered on the barrels, where electrical recordings confirmed neural activity. Thus brain mapping does not depend on a modular arrangement of blood vessels.

The researchers also went a step further to calculate patterns of blood flow based on the diameters and connections of the vessels and asked how this would change if a feeder arteriole were blocked. The map allowed them to identify “perfusion domains,” which predict the volumes of lesions that result when a clot occludes a vessel. Critically, they were able to build a physical model of how these lesions form, as may occur in cases of human dementia.
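The occlusion analysis can be caricatured as a graph computation: block one vessel segment and ask which junctions can no longer be reached from the arterial source. A toy sketch on an invented network (reachability only; the study itself computed actual flow from vessel diameters and connections, which this deliberately ignores):

```python
from collections import deque

def perfusion_domain(edges, source, blocked):
    """Nodes that lose every path to `source` when segment `blocked` is occluded."""
    open_edges = [e for e in edges if e != blocked and e != blocked[::-1]]
    nodes = {n for e in edges for n in e}
    adj = {}
    for u, v in open_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    reachable = {source}
    queue = deque([source])
    while queue:
        for nbr in adj.get(queue.popleft(), ()):
            if nbr not in reachable:
                reachable.add(nbr)
                queue.append(nbr)
    return nodes - reachable

# Invented toy network: "art" is the feeder arteriole; occluding ("b", "c")
# strands everything downstream of that segment.
toy = [("art", "a"), ("a", "b"), ("b", "c"), ("c", "d")]
print(sorted(perfusion_domain(toy, "art", ("b", "c"))))  # ['c', 'd']
```

The stranded set is a crude stand-in for a "perfusion domain": the tissue volume that would be at risk of a lesion when a clot blocks that vessel.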

(Image: Andreas Weil)

Filed under cerebral cortex blood vessels dementia oxygen levels blood flow animal model neuroscience science

262 notes

Incredible Technology: How to See Inside the Mind
Human experience is defined by the brain, yet much about this 3-lb. organ remains a mystery. Even so, from brain imaging to brain-computer interfaces, scientists have made impressive strides in developing technologies to peer inside the mind.
Imaging the brain
Currently, scientists who study the brain can look at its structure or its function. In structural imaging, machines take snapshots of the brain’s large-scale anatomy that can be used to diagnose tumors or blood clots, for example. Functional imaging provides a dynamic view of the brain, showing which areas are active during thinking and perception.
Structural-imaging techniques include CAT scans, or computerized axial tomography, which take images of slices through the brain by beaming X-rays at the head from many different angles. CAT, or CT, scans are often used to diagnose a brain injury, for example. Another method, positron emission tomography (PET), generates both 2D and 3D images of the brain: A radioactively labeled chemical injected into the blood emits gamma rays that a scanner detects. And magnetic resonance imaging (MRI) provides a view of the brain’s overall structure by measuring the magnetic spin of atoms inside a strong magnetic field.
"There’s no question that MRI is probably the best way to see the brain," said Dr. Mauricio Castillo, a radiologist at the University of North Carolina at Chapel Hill and editor-in-chief of the American Journal of Neuroradiology.
In the realm of functional imaging, the current gold standard is functional MRI (fMRI). This technique measures changes in blood flow to different brain areas as a proxy for which areas are active when someone performs a task like reading a word or viewing a picture.
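The blood-flow proxy behind fMRI is commonly modeled by convolving neural events with a canonical hemodynamic response function (HRF). A minimal sketch with a made-up event train (the double-gamma shape below is a standard textbook form, not tied to any particular scanner or study):

```python
import numpy as np
from math import gamma

def hrf(t):
    """Canonical double-gamma hemodynamic response (peak ~5 s, late undershoot)."""
    return t**5 * np.exp(-t) / gamma(6) - (1 / 6) * t**15 * np.exp(-t) / gamma(16)

dt = 0.1
t = np.arange(0, 30, dt)
neural = np.zeros_like(t)
neural[int(round(2 / dt))] = 1.0           # a brief neural event at t = 2 s

# The measured BOLD signal lags the neural event by several seconds.
bold = np.convolve(neural, hrf(t))[:len(t)] * dt
print(f"BOLD peaks at ~{t[bold.argmax()]:.1f} s")  # a few seconds after the event
```

The lag is why fMRI localizes activity well in space but only sluggishly in time: the vascular response, not the neural firing itself, is what the scanner records.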
"The emphasis nowadays is to try to merge how the brain is wired with the activation of the cortex [the brain’s outermost layer]," Castillo said.
Several methods can be combined to merge brain structure and function. For example, MRI and PET scanning can be performed simultaneously, and the images can be combined to show physiological activity superimposed on an anatomical map of the brain. The end result can be used to tell a surgeon the location of a brain lesion so it can be removed, Castillo said.
Recently, a new technique has been developed to literally see inside the brain. Called CLARITY (originally for Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging/Immunostaining/In situ hybridization-compatible Tissue-hYdrogel), it can make a (nonliving) brain transparent to light while keeping its structure intact. The technique has already been used to visualize the neurological wiring of an adult mouse brain.
Decoding thoughts
Some scientists want to see inside the brain more figuratively. Enter brain-computer interfaces (BCIs or BMIs, brain-machine interfaces), devices that connect brain signals to an external device, such as a computer or prosthetic limb. BCIs range from noninvasive systems that consist of electrodes placed on the scalp, to more invasive ones that require the electrodes to be implanted in the brain itself.
Noninvasive BCIs include scalp-based electroencephalography (EEG), which records the activity of many neurons over large brain areas. The advantage of EEG-based systems is that they don’t require surgery. On the other hand, these systems can only detect generalized brain activity, so the user must focus his or her thoughts on just a single task.
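Because scalp EEG sums the activity of many neurons, EEG-based systems typically work from power in frequency bands (alpha, beta, and so on) rather than from single-cell signals. A self-contained sketch of band power on a synthetic signal (illustrative only; real EEG pipelines add filtering and artifact rejection):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Power of `signal` within [low, high] Hz, via the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal))**2 / len(signal)
    return power[(freqs >= low) & (freqs <= high)].sum()

fs = 250                                  # a typical EEG sampling rate, in Hz
t = np.arange(0, 2, 1 / fs)
# Synthetic trace: a strong 10 Hz (alpha) rhythm plus a weaker 40 Hz component.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

alpha = band_power(eeg, fs, 8, 12)
gamma_band = band_power(eeg, fs, 30, 45)
print(alpha > gamma_band)                 # alpha dominates -> True
```

A noninvasive BCI user modulating attention to shift power between such bands is, in essence, what drives the "focus on a single task" requirement described above.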
More invasive systems include electrocorticography (ECoG), in which electrodes are implanted on the surface of the brain to record EEG signals from the cortex. Since Wilder Penfield and Herbert Jasper pioneered the technique in the early 1950s, it has been used, among other purposes, to identify brain regions where epileptic seizures begin.
Some BCIs use electrodes implanted inside the brain’s cortex. Although these systems are more invasive, they have much better resolution and can pick up the signals sent by individual neurons. BCIs can now even allow humans with tetraplegia (paralysis of all four limbs) to control a robotic arm through thought alone, or allow users to spell out words on a computer screen using just their mind.
Despite many advances, a lot remains unknown about the brain. To bridge this gap, American scientists are embarking on a new project to map the human brain, announced by President Barack Obama in April, called the BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies).
But neuroscientists have their work cut out for them. “The brain is probably the most complex machine in the universe,” Castillo said. “We’re still a long way from understanding it.”

Filed under brain brain imaging BCI neuroscience science

81 notes

China’s Alzheimer’s time bomb revealed

In 2010, China had more people living with Alzheimer’s disease than any other country in the world – and twice as many cases of Alzheimer’s and other kinds of dementia as the World Health Organization thought.

image

Cases of all kinds of age-related dementia in the country rose from 3.7 million in 1990 to 9.2 million in 2010. This is the finding of the first comprehensive analysis of Chinese epidemiological research, made possible by the recent digitisation of Chinese-language research papers. Previous estimates, based on English-language papers, seem to have under-reported the number of cases by half.

"We are now only beginning to comprehend the enormous value in this ‘parallel universe’ of information," says Igor Rutan of the University of Edinburgh, UK, who was part of the team that carried out the research.

The figures are bad news for a country where 90 per cent of the elderly must be cared for by their families – old people who still have living family members are not admitted to nursing homes – even as widespread migration to cities has disrupted the traditional family structure.

Population bulge

The findings are a reflection of China’s ageing population, and its policies.

As countries modernise, death rates fall, and later on birth rates fall as more people take up birth control. Between the two events, though, there is a “bulge” of births, the source of the modern world’s population explosion. Eventually birth and death rates roughly equalise, but the birth bulge remains as an age bulge in the population.

This reached an extreme in China, where a surge in births in the 1950s and 1960s was followed by plummeting birth rates in the 1970s, later reinforced by China’s one-child policy. “Family planning policy means China is becoming an ageing country much faster than other middle-income countries such as India,” says co-author Wei Wang of Edith Cowan University in Perth, Australia.

In its youth, the bulge underpinned China’s economic development. But by 2033, it is predicted that working-age people will be outnumbered by dependents, mostly the elderly.

The new research shows that they will need more care than China was expecting. Dementia rises in an ageing population: cases increased from 4.9 to 6.3 million in the greying European Union between 2004 and 2010.

Unhealthy lifestyle

"The rates in China are similar or even higher than rates in Europe and the US," says Wang.

And they are rising. In 1990, the team estimates, 1.8 per cent of Chinese aged 65 to 69, and 42.1 per cent aged 95 to 99, had dementia. In 2010 those figures were 2.6 and 60.5 per cent, respectively. If similar rates hold in other middle-income countries, there might be 20 per cent more cases of Alzheimer’s worldwide – five million more – than now estimated, the authors calculate.
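The arithmetic behind such estimates is straightforward: multiply each age cohort's size by its age-specific prevalence rate and sum. A toy example using the article's quoted 1990 and 2010 rates but invented cohort sizes (real estimates weight many more age bands):

```python
# Hypothetical cohort sizes; only the prevalence rates come from the article.
cohort_sizes = {"65-69": 4_000_000, "95-99": 100_000}
rates_1990 = {"65-69": 0.018, "95-99": 0.421}
rates_2010 = {"65-69": 0.026, "95-99": 0.605}

def expected_cases(sizes, rates):
    """Sum of cohort size x age-specific prevalence over all cohorts."""
    return sum(sizes[age] * rates[age] for age in sizes)

c90 = expected_cases(cohort_sizes, rates_1990)
c10 = expected_cases(cohort_sizes, rates_2010)
# Rising age-specific rates raise the case count even at fixed population size.
print(f"{c90:,.0f} -> {c10:,.0f} expected cases")
```

In reality both effects compound: the rates rose and the elderly cohorts themselves grew, which is why the national totals more than doubled between 1990 and 2010.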

The increase in China might reflect better diagnosis, but an urbanising lifestyle could also be causing more dementia. “Obesity, diabetes and suboptimal health contribute,” says Wang.

Martin Prince of King’s College London, who is organising another survey for dementia in China, says that if midlife obesity is a risk factor for dementia, then future rates in China could be 20 per cent higher than estimated.

(Source: newscientist.com)

Filed under alzheimer's disease dementia China aging one-child policy lifestyle psychology neuroscience science

78 notes

Neurostimulation Lowers Need for Opioids in Chronic Pain

Expert Panel of Physicians and Neuroscientists Announce International Guidance on Using Neurostimulation to Significantly Reduce the Need for Opioids in Chronic Pain

Recognizing that treatment of chronic pain can be confounding, the Neuromodulation Appropriateness Consensus Committee (NACC), an international group of more than 60 leading pain specialists, has created the first consensus guidelines for the use of neurostimulation in chronic pain.

Neurostimulation is an established and growing area of pain therapy that treats nerves with electrical stimulation rather than drugs. The NACC findings, announced at the International Neuromodulation Society (INS) 11th World Congress, address provider training, patient screening, and treatment recommendations.

While the extent and suffering of chronic pain is becoming better recognized, the danger of opioids for addiction, diversion or misuse is well known. Long-term opioid use can lead to the need for escalating doses to bring relief, and raises the risk of physical dependence, overdose, weight gain, depression, and immune and hormone system dysfunction.

“Many studies contain insufficient evidence to prove the safety or effectiveness of any long-term opioid regimen for chronic pain,” said study lead author Dr. Timothy Deer, INS president-elect and director of the Center for Pain Relief in Charleston, W. Va. “Indeed, many patients discontinue long-term opioid therapy due to insufficient pain relief or adverse events.”

Neurostimulation has been shown in clinical studies to be safe and effective for properly selected patients, and is approved by the FDA to treat chronic pain of the trunk and limbs. It belongs to a family of therapies known as neuromodulation because they modulate, or alter, the function of nerves, such as nerves that may have become hypersensitized or damaged, or are otherwise sending pain signals long past the initial injury. Since the components of neurostimulators bear some resemblance to heart pacemakers, they are sometimes called pain pacemakers.

The NACC recommends neurostimulation be used earlier in the treatment of some kinds of chronic pain, such as failed back surgery syndrome and complex regional pain syndrome. A study being presented at the world congress shows neurostimulation effectiveness correlates with early use in those conditions, with the added benefit of shortening the time patients spend trying other methods and containing long-term costs of managing chronic pain.

The most common form of neurostimulation, spinal cord stimulation (SCS), was introduced in 1967 and is now implanted in some 4,000 patients annually in the United States. With SCS, appropriately selected patients who have had back and/or leg pain longer than six months often find their symptoms relieved by 50 percent or more. The therapy uses slender electrical leads placed beneath the skin along the spinal cord and connected to a compact pulse generator, about the size of a pocket watch, that sends mild current along the leads to elicit a natural biological response and limit pain messages sent to the brain. Patients try the minimally invasive technique to see if it works for them before receiving a permanent implant.

“The lessons learned over the last few decades of clinical practice have influenced neurostimulator design, placement, and programming – and added new insights into spinal anatomy and pain physiology,” said INS President Dr. Simon Thomson, consultant in pain medicine and neuromodulation at Basildon and Thurrock University NHS Trust in the United Kingdom.

Although neurostimulation devices may seem novel at first, using electrical current to limit pain dates back to antiquity, when standing on an electric fish was one remedy. Use of modern neurostimulation devices is likely to expand as the aging populace lives longer with chronic conditions, while technological refinements and clinical evidence continue to accumulate.

“A reduction in opioid use among patients treated with spinal cord stimulation was shown in several studies, notably a 2005 randomized controlled clinical trial led by Dr. Richard North under the auspices of the Johns Hopkins University School of Medicine,” commented INS Secretary and study co-author Dr. Marc Russo, director of the Hunter Pain Clinic in New South Wales, Australia. “Broad-based studies show that within two years, using spinal cord stimulation rather than repeat back surgery is not only a more cost-effective use of health resources, it also is correlated with higher rates of return to work.”

Consensus committee authors believe that when appropriately applied, neurostimulation to target treatment directly to nerves can improve productivity and quality of life for chronic pain patients, offering a potentially less costly and risky option than repeat surgery or long-term painkiller use. They recommend:

  • Neuromodulation providers receive at least 12 hours of continuing medical education per year directly related to improving outcomes with neuromodulation, with additional mentoring by a credentialed provider at a hospital officially accredited by the Joint Commission on Accreditation of Healthcare Organizations or its equivalent.
  • Spinal cord stimulation should be used early in the treatment of failed back surgery syndrome as long as there is no progression of a neurological condition requiring semi-urgent intervention.
  • Patient selection decisions should be made with any clinicians who are treating co-existing conditions, who may include the patient’s primary care provider, cardiologist, or neurologist.
  • Due to the emotional impact of the experience of pain, an assessment by a psychologist or psychiatrist is recommended within the first year of implant.
  • Spinal cord stimulation and peripheral nerve stimulation should be considered earlier, when possible, and are recommended to be trialed in the first two years of chronic pain.
  • Peripheral nerve stimulation (beyond the spine) should be reserved for patients in whom the pain distribution lies primarily in the territory of a named nerve known to innervate the area of pain. Temporary relief of the patient’s pain by an injection of local anesthetic in the nerve distribution should be seen as an encouraging sign for the use of this therapy.
  • To cover an area that is not located in the distribution of a named peripheral nerve, stimulation of a peripheral nerve field with electrodes placed in the subcutaneous area just beneath the skin may give relief if stimulation from SCS does not reach this area. In many cases a hybrid of two or more of these methods may present the best chance of an acceptable outcome.
  • SCS should be used as an early intervention in patients with Raynaud’s syndrome and other painful ischemic vascular disorders, which involve insufficient blood supply to part of the body. If ischemic symptoms persist despite initial surgical or reasonable medical treatment, SCS should be trialed.
  • In the use of spinal cord stimulation to treat painful diabetic peripheral neuropathy, decision-making should be performed on an individualized basis, considering current diagnoses and other factors. A type of SCS that stimulates a structure at the edge of the spinal column, the dorsal root ganglion, may be most suited for this disorder.

(Source: newswise.com)

Filed under chronic pain neurostimulation pain therapy spinal cord opioids neuroscience science

159 notes

Scientists Map Process by Which Brain Cells Form Long-Term Memories
Scientists at the Gladstone Institutes have deciphered how a protein called Arc regulates the activity of neurons – providing much-needed clues into the brain’s ability to form long-lasting memories.
These findings, reported Sunday in Nature Neuroscience, also offer newfound understanding as to what goes on at the molecular level when this process becomes disrupted.
Led by Gladstone senior investigator Steve Finkbeiner, MD, PhD, this research delved deep into the inner workings of synapses. Synapses are the highly specialized junctions that process and transmit information between neurons. Most of the synapses our brain will ever have are formed during early brain development, but throughout our lifetimes these synapses can be made, broken and strengthened. Synapses that are more active become stronger, a process that is essential for forming new memories.
However, this process is also dangerous, as it can overstimulate the neurons and lead to epileptic seizures. It must therefore be kept in check.
Neuroscientists recently discovered one important mechanism that the brain uses to maintain this important balance: a process called “homeostatic scaling.” Homeostatic scaling allows individual neurons to strengthen the new synaptic connections they’ve made to form memories, while at the same time protecting the neurons from becoming overly excited. Exactly how the neurons pull this off has eluded researchers, but they suspected that the Arc protein played a key role.
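A common textbook formalization of homeostatic scaling is multiplicative: the neuron rescales every synapse by the same factor so that total drive returns to a set point, while the relative strengths that encode the memory are preserved. A toy sketch of that idea (an illustrative model, not the mechanism this paper reports):

```python
def scale_synapses(weights, set_point):
    """Multiply every synaptic weight by one common factor so total synaptic
    drive returns to `set_point`, preserving relative strengths."""
    factor = set_point / sum(weights)
    return [w * factor for w in weights]

# Learning potentiates one synapse (1.0 -> 2.0), pushing total drive to 4.0,
# above the neuron's set point of 3.0; scaling pulls it back down.
weights = [1.0, 1.0, 2.0]
scaled = scale_synapses(weights, set_point=3.0)
print(scaled)                 # [0.75, 0.75, 1.5]
print(scaled[2] / scaled[0])  # relative strength preserved: 2.0
```

The potentiated synapse is still twice as strong as its neighbors after scaling, so the learned change survives even though overall excitability is held in check.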
“Scientists knew that Arc was involved in long-term memory, because mice lacking the Arc protein could learn new tasks, but failed to remember them the next day,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco, with which Gladstone is affiliated. “Because initial observations showed Arc accumulating at the synapses during learning, researchers thought that Arc’s presence at these synapses was driving the formation of long-lasting memories.”
But Finkbeiner and his team thought there was something else in play.
The Role of Arc in Homeostatic Scaling
In laboratory experiments, first in animal models and then in greater detail in the petri dish, the researchers tracked Arc’s movements. And what they found was surprising.
“When individual neurons are stimulated during learning, Arc begins to accumulate at the synapses – but what we discovered was that soon after, the majority of Arc gets shuttled into the nucleus,” said Erica Korb, PhD, the paper’s lead author who completed her graduate work at Gladstone and UCSF.
“A closer look revealed three regions within the Arc protein itself that direct its movements: one exports Arc from the nucleus, a second transports it into the nucleus, and a third keeps it there,” she said. “The presence of this complex and tightly regulated system is strong evidence that this process is biologically important.”
In fact, the team’s experiments revealed that Arc acted as a master regulator of the entire homeostatic scaling process. During memory formation, certain genes must be switched on and off at very specific times in order to generate proteins that help neurons lay down new memories.  From inside the nucleus, the authors found that it was Arc that directed this process required for homeostatic scaling to occur. This strengthened the synaptic connections without overstimulating them – thus translating learning into long-term memories. 
Implications for a Variety of Neurological Diseases
“This discovery is important not only because it solves a long-standing mystery on the role of Arc in long-term memory formation, but also gives new insight into the homeostatic scaling process itself – disruptions in which have already been implicated in a whole host of neurological diseases,” said Finkbeiner. “For example, scientists recently discovered that Arc is depleted in the hippocampus, the brain’s memory center, in Alzheimer’s disease patients. It’s possible that disruptions to the homeostatic scaling process may contribute to the learning and memory deficits seen in Alzheimer’s.”
Dysfunctions in Arc production and transport may also be a vital player in autism. For example, the genetic disorder Fragile X syndrome – a common cause of both mental retardation and autism – directly affects the production of Arc in neurons.
“In the future,” added Dr. Korb, “we hope further research into Arc’s role in human health and disease can provide even deeper insight into these and other disorders, and also lay the groundwork for new therapeutic strategies to fight them.”
(Image: Wikimedia)

Filed under arc protein neurons synapses memory brain development epileptic seizures neuroscience science

900 notes

Why Music Makes Our Brain Sing
MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.
In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.
So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?
The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.
More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.
When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.
But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.
The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
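The prediction-error idea can be pictured numerically. Below is a minimal sketch of a standard learning-theory model (a Rescorla-Wagner-style update, not a model taken from our study): a dopamine-like error signal is the gap between the reward received and the reward predicted, and that gap shrinks as predictions become accurate.

```python
# Illustrative sketch of a reward-prediction-error (RPE) learner.
# The reward value, learning rate and trial count are arbitrary choices.

def learn(reward, trials=10, alpha=0.3):
    """Track how the prediction error shrinks as an outcome becomes expected."""
    prediction = 0.0
    errors = []
    for _ in range(trials):
        rpe = reward - prediction   # dopamine-like signal: actual minus expected
        prediction += alpha * rpe   # nudge the prediction toward the outcome
        errors.append(rpe)
    return errors

errors = learn(reward=1.0)
# The error is largest on the first trial and decays toward zero
# as the reward becomes fully predicted.
```

In this toy version, a fully expected reward eventually produces almost no error signal, which is one way to read the finding that dopamine activity shifts toward the anticipation phase rather than the reward itself.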
To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.
We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.
But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.
Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.
The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.
We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.
These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
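One toy way to picture such "templates of the statistical regularities" (an illustrative sketch only, not a model the research describes): count which notes tend to follow which in familiar melodies, then score how surprising each transition in a new melody is under those counts. A transition the template has never seen plays the role of the "wrong note."

```python
from collections import Counter, defaultdict

# Toy bigram "template" of note-to-note transitions, illustrative only.
def build_template(melodies):
    """Count how often each note follows each other note."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def surprise(template, prev, nxt):
    """1.0 = transition never seen after prev; lower = more expected."""
    total = sum(template[prev].values())
    if total == 0:
        return 1.0
    return 1.0 - template[prev][nxt] / total

template = build_template([["C", "E", "G", "C"], ["C", "E", "G", "E"]])
# "E" after "C" is fully expected (surprise 0.0);
# "F#" after "C" is the unexpected "wrong note" (surprise 1.0).
```

Real cortical predictions are of course far richer than bigram counts, but the sketch captures the core idea: expectations are accumulated from past listening and violated by the unfamiliar.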
So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.
Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.
In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?
When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Filed under music dopamine emotion reward system neural activity auditory cortex psychology neuroscience science

430 notes

Bionic eye prototype unveiled by Victorian scientists and designers
A team of Australian industrial designers and scientists has unveiled its prototype for the world’s first bionic eye.
It is hoped the device, which involves a microchip implanted in the skull and a digital camera attached to a pair of glasses, will allow recipients to see the outlines of their surroundings.
If successful, the bionic eye has the potential to help over 85 per cent of those people classified as legally blind. With trials beginning next year, Monash University’s Professor Mark Armstrong says the bionic eye should give recipients a degree of extra mobility.
"There’s a camera at the front and the camera is actually very similar to an iPhone camera, so it takes live action for colour," he told PM. "And then that imagery is then distilled via a very sophisticated processor down to, let’s say, a distilled signal.
"That signal is then transmitted wirelessly from what’s called a coil, which is mounted at the back of the head and inside the brain there is an implant which consists of a series of little ceramic tiles and in each tile are microscopic electrodes which actually are embedded in the visual cortex of the brain."
Professor Armstrong says it is hoped the technology will help those who are completely blind, enabling them to navigate their way around.
"What we believe the recipient will see is a sort of a low resolution dot image, but enough… [to] see, for example, the edge of a table or the silhouette of a loved one or a step into the gutter or something like that," he said.
"So the wonderful thing, if our interpretation of this is correct - because we don’t know until the first human trial - [is] it’ll of course enable people that are blind to be reconnected with their world in a way.
"There’s a number of different settings … so you could set it to floor mapping for example and it creates a silhouette around objects on the floor so that you can see where you’re going."
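The "low resolution dot image" Professor Armstrong describes can be pictured with a toy sketch (purely illustrative; the actual processor, grid size and electrode layout are not described in this detail): average a grayscale frame over coarse blocks, then threshold each block into an on/off dot.

```python
# Toy sketch: reduce a grayscale frame to a coarse on/off dot grid,
# loosely analogous to the "low resolution dot image" described above.
# The grid size and threshold are arbitrary illustrative choices.

def to_dot_grid(frame, rows=4, cols=4, threshold=0.5):
    """Average pixel blocks, then threshold each block to a single dot."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(1 if sum(block) / len(block) > threshold else 0)
        grid.append(row)
    return grid

# A bright left half next to a dark right half collapses to a clear edge,
# the kind of silhouette boundary the quote describes.
frame = [[1.0] * 4 + [0.0] * 4 for _ in range(8)]
```

Even at such coarse resolution, high-contrast boundaries like a table edge or a step survive the downsampling, which is the kind of information the device aims to deliver.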
A challenge the designers have had to overcome is ensuring the product is lightweight and adjustable, and that it enables users to feel good about themselves.
"We want to make it comfortable and light weight and adjustable so that different sized heads and shapes will still manage it well and have those sort of nice aspects," Professor Armstrong said.
"We don’t want a Heath Robinson wire springs affair on somebody’s head.
"It needs to look sophisticated and appropriate, probably less like a prosthetic and more like a cool Bluetooth device."
The first implant is scheduled to go ahead next year and is expected to be followed by clinical trials, further research and user feedback to the team.
The development of a bionic eye was one of the key aspirations out of the 2020 summit that was held in 2008.
Professor Armstrong says it is “amazing” that a prototype for the technology has already been achieved.
"To be honest when I heard about that 2020 conference and all of the people there, I thought it was a little bit of a hot air fest if you know what I mean," he said.
"But I’ve been proven completely wrong.
"Some of the initiatives from that, this is a major one for sure, have been brought to fruition and it’s wonderful for Australia and equally wonderful for Monash University."

Filed under vision bionic eye implants brain blindness technology science
