Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

303 notes

Creating a ‘Window to the Brain’
A team of University of California, Riverside researchers has developed a novel transparent skull implant that literally provides a “window to the brain”, which they hope will eventually open new treatment options for patients with life-threatening neurological disorders, such as brain cancer and traumatic brain injury.
The team’s implant is made of the same ceramic material currently used in hip implants and dental crowns, yttria-stabilized zirconia (YSZ). However, the key difference is that their material has been processed in a unique way to make it transparent.
Since YSZ has already proven itself to be well-tolerated by the body in other applications, the team’s advancement now allows use of YSZ as a permanent window through which doctors can aim laser-based treatments for the brain, importantly, without having to perform repeated craniectomies, which involve removing a portion of the skull to access the brain.

The work also dovetails with President Obama’s recently-announced BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, which aims to revolutionize the understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders. The team envisions potential for their YSZ windows to facilitate the clinical translation of promising brain imaging and neuromodulation technologies being developed under this initiative.

“This is a case of a science fiction sounding idea becoming science fact, with strong potential for positive impact on patients,” said Guillermo Aguilar, a professor of mechanical engineering at UC Riverside’s Bourns College of Engineering (BCOE).
Aguilar is part of a 10-person team composed of faculty, graduate students and researchers from UC Riverside’s Bourns College of Engineering and School of Medicine, which recently published a paper, “Transparent Nanocrystalline Yttria-Stabilized-Zirconia Calvarium Prosthesis”, about their findings online in the journal Nanomedicine: Nanotechnology, Biology and Medicine.
Laser-based treatments have shown significant promise for many brain disorders. However, realization of this promise has been constrained by the need for performing a craniectomy to access the brain since most medical lasers are unable to penetrate the skull. The transparent YSZ implants developed by the UC Riverside team address this issue by providing a permanently implanted view port through the skull.
“This is a crucial first step towards an innovative new concept that would provide a clinically-viable means for optically accessing the brain, on-demand, over large areas, and on a chronically-recurring basis, without need for repeated craniectomies,” said team member Dr. Devin Binder, a clinician and an associate professor of biomedical sciences at UC Riverside.
Although the team’s YSZ windows are not the first transparent skull implants to be reported, they are the first that could be conceivably used in humans, which is a crucial distinction. This is due to the inherent toughness of YSZ, which makes it far more resistant to shock and impact than the glass-based implants previously demonstrated by others. This not only enhances safety, but it may also reduce patient self-consciousness, since the reduced vulnerability of the implant could minimize the need for conspicuous protective headgear.

Filed under neurological disorders cranial implants brain imaging neuroimaging neuroscience science

207 notes

Brain Wiring Quiets the Voice Inside Your Head
Researchers find nerve circuits connecting motion and hearing
During a normal conversation, your brain is constantly adjusting the volume to soften the sound of your own voice and boost the voices of others in the room. This ability to distinguish between the sounds generated from your own movements and those coming from the outside world is important not only for catching up on water cooler gossip, but also for learning how to speak or play a musical instrument.
Now, researchers have developed the first diagram of the brain circuitry that enables this complex interplay between the motor system and the auditory system to occur.
The research, which appears Sept. 4 in The Journal of Neuroscience, could lend insight into schizophrenia and mood disorders that arise when this circuitry goes awry and individuals hear voices other people do not hear.
"Our finding is important because it provides the blueprint for understanding how the brain communicates with itself, and how that communication can break down to cause disease," said Richard Mooney, Ph.D., senior author of the study and professor of neurobiology at Duke University School of Medicine. "Normally, motor regions would warn auditory regions that they are making a command to speak, so be prepared for a sound. But in psychosis, you can no longer distinguish between the activity in your motor system and somebody else’s, and you think the sounds coming from within your own brain are external."
Researchers have long surmised that the neuronal circuitry conveying movement — to voice an opinion or hit a piano key — also feeds into the wiring that senses sound. But the nature of the nerve cells that provided that input, and how they functionally interacted to help the brain anticipate the impending sound, was not known.
In this study, Mooney used a technology created by Fan Wang, Ph.D., associate professor of cell biology at Duke, to trace all of the inputs into the auditory cortex — the sound-interpreting region of the brain. Though the researchers found that a number of different areas of the brain fed into the auditory cortex, they were most interested in one region called the secondary motor cortex, or M2, because it is responsible for sending motor signals directly into the brain stem and the spinal cord.
"That suggests these neurons are providing a copy of the motor command directly to the auditory system," said David M. Schneider, Ph.D., co-lead author of the study and a postdoctoral fellow in Mooney’s lab. "In other words,they send a signal that says ‘move,’ but they also send a signal to the auditory system saying ‘I am going to move.’"
Having discovered this connection, the researchers then explored what type of influence this interaction was having on auditory processing or hearing. They took slices of brain tissue from mice and specifically manipulated the neurons that led from the M2 region to the auditory cortex. The researchers found that stimulating those neurons actually dampened the activity of the auditory cortex.
"It jibed nicely with our expectations," said Anders Nelson, co-lead author of the study and a graduate student in Mooney’s lab. "It is the brain’s way of muting or suppressing the sounds that come from our own actions."
Finally, the researchers tested this circuitry in live animals, artificially turning on the motor neurons in anesthetized mice and then looking to see how the auditory cortex responded. Mice usually sing to each other through a kind of song called ultrasonic vocalizations, which are too high-pitched for a human to hear. The researchers played back these ultrasonic vocalizations to the mice after they had activated the motor cortex and found that the neurons became much less responsive to the sounds.
"It appears that the functional role that these neurons play on hearing is they make sounds we generate seem quieter," said Mooney. "The question we now want to know is if this is the mechanism that is being used when an animal is actually moving. That is the missing link, and the subject of our ongoing experiments."
Once the researchers have pinned down the basics of the circuitry, they could begin to investigate whether altering this circuitry could induce auditory hallucinations or perhaps even take them away in models of schizophrenia.

Filed under auditory system schizophrenia psychosis brain circuitry motor cortex neuroscience science

131 notes

Primate calls, like human speech, can help infants form categories
Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.
Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.
Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.
"We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language," Ferry said.
In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.
Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.
Susan Hespos, co-author and associate professor of psychology at Northwestern said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”
"These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations," said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.
Waxman said these new results open the door to new research questions.
"Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins," asks Waxman, "or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?"
(Image: Corbis)

Filed under primates vocalizations language categorization psychology neuroscience science

125 notes

Single tone alerts brain to complete sound pattern
The processing of sound in the brain is more advanced than previously thought. When we hear a tone, our brain temporarily strengthens that tone but also any tones separated from it by one or more octaves. A research team from Utrecht and Nijmegen published an article on the subject in the journal PNAS on 2 September. 
We hear with our brain. The cochlea picks up sound vibrations but the signals produced as a result are processed by the brain, using known patterns. If, for example, you briefly hear a weak tone, your hearing focuses on that tone and suppresses any frequencies around it. This makes it easier to notice any relevant sounds in your surroundings. The present research has shown that this ‘auditory attention filter’ is much more complex than believed until now: frequencies that have an octave relationship with the target tone are also heard better.
John van Opstal, professor of Biophysics at Radboud University: ‘This test proves that the brain prepares for a more extensive pattern of tones, even if the person just hears a single test tone or if he has a tone in mind. These extra tones in the pattern were not sounded during the experiment, but the brain complements the information received from the cochlea. This is scientifically interesting. Audiology, for example, at present places great emphasis on the cochlea.’
Octave relationship
The subjects undergoing the experiment did not have an easy time. For an hour they listened to unstructured noise containing very soft tones that they had to detect. Every few seconds they were presented with a tone of 1000 Hz, the cue. Then during one of two time intervals, a very quiet, short second tone was sounded. The subject had to indicate in which of the two intervals they had heard the second tone. It became apparent that tones having an octave relationship with the cue were all heard better, and those around the cue were heard less well. An octave is a well-known term in music, indicating the distance between two tones, the frequencies of which have a 2-to-1 relationship.
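The 2-to-1 octave rule described above is easy to sketch numerically; the values below assume the 1000 Hz cue tone used in the experiment, with each octave step doubling (upward) or halving (downward) the frequency:

```python
# Octave-related frequencies of the 1000 Hz cue tone: each octave
# step multiplies or divides the frequency by 2.
cue_hz = 1000.0

octaves_above = [cue_hz * 2 ** n for n in range(1, 4)]
octaves_below = [cue_hz / 2 ** n for n in range(1, 4)]

print(octaves_above)  # [2000.0, 4000.0, 8000.0]
print(octaves_below)  # [500.0, 250.0, 125.0]
```

These octave-spaced frequencies, rather than only the neighbourhood of the cue, are the ones the study found to be enhanced.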
Voice
Van Opstal: ‘We wanted to gather data on the auditory attention filter around the target tone. When we made the range larger than other researchers had done previously, more peaks suddenly appeared. This was a complete surprise to us. One possible explanation could be that the hearing system has evolved in order to hear sounds made by members of an animal’s own species (voices in the case of humans) in noisy surroundings. Vocalisations always consist of harmonic complexes of several simultaneous tones having an octave relationship with each other.’
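The kind of octave-spaced complex Van Opstal describes can be sketched as a sum of sinusoids; the 250 Hz fundamental, sample rate, and partial count here are illustrative choices, not values from the study:

```python
import math

SAMPLE_RATE = 8000   # samples per second (illustrative)
FUNDAMENTAL = 250.0  # Hz; hypothetical fundamental

def octave_complex(t, n_partials=4):
    """Sum of partials spaced one octave apart: f, 2f, 4f, 8f."""
    return sum(math.sin(2 * math.pi * FUNDAMENTAL * 2 ** k * t)
               for k in range(n_partials))

# One second of the complex, as raw (unscaled) samples:
samples = [octave_complex(i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]
```

A real vocalisation is of course far richer, but the octave relationship between simultaneous components is the property the attention filter appears tuned to.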
Hearing aid
The researchers, who work at Utrecht University, the UMC Utrecht Brain Center and Radboud University Nijmegen, can easily think up applications for this fundamental research. If, for example, someone no longer hears high tones because of damage to the cochlear hair cells, the hearing aid can be adjusted in such a way that it converts those tones so they sound one or more octaves lower. Since the brain itself ‘fills in’ tones with an octave relationship, that person’s perception should then become more normal. It is also important for commercial sound producers to know how tones are perceived. That is why Philips Research is involved in this research in their department ‘Brain, Body and Behavior’.
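A minimal sketch of the transposition idea (the frequency mapping only, not an actual hearing-aid algorithm): shifting a tone down by one octave amounts to halving its frequency.

```python
def octave_down(freq_hz, octaves=1):
    """Shift a frequency down by a whole number of octaves (halving per octave)."""
    return freq_hz / (2 ** octaves)

# A hypothetical 8 kHz tone the wearer can no longer hear, moved into range:
print(octave_down(8000))     # 4000.0
print(octave_down(8000, 2))  # 2000.0
```

Because the transposed tone keeps an octave relationship with the original, the brain's own 'filling in' of octave-related frequencies is what would make the substitution perceptually natural.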

Filed under auditory system auditory attention filter cochlea hair cells neuroscience science

41 notes

A fly’s hearing
UI study shows fruit fly is ideal model to study hearing loss in people
If your attendance at too many rock concerts has impaired your hearing, listen up.
University of Iowa researchers say that the common fruit fly, Drosophila melanogaster, is an ideal model to study hearing loss in humans caused by loud noise. The reason: the molecular underpinnings of its hearing are roughly the same as in people.
As a result, scientists may choose to use the fruit fly to quicken the pace of research into the cause of noise-induced hearing loss and potential treatment for the condition, according to a paper published this week in the online Early Edition of the journal Proceedings of the National Academy of Sciences.
“As far as we know, this is the first time anyone has used an insect system as a model for NIHL (noise-induced hearing loss),” says Daniel Eberl, UI biology professor and corresponding author on the study.
Hearing loss caused by loud noise encountered in an occupational or recreational setting is an expensive and growing health problem, as young people use ear buds to listen to loud music and especially as the aging Baby Boomer generation enters retirement. Despite this trend, “the molecular and physiological models involved in the problem or the recovery are not fully understood,” Eberl notes.
Enter the fruit fly as an unlikely proxy for researchers to learn more about how loud noises can damage the human ear. Eberl and Kevin Christie, lead author on the paper and a post-doctoral researcher in biology, say they were motivated by the prospect of finding a model that may hasten the day when medical researchers can fully understand the factors involved in noise-induced hearing loss and how to alleviate the problem. The study arose from a pilot project conducted by UI undergraduate student Wes Smith, in Eberl’s lab.
“The fruit fly model is superior to other models in genetic flexibility, cost, and ease of testing,” Christie says.
The fly uses its antenna as its ear, which resonates in response to courtship songs generated by wing vibration. The researchers exposed a test group of flies to a loud, 120 decibel tone that lies in the center of a fruit fly’s range of sounds it can hear. This over-stimulated their auditory system, similar to exposure at a rock concert or to a jack hammer. Later, the flies’ hearing was tested by playing a series of song pulses at a naturalistic volume, and measuring the physiological response by inserting tiny electrodes into their antennae. The fruit flies receiving the loud tone were found to have their hearing impaired relative to the control group.
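For a sense of how loud the 120 decibel exposure is: sound pressure level relates to pressure by SPL = 20·log10(p/p0), with reference pressure p0 = 20 µPa, so 120 dB corresponds to a million times the reference. A quick check (a standard acoustics formula, not from the study itself):

```python
P_REF = 20e-6  # reference sound pressure, 20 micropascals

def spl_to_pressure(db_spl):
    """Convert a sound pressure level in dB SPL to pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)

print(round(spl_to_pressure(120), 3))  # 20.0 (pascals)
```

That is roughly the level of a jack hammer or the front rows of a rock concert, which is exactly the comparison the researchers draw.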
When the flies were tested again a week later, those exposed to noise had recovered normal hearing levels. In addition, when the structure of the flies’ ears was examined in detail, the researchers discovered that nerve cells of the noise-rattled flies showed signs that they had been exposed to stress, including altered shapes of the mitochondria, which are responsible for generating most of a cell’s energy supply. Flies with a mutation making them susceptible to stress not only showed more severe reductions in hearing ability and more prominent changes in mitochondria shape, they still had deficits in hearing 7 days later, when normal flies had recovered.
The effects on the molecular underpinnings of the fruit fly’s ear are the same as those experienced by humans, making the tests generally applicable to people, the researchers note.
“We found that fruit flies exhibit acoustic trauma effects resembling those found in vertebrates, including inducing metabolic stress in sensory cells,” Eberl says. “Our report is the first to report noise trauma in Drosophila and is a foundation for studying molecular and genetic conditions resulting from NIHL.”
“We hope eventually to use the system to look at how genetic pathways change in response to NIHL. Also, we would like to learn how the modification of genetic pathways might reduce the effects of noise trauma,” Christie adds.

A fly’s hearing

UI study shows fruit fly is ideal model to study hearing loss in people

If your attendance at too many rock concerts has impaired your hearing, listen up.

University of Iowa researchers say that the common fruit fly, Drosophila melanogaster, is an ideal model to study hearing loss in humans caused by loud noise. The reason: The molecular underpinnings of its hearing are roughly the same as in people.

As a result, scientists may choose to use the fruit fly to quicken the pace of research into the cause of noise-induced hearing loss and potential treatment for the condition, according to a paper published this week in the online Early Edition of the journal Proceedings of the National Academy of Sciences.

“As far as we know, this is the first time anyone has used an insect system as a model for NIHL (noise-induced hearing loss),” says Daniel Eberl, UI biology professor and corresponding author on the study.

Hearing loss caused by loud noise encountered in an occupational or recreational setting is an expensive and growing health problem, as young people use ear buds to listen to loud music and especially as the aging Baby Boomer generation enters retirement. Despite this trend, “the molecular and physiological models involved in the problem or the recovery are not fully understood,” Eberl notes.

Enter the fruit fly as an unlikely proxy for researchers to learn more about how loud noises can damage the human ear. Eberl and Kevin Christie, lead author on the paper and a post-doctoral researcher in biology, say they were motivated by the prospect of finding a model that may hasten the day when medical researchers can fully understand the factors involved in noise-induced hearing loss and how to alleviate the problem. The study arose from a pilot project conducted by UI undergraduate student Wes Smith, in Eberl’s lab.

“The fruit fly model is superior to other models in genetic flexibility, cost, and ease of testing,” Christie says.

The fly uses its antenna as its ear, which resonates in response to courtship songs generated by wing vibration. The researchers exposed a test group of flies to a loud, 120-decibel tone in the center of the fruit fly's audible range. This over-stimulated their auditory system, similar to exposure at a rock concert or to a jackhammer. Later, the flies' hearing was tested by playing a series of song pulses at a naturalistic volume and measuring the physiological response with tiny electrodes inserted into their antennae. The fruit flies receiving the loud tone were found to have impaired hearing relative to the control group.
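For a sense of scale, the 120-decibel figure can be converted to sound pressure using the standard SPL definition (reference pressure 20 µPa). A quick sketch; the helper name is ours, not from the study:

```python
def spl_to_pressure(db_spl, p_ref=20e-6):
    """Convert sound pressure level (dB SPL) to pressure in pascals.

    Uses the standard definition SPL = 20 * log10(p / p_ref), with the
    conventional reference pressure of 20 micropascals.
    """
    return p_ref * 10 ** (db_spl / 20)

# The 120 dB tone used on the flies corresponds to a pressure amplitude
# one million times the reference threshold:
print(round(spl_to_pressure(120.0), 6))  # 20.0 pascals
```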

When the flies were tested again a week later, those exposed to noise had recovered normal hearing levels. In addition, when the structure of the flies' ears was examined in detail, the researchers discovered that nerve cells of the noise-rattled flies showed signs of stress, including altered shapes of the mitochondria, which are responsible for generating most of a cell's energy supply. Flies with a mutation making them susceptible to stress not only showed more severe reductions in hearing ability and more prominent changes in mitochondrial shape, but also still had hearing deficits seven days later, when normal flies had recovered.

The effects on the molecular underpinnings of the fruit fly's ear are the same as those experienced by humans, making the tests generally applicable to people, the researchers note.

“We found that fruit flies exhibit acoustic trauma effects resembling those found in vertebrates, including inducing metabolic stress in sensory cells,” Eberl says. “Our report is the first to report noise trauma in Drosophila and is a foundation for studying molecular and genetic conditions resulting from NIHL.”

“We hope eventually to use the system to look at how genetic pathways change in response to NIHL. Also, we would like to learn how the modification of genetic pathways might reduce the effects of noise trauma,” Christie adds.

Filed under fruit flies hearing noise-induced hearing loss auditory system neuroscience science

74 notes

Administering the Natural Substance Spermidine Stopped Dementia

Scientists from Freie Universität Berlin and the University of Graz Have Shown That Feeding Fruit Flies with Spermidine Suppresses Age-dependent Memory Impairment

Age-induced memory impairment can be suppressed by administration of the natural substance spermidine. This was found in a recent study conducted by Prof. Dr. Stephan Sigrist from Freie Universität Berlin and the NeuroCure Cluster of Excellence and Prof. Dr. Frank Madeo from Karl-Franzens-Universität Graz. The two biologists were able to show that the endogenous substance spermidine triggers a cellular cleansing process, which is followed by an improvement in the memory performance of older fruit flies. At the molecular level, memory processes in animal organisms such as fruit flies and mice are similar to those in humans. The work by Sigrist and Madeo has potential for developing substances for treating age-related memory impairment. The study was first published in the online version of Nature Neuroscience.

Aggregated proteins are potential candidates for causing age-related dementia. With increasing age, the proteins accumulate in the brains of fruit flies, mice, and humans. In 2009 Madeo's group in Graz already found that the spermidine molecule has an anti-aging effect by setting off autophagy, a cleaning process at the cellular level. Protein aggregates and other cellular waste are delivered to lysosomes, the digestive apparatus in cells, and degraded.

Feeding the fruit flies spermidine significantly reduced the amount of protein aggregates in their brains, and their memories improved to juvenile levels. This can be measured because flies can learn through classical Pavlovian conditioning and adjust their behavior accordingly.

In humans, memory capacity decreases beginning around the age of 50. This loss accelerates with increasing age. Due to increasing life expectancy, age-related memory impairment is expected to increase drastically. The spermidine concentration decreases with age in flies as in humans. If it were possible to delay the onset of age-related dementia by giving individuals spermidine as a food supplement, it would be a great breakthrough for individuals and for society. Patient studies are the next step for Sigrist and Madeo.

(Source: fu-berlin.de)

Filed under spermidine fruit flies memory impairment dementia aging neuroscience science

364 notes

Fetus in womb learns language cues before birth, study finds 
Watch your mouth around your unborn child – he or she could be listening in. Babies can pick up language skills while they’re still in the womb, Finnish researchers say.

Fetuses exposed to fake words after week 29 in utero were able to distinguish them after being born, according to new research in the Proceedings of the National Academy of Sciences.
"Prenatal experiences have a remarkable influence on the brain’s auditory discrimination accuracy, which may support, for example, language acquisition during infancy," the authors wrote in their study. 
As revealed by the allure of the so-called Mozart Effect – the idea that exposing the fetus to classical music earns kids extra IQ points in spatial reasoning down the line – parents are constantly looking for ways to give their children an intelligence advantage.
That's true even when the research behind those parenting tactics is too narrow to support such broad conclusions, or remains in question (the Mozart Effect, for example, was dismissed as "crap" by one scientist).
Nonetheless, scientists have discovered plenty of evidence that what’s heard in utero can make a lasting impression. Fetuses respond differently to native and nonnative vowels, and newborns cry with their native language prosody (a combination of rhythm, stress and intonation). Researchers led by Eino Partanen at the University of Helsinki wanted to see what other language cues a fetus might pick up in the womb.
For the experiment, Finnish mothers were asked to play a CD with a pair of four-minute tracks of music punctuated by a fake word: tatata. On occasion the vowel was changed – tatota – and in other instances the pitch was shifted, with the middle syllable of tatata raised or lowered by 8% or 15%. The false word and its variants appeared hundreds of times as the tracks played, and the mothers were asked to play the CD five to seven times per week.
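For scale, those percentage pitch shifts can be expressed in musical semitones via the standard frequency-ratio formula (12 · log2(ratio)); a small sketch, with a helper name of our own choosing:

```python
import math

def shift_in_semitones(percent_change):
    """Express a relative pitch change (e.g. +8%) in semitones.

    A frequency ratio r corresponds to 12 * log2(r) semitones.
    """
    return 12 * math.log2(1 + percent_change / 100)

# The deviant syllables were shifted by 8% or 15% in either direction:
print(round(shift_in_semitones(8), 2))   # 1.33 -- a bit over a semitone
print(round(shift_in_semitones(15), 2))  # 2.42 -- about two and a half semitones
```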
Then, after several weeks of exposure to the fake word, the researchers had to determine whether all this in-utero training had somehow stuck.
The researchers were relying on a phenomenon called mismatch response: a flash of neural activity when the brain picks up on something off, something not quite right – such as when the word tatata is suddenly tatota. If that flash goes off, it means that something doesn’t make sense compared to what the brain has already learned.
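In analysis terms, a mismatch response is typically computed as a difference wave: average the responses to the frequent standard, average the responses to the rare deviant, and subtract. A minimal illustration on synthetic, entirely invented data (not the study's):

```python
import random

random.seed(0)  # reproducible synthetic data

def average(trials):
    """Average equal-length trials sample by sample."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def make_trial(deviant):
    """Fake 10-sample 'EEG' trial; deviants get an extra deflection."""
    base = [random.gauss(0.0, 0.1) for _ in range(10)]
    if deviant:
        for i in range(4, 7):
            base[i] += 1.0  # the brain's 'surprise' signal
    return base

standards = [make_trial(False) for _ in range(80)]
deviants = [make_trial(True) for _ in range(20)]

# Difference wave: deviant average minus standard average.
mmr = [d - s for d, s in zip(average(deviants), average(standards))]
print(max(mmr) > 0.5)  # True: a clear deflection where deviants differed
```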
The scientists reasoned that if the flash went off the first time the newborns heard the modified words (the vowel or pitch variants) after birth, it would mean they'd been paying attention while in the womb.
They tested the mismatch response once the babies were born by attaching electrodes and studying their brain activity.
Sure enough, the newborns that had been trained in the womb had a response roughly four times stronger to the pitch change (tatata with an altered middle-syllable pitch) than untrained newborns did. (Both trained and untrained babies picked up the vowel distinction between tatata and tatota.)
The findings could mean it’s possible to give babies a little language leg-up before they ever say a word — particularly the children who may need it most.
"It might be possible to support early auditory development and potentially compensate for difficulties of genetic nature, such as language impairment or dyslexia," the authors wrote.
But, the scientists point out, it could mean that babies are also vulnerable to harmful acoustic effects – “abnormal, unstructured, and novel sound stimulation” – an idea that will also require further study. Until then, perhaps it’s best not to hang around any noisy construction sites while pregnant.

Filed under language language acquisition brain activity fetus womb neuroscience science

148 notes

Striking Patterns: Skill for Forming Tools and Words Evolved Together

When did humans start talking? There are nearly as many answers to this perplexing question as there are researchers studying it. A new brain imaging study claims to support the hypothesis that language emerged long before Homo sapiens and coevolved with the invention of the first finely made stone tools nearly 2 million years ago. However, some experts think it’s premature to draw sweeping conclusions.
Unlike ancient bones and stone tools, language does not fossilize. Researchers have to guess about its origins based on proxy indicators. Does painting cave walls indicate the capacity for language? How about the ability to make a fancy tool? Yet, in recent years, scientists have made some progress. A series of brain imaging studies by Dietrich Stout, an archaeologist at Emory University in Atlanta, and Thierry Chaminade, a cognitive neuroscientist at Aix-Marseille University in France, have shown that toolmaking and language use similar parts of the brain, including regions involved in manual manipulations and speech production. Moreover, the overlap is greater the more sophisticated the toolmaking techniques are. Thus, there was little overlap when modern-day flint knappers were making stone tools using the oldest known techniques, dated to 2.5 million years ago and called the Oldowan technology. But when knappers used a more sophisticated approach, called Acheulean technology and dating to as much as 1.75 million years ago, the parallels between toolmaking and language were more evident. Stout and Chaminade have used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans, although not on the same subjects at the same time.
In the new work, published online today in PLOS ONE, archaeologist Natalie Uomini and experimental psychologist Georg Meyer, both at the University of Liverpool in the United Kingdom, attempted to advance these earlier studies in several ways. They applied a technique called functional transcranial Doppler ultrasonography (fTCD), which measures blood flow to the brain’s cerebral cortex and which—unlike fMRI and PET—is highly portable and can be used on subjects in the field through a device attached to their heads (see video). The fTCD approach makes it much easier to monitor subjects’ brains during vigorous activity, such as the somewhat violent motions that are required to make stone tools. Uomini and Meyer are also the first to study both toolmaking and language tasks in the same subjects.
The researchers recruited 10 expert flint knappers and gave them two different tasks. In the first, the knappers crafted an Acheulean hand ax, a symmetrical tool that requires considerable planning and skill. The procedure involves shaping a flint core with another stone called a hammerstone. While wearing the fTCD monitor, the knappers worked on the tool for periods of about 30 seconds each, interspersed with control periods of about 20 seconds in which they simply struck the core with the hammerstone without trying to make a tool.
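The timing structure described here — 30-second task blocks alternating with 20-second controls, with analysis focused on the opening seconds of each block — amounts to simple windowing over the recorded velocity trace. A rough sketch with an invented sampling rate (the slicing logic only, not the authors' actual pipeline):

```python
FS = 10                    # samples per second (illustrative value)
TASK_S, CTRL_S = 30, 20    # block lengths from the protocol, in seconds
CYCLE = (TASK_S + CTRL_S) * FS   # samples per task+control cycle

def early_task_windows(signal, n_blocks, window_s=10):
    """First `window_s` seconds of each task block, assuming the trace
    starts at the onset of the first task block."""
    win = window_s * FS
    return [signal[b * CYCLE : b * CYCLE + win] for b in range(n_blocks)]

# Dummy recording: 4 full cycles of a flat 'velocity' trace.
recording = [1.0] * (4 * CYCLE)
windows = early_task_windows(recording, 4)
print(len(windows), len(windows[0]))  # 4 100
```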
In the second task, the knappers were asked to silently think up words beginning with a given letter. The control periods consisted of simply resting quietly and not thinking of words.
The team found that the pattern of blood flow changes in the brain during the critical first 10 seconds of each experimental period—when the knappers were strategizing about how to shape the core or thinking up their first words—was very similar, again involving areas of the brain implicated in manual manipulations and language. Moreover, although there were some variations in the patterns between the 10 knappers, the toolmaking and language patterns within each individual were very closely aligned—suggesting, the team concludes, that the same brain areas are recruited in both tasks.
The results, Uomini and Meyer argue, support earlier hypotheses that language and toolmaking coevolved, perhaps beginning as early as 1.75 million years ago. This doesn’t necessarily mean that early humans were talking in the same rapid-fire way that we do today, Uomini points out, but that “the circuits for both activities were there early on.”
Stout calls the new study “exciting work” that provides “one more piece of evidence supporting a link between stone-tool making and language evolution.” Yet a number of questions remain, he says, such as whether the correlation is between the motor skills involved in making tools and in making the sounds of speech, or whether toolmaking and language share higher cognitive functions such as those used in symbolic behavior.
That question is critical, some researchers say, because the knappers in this study and the ones that Stout conducted probably used a technique known as the Late Acheulean, dating from about 500,000 years ago, which put a much greater emphasis on symmetry and aesthetic considerations than did the earliest Acheulean, dating from 1.75 million years ago. “There is an enormous difference” between these varieties of Acheulean toolmaking, says Michael Petraglia, an archaeologist at the University of Oxford in the United Kingdom, who adds that “future experimental studies should thus examine the range of techniques and methods used.”
Thus the new work is “consistent with the hypothesis” of coevolution between language and toolmaking, “but not proof of it,” says Michael Corballis, a psychologist at the University of Auckland in New Zealand. “It is possible that language itself emerged much later, but was built on circuits established during the Acheulean” period.
Thomas Wynn, an archaeologist at the University of Colorado, Colorado Springs, is even more cautious about the results. He thinks that the fTCD technique, which measures blood flow to large areas of the cerebral cortex but does not have as high a resolution as fMRI or PET, “is a crude measure, even for brain imaging techniques.” As a result, Wynn says, he is “far from convinced” that the study has anything new to say about language evolution.

Filed under language toolmaking tool use brain activity blood flow evolution neuroscience psychology science

78 notes

Learning how the brain takes out its trash may help decode neurological diseases
Imagine that garbage haulers don't exist. Slowly, trash accumulates in our offices and homes; it clogs the streets, damages our cars, causes illness, and renders normal life impossible.
Garbage in the brain, in the form of dead cells, must also be removed before it accumulates, because it can cause both rare and common neurological diseases, such as Parkinson’s. Now, University of Michigan researchers are a leap closer to decoding the critical process of how the brain clears dead cells, said Haoxing Xu, associate professor in the U-M Department of Molecular, Cellular and Developmental Biology.
A new U-M study identified two critical components of this cell-clearing process: an essential calcium channel protein, TRPML1, that helps the so-called garbage-collecting cells, called macrophages or microglia, clear out the dead cells; and a lipid molecule, which helps activate TRPML1 and the process that allows the macrophages to remove these dead cells.
Moreover, the Xu lab identified a synthetic chemical compound that can activate TRPML1. Because this chemical compound ultimately helps activate this cell-clearing process, it provides a drug target that could help combat these neurological diseases.
"This is clearly a drug target," Xu said. "What this paper picks out is exactly what is going wrong in this process."
Scientists began by looking at a very rare neurodegenerative disease called Type IV Mucolipidosis, a childhood neurodegenerative disease characterized by multiple disabilities.
Xu's group found that lack of TRPML1 function contributes to these neurodegenerative conditions; TRPML1 is the channel through which calcium is released from the lysosome—the cell's recycling center—into the macrophage cell. If this calcium channel doesn't work, calcium cannot be released, and dead cells aren't removed, Xu said. The synthetic chemical compound stimulates the TRPML1 calcium channel to release the calcium into the cell.
Further, dead cells “are bad for live cells,” Xu said. An excess of dead cells leads the macrophage cells to also kill healthy neurons necessary for neurological function, which in turn can lead to these neurodegenerative diseases.
There are many neurodegenerative diseases, some very rare and some more common, such as Parkinson’s and ALS. The common thread among them is the dearth of live and functioning neurons, which prevents the neurological system from carrying out normal functions, Xu said.
Thus, identifying a lipid molecule and chemical compounds that stimulate proper TRPML1 function could revolutionize the treatment of these neurodegenerative diseases.
The next step in Xu's research is to test how these general observations apply to neurological diseases and whether the compound is effective in animal models of neurological diseases.
The paper, “A TRP channel in the lysosome regulates large particle phagocytosis via focal exocytosis,” appeared Aug. 29 online in Developmental Cell.

Learning how the brain takes out its trash may help decode neurological diseases

Imagine that garbage haulers don’t exist. Slowly, the trash accumulates in our offices, our homes, it clogs the streets and damages our cars, causes illness and renders normal life impossible.

Garbage in the brain, in the form of dead cells, must also be removed before it accumulates, because it can cause both rare and common neurological diseases, such as Parkinson’s. Now, University of Michigan researchers are a leap closer to decoding the critical process of how the brain clears dead cells, said Haoxing Xu, associate professor in the U-M Department of Molecular, Cellular and Developmental Biology.

A new U-M study identified two critical components of this cell clearing process: an essential calcium channel protein, TRPML1, that helps the so-called garbage collecting cells, called microphages or microglia, to clear out the dead cells; and alipid molecule, which helps activate TRPML1 and the process that allows the microphages to remove these dead cells.

Moreover, the Xu lab identified a synthetic chemical compound that can activate TRPML1. Because this chemical compound ultimately helps activate this cell-clearing process, it provides a drug target that could help combat these neurological diseases.

"This is clearly a drug target," Xu said. "What this paper picks out is exactly what is going wrong in this process."

Scientists began by looking at a very rare neurodegenerative disease called Type IV Mucolipidosis, a childhood neurodegenerative disease characterized by multiple disabilities.

Xu’s group found that lack of TRPML1 function, which is the channel through which calcium is released from the lysosome—the cell’s recycling center—into the microphage cells, contributes to these neurodegenerative conditions. If this calcium channel doesn’t work, calcium cannot be released, and dead cells aren’t removed, Xu said. The synthetic chemical compound stimulates the TRPML1 calcium channel to release the calcium into the cell.

Further, dead cells “are bad for live cells,” Xu said. An excess of dead cells leads the macrophage cells to also kill healthy neurons necessary for neurological function, which in turn can lead to these neurodegenerative diseases.

There are many neurodegenerative diseases, some very rare and some more common, such as Parkinson’s and ALS. The common thread among them is the dearth of live and functioning neurons, which prevents the neurological system from carrying out normal functions, Xu said.

Thus, identifying a lipid molecule and chemical compounds that stimulate proper TRPML1 function could revolutionize the treatment of these neurodegenerative diseases.

The next step in Xu’s research is to test how these general observations apply to neurological diseases and whether the compound is effective in animal models of those diseases.

The paper, “A TRP channel in the lysosome regulates large particle phagocytosis via focal exocytosis,” appeared Aug. 29 online in Developmental Cell.

Filed under neurological diseases macrophages microglia calcium channel lysosome neuroscience science

147 notes

Left brain, right brain: Different patterns of cortical interaction
The human brain is divided into two hemispheres – left and right – in which neural functions are said to be lateralized. (For example, language and motor abilities are associated with the left hemisphere, and visuospatial attention with the right.) Although hemispheric lateralization is generally thought to benefit brain function, relationships between lateralization degree and functioning levels have not been quantified. Recently, however, scientists at the National Institutes of Health in Bethesda, MD demonstrated that the two hemispheres have qualitatively different biases: the left prefers to interact with itself – especially for regions associated with language and fine motor coordination – while the right visuospatial and attentional processing regions interact with both hemispheres. Moreover, the researchers provided direct evidence that an individual’s degree of  lateralization is associated with enhanced cognitive ability.
Dr. Stephen J. Gotts spoke with Medical Xpress about the research that he, Dr. Hang Joon Jo, Dr. Alex Martin, and colleagues conducted – and the challenges they faced in doing so. “One of the tricky things about studying lateralization of function is that it’s hard to know exactly which points in the two hemispheres are correspondent,” Gotts tells Medical Xpress. This is the case, he explains, because while the hemispheres are roughly symmetrical, there are idiosyncratic differences in cortical folding between left and right for any given individual. In addition, he notes, the exact location of particular folds (known as gyri) varies across individuals.

"Neuroimaging studies have historically adopted a couple of different approaches to deal with this situation," Gotts explains. Some studies, he illustrates, transform the geometry of the brain for each individual into a so-called standard three-dimensional coordinate reference brain – for example, the Talairach-Tournoux atlas. This allows them to estimate symmetrical corresponding points by flipping the left/right x-coordinate about zero. However, he acknowledges that this technique is prone to error by as much as 1-2 centimeters in some brain locations.

"Another approach," Gotts continues, "has been to compare the magnitude of the neural response in each hemisphere during the performance of a task – for example, a language comprehension task – and calculate a quantitative laterality index to enumerate the extent of lateralization. While this approach makes a lot of sense, and doesn’t necessarily require one to solve the correspondence problem, it will be strictly limited to the brain areas that can be activated by the task.” In other words, if an area isn’t engaged by the task, it’s hard to know whether or not it’s lateralized. Moreover, it requires many different tasks to be selected in order to address the spatial scope of the entire brain – and Gotts points out that this hasn’t been carried out to date.

"Our solution addressed the correspondence problem more directly," Gotts says. The scientists first flattened out a model of each individual’s folded cortex onto a smooth surface, spatially warping and stretching each individual brain so that each cortical landmark – that is, gyrus or sulcus – was aligned across individuals. They then found corresponding points in the two hemispheres by their position on this standardized, flattened surface relative to the full set of cortical landmarks. (Sulci are depressions or fissures in the surface of the brain surrounding the gyri.) "Applying the same spatial warping to the functional data then allowed us to compare ongoing, resting brain dynamics between the hemispheres at every position on the cortical surface," Gotts explains.
Utilizing a more traditional, task-based approach to measuring laterality has another downside: researchers typically assess the average magnitude of neural response to a task condition across many individual stimulus events, meaning that dynamical interactions of brain areas aren’t as easily assessed. “It’s not impossible,” notes Gotts, “but to eliminate the effects of stimulus artifacts on connectivity estimates, it requires particular choices of neuroimaging task timing – and it’s been done a lot less often than magnitude estimation. The qualitative distinction that we observed in our study between how the hemispheres interact with one another really requires the examination of time-varying neural responses and their co-variation. I don’t think that you’d be able to anticipate this finding solely from examining average activity levels.”
With respect to the correlations with behavioral ability, Gotts points out that there are probably many different tasks that one could have chosen. “Our choice was to use tasks that have been well-studied and well-normed across individuals as part of the Wechsler intelligence scales – specifically, Vocabulary, which is correlated with many aspects of verbal abilities, and Block Design and Matrix Reasoning, which index aspects of visuospatial processing. These obviously aren’t the only possible choices, and it would be nice to follow up this work with a more thorough battery of tasks that would allow us to examine more detailed aspects of language, fine motor control, and visuospatial abilities.”

It is important to point out, Gotts adds, that there have been several previous task-based studies that have examined the relationships between lateralization magnitude and cognitive ability, with some reporting a direct relationship as their current study shows. “The main contribution of our study is to demonstrate, at a whole-brain scope, the qualitative differences between the hemispheres in their within- and between-hemisphere interactions. The correlations with behavioral ability really hammer this distinction home, since one needed to use the appropriate metric – that is, segregation versus integration – to see these correlations.”
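The segregation-versus-integration metric Gotts mentions can be illustrated with resting-state time series: for each hemisphere, compare the strength of correlations among its own regions against its correlations with regions in the opposite hemisphere. A hedged sketch using synthetic data (the study’s actual pipeline involves surface-based alignment and preprocessing not shown here, and the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic resting-state time series: rows = time points,
# columns = regions (first half "left hemisphere", second half "right").
n_time, n_per_hemi = 200, 6
data = rng.standard_normal((n_time, 2 * n_per_hemi))

corr = np.corrcoef(data.T)               # region-by-region correlation matrix
left = slice(0, n_per_hemi)
right = slice(n_per_hemi, 2 * n_per_hemi)

# Mean within-hemisphere correlation (off-diagonal entries only)
# versus mean between-hemisphere correlation, for the left hemisphere.
within = corr[left, left]
within_mean = within[~np.eye(n_per_hemi, dtype=bool)].mean()
between_mean = corr[left, right].mean()

# A simple segregation score: positive means the hemisphere interacts
# more with itself than with the other hemisphere.
segregation = within_mean - between_mean
```

On real data, the study’s finding corresponds to language and fine-motor regions of the left hemisphere scoring high on segregation, while right-hemisphere visuospatial regions score high on integration.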

One of the interesting things about the distinction between the hemispheres that the scientists observed, Gotts notes, is that there are implicit hints about it in the literatures on individual cognitive domains. “When people discuss language lateralization, the notion is more like classic modularity: language is operating in the left hemisphere in a manner somewhat isolated, or segregated, from the right hemisphere. This notion may come in large part from the neuropsychological literature, which shows that brain damage to the left hemisphere is much more likely to cause aphasia than damage to the right hemisphere in right-handed individuals.”

In contrast, Gotts continues, visuospatial processing and attention involve coordinated processing across the entire visual field, with the left and right halves of visual space represented in the right and left occipital cortex, respectively. “Visual processing over the entire visual field therefore requires inter-hemispheric integration, and that integration relates to visuospatial attentional control, which is more right-hemisphere lateralized. Our findings highlight this implicit distinction, making it more explicit and showing that the respective cognitive abilities benefit from it. As a field, I think that we’ve always assumed that hemispheric lateralization was somehow beneficial for function, but very few brain imaging studies have even examined the issue directly, much less at a whole-brain scope across the range of cognitive domains known to be lateralized.”

Moving forward, says Gotts, one of the key outstanding questions is: What is the developmental time course of these hemispheric differences? That is, does the left hemisphere bias for self-interaction exist prior to skilled motor control and language function – or does it emerge later as a consequence of these functions? “If it were to exist prior to handedness and language acquisition in the first few months of age, or even in utero, then the bias could plausibly serve as the cause of the preferential left-lateralization of these functions. One could even try to predict the degree of lateralization present later in life during various tasks, or when at rest, from estimates measured early in life.”

A similar set of questions exists for the domain of visuospatial function and the right-hemisphere bias for bilateral interaction, Gotts adds. “Because our method for assessing lateralization only requires measuring resting brain activity and not the performance of complex cognitive tasks, these experiments are actually possible to perform with young infants in a reasonably parallel manner.”

According to Gotts, another crucial question for the field of human neuroscience is: What changed from monkeys to apes to humans with respect to lateralization? “Several decades ago, there was the suggestion that monkeys exhibit hand preferences like the ones humans exhibit. After much research, it became clear that monkeys are more symmetrical in their brain control of both motor and visuospatial function. However, apes – such as chimpanzees – appear to be a different story. They appear to exhibit some hand preference lateralization with accompanying brain lateralization, although perhaps not to the extremes to which humans do.” (Roughly 80-90% of human males and females are right-handed.) “As with infants, resting brain scans can be performed on monkeys and chimpanzees in a manner similar to those conducted on adult humans.”

Regarding other areas of research that might benefit from this study, Gotts thinks it would be possible to apply their methods for assessing lateralization to a range of psychiatric disorders, such as autism and schizophrenia. “There’s some suggestion in the literature that lateralization of function is altered in these disorders. Is lateralization qualitatively different from the hemispheric biases we demonstrate for typical individuals – or do they differ in magnitude? We’d also like to understand more about the relationship between handedness and cognitive ability.”
Being left-handed, he illustrates, is associated with a more bilateral representation of language – but this doesn’t appear to mandate poorer cognitive abilities in left-handed individuals. “It may be that in left-handed individuals a different optimal weighting or balance of power between the hemispheres is achieved which differs from what we’ve observed in right-handed males,” Gotts concludes. “Our methods could certainly be applied to examine this set of issues.”

Filed under brain lateralization brain hemispheres cognitive ability psychology neuroscience science
