Neuroscience

Articles and news from the latest research reports.

Accused of complicity in Alzheimer’s, amyloid proteins may be getting a bad rap
Amyloids — clumps of misfolded proteins found in the brains of people with Alzheimer’s disease and other neurodegenerative disorders — are the quintessential bad boys of neurobiology. They’re thought to muck up the seamless workings of the neurons responsible for memory and movement, and researchers around the world have devoted themselves to devising ways of blocking their production or accumulation in humans.
But now a pair of recent research studies from the Stanford University School of Medicine sets a solid course toward rehabilitating the reputation of the proteins that form these amyloid tangles, or plaques. In the process, they appear poised to turn the field of neurobiology on its head.
The first study, published in August, showed that an amyloid-forming protein called beta amyloid, which is strongly implicated in Alzheimer’s disease, could reverse the symptoms of a multiple-sclerosis-like neurodegenerative disease in laboratory mice.
The second study, published April 3 in Science Translational Medicine, extends the finding to show that small portions of several notorious amyloid-forming proteins (including well-known culprits like tau and prion proteins) can also quickly alleviate symptoms in mice with the condition — despite the fact that the fragments can and do form the long tendrils, or fibrils, previously thought harmful to nerve health.
“What we’re finding is that, at least under certain circumstances, these amyloid peptides actually help the brain,” said Lawrence Steinman, MD, professor of neurology and neurological sciences and of pediatrics. “This really turns the ‘amyloid-is-bad’ dogma upside down. It will require a shift in people’s fundamental beliefs about neurodegeneration and diseases like multiple sclerosis, Alzheimer’s and Parkinson’s.”
Steinman is a noted expert in multiple sclerosis whose research led to the development of natalizumab (marketed as Tysabri), a potent treatment for the disease.
Taken together, the studies begin to suggest the radical new idea that full-length, amyloid-forming proteins may in fact be produced by the body as a protective, rather than destructive, force. In particular, Steinman’s study shows that these proteins may function as molecular chaperones, escorting and removing from sites of injury specific molecules involved in inflammation and inappropriate immune responses.
Steinman, who is also the medical school’s George A. Zimmermann Professor, is the corresponding author of the research. Jonathan Rothbard, PhD, a senior research scientist in the Steinman laboratory, is the senior author; postdoctoral scholar Michael Kurnellas, PhD, is the lead author.
Although the specific findings of Steinman’s two studies are surprising, there have been inklings from previous research that amyloid-forming proteins may not be all bad. In particular, inhibiting, or knocking out, the expression of several of the proteins in the mouse models of multiple sclerosis — a technique that should block the course of the disease if these proteins are the cause — instead worsened the animals’ symptoms.
And there’s the fact that these so-called dangerous amyloid-forming molecules are surprisingly prevalent. “We know the body makes a lot of amyloid-forming proteins in response to injury,” said Steinman. “I’m doubtful that that’s done to produce more harm. For example, the prion protein is found in every cell in our bodies. What is it doing? It’s possible that any therapeutic maneuver to remove all of these proteins could interfere with their natural function.”
Understanding how amyloids form requires an understanding of the biology of proteins, which are essentially strings of smaller components called amino acids attached end to end. Once they’re made, these protein strings twist and fold into specific three-dimensional shapes that fit together like keys and locks to do the work of the cell.
A misfolded protein is likely to be unable to carry out its duties and must be disposed of by the body’s cellular waste-management system. Amyloid-forming proteins (of which there are around 20), however, don’t go quietly, if at all. Instead, they initiate a chain reaction with other misfolded proteins — forming long, insoluble strands called fibrils that mat together to form amyloid clumps. These clumps appear consistently in the brains of people with neurodegenerative diseases like Alzheimer’s and multiple sclerosis, but not in the brains of healthy people.
Although these clumps are thought to be detrimental to nerve cells, it’s not entirely clear how they cause harm. One possibility is the ability of the fibrils to form cylindrical pores that could disrupt the cellular membrane and interfere with the orderly flow of ions and molecules used by the cells to communicate and transmit nerve signals. Regardless, their very presence suggests a diagnosis of neurodegeneration to many clinicians, including — until recently — Steinman.
“We began this research because these molecules are present in the brains of people with multiple sclerosis,” said Steinman. “We expected to show that the presence of beta amyloid made the disease worse in laboratory animals. Instead, we saw a great deal of benefit.”
Intrigued by the results of their first study, the researchers next tested the effect of small, six-amino-acid portions of several amyloid-forming proteins, including beta amyloid; the fragments appeared likely to share a common three-dimensional structure. They found that nearly all of the tiny protein molecules, or hexamers, could also temporarily reverse the symptoms of multiple sclerosis in the mice (when the treatment was stopped, the mice developed signs of the condition again within a few days).
The researchers noted, however, that the curative effect of the hexamers was linked to their ability to form fibrils similar, but not identical, to their longer parent molecules. For example, these simplified hexamer fibrils are more easily formed and broken apart than those composed of whole proteins. They are also thought not to be able to form the cylindrical pores that might damage cell membranes. Finally, the hexamer fibrils appear to inhibit the formation of fibrils from full-length proteins — perhaps by blocking, or failing to promote, the chain reaction that initiates fibril formation.
When Steinman and his colleagues mixed the fibril-forming hexamers with blood plasma from three people with multiple sclerosis, they found that the fibrils bound to and removed from solution many potentially damaging molecules involved in inflammation and the immune response.
“These hexamer fibrils appear to be working to remove dangerous chemicals from the vicinity of the injury,” said Steinman.
The researchers are eager to pursue the use of these small hexamers as therapies for neurodegenerative diseases like multiple sclerosis. Much research is still needed, but Steinman is hopeful.
“The lessons we learn from our study of amyloid-forming proteins in multiple sclerosis could be helpful for stroke and brain trauma, as well as for Alzheimer’s,” said Steinman. “We’re gaining insight into how current therapeutic approaches may be affecting the body, and beginning to understand the nuances necessary to design a successful treatment. Although it will take time, we’re determined to move promising results out of the laboratory and into the clinic as quickly as possible.”
(Image: Wikimedia Commons)

Filed under neurodegenerative diseases neurodegeneration MS proteins beta amyloid alzheimer's disease neuroscience science

Laser Light Zaps Away Cocaine Addiction
By stimulating one part of the brain with laser light, researchers at the National Institutes of Health (NIH) and the Ernest Gallo Clinic and Research Center at UC San Francisco (UCSF) have shown that they can wipe away addictive behavior in rats – or conversely turn non-addicted rats into compulsive cocaine seekers.
“When we turn on a laser light in the prelimbic region of the prefrontal cortex, the compulsive cocaine seeking is gone,” said Antonello Bonci, MD, scientific director of the intramural research program at the NIH’s National Institute on Drug Abuse (NIDA), where the work was done. Bonci is also an adjunct professor of neurology at UCSF and an adjunct professor at Johns Hopkins University.
Described this week in the journal Nature, the new study demonstrates the central role the prefrontal cortex plays in compulsive cocaine addiction. It also suggests a new therapy that could be tested immediately in humans, said Billy Chen of NIDA, the lead author of the study.
Any new human therapy would not rely on lasers, but most likely on electromagnetic stimulation applied from outside the scalp, in particular a technique called transcranial magnetic stimulation (TMS). Clinical trials are now being designed to test whether this approach works, Chen added.
The High Cost of Cocaine Abuse
Cocaine abuse is a major public health problem in the United States today, and it places a heavy toll on society in terms of lost job productivity, lost earnings, cocaine-related crime, incarcerations, investigations, and treatment and prevention programs.
The human toll is even greater, with an estimated 1.4 million Americans addicted to the drug. It is frequently the cause of emergency room visits – 482,188 in 2008 alone – and it is a top cause of heart attacks and strokes for people under 35.
One of the hallmarks of cocaine addiction is compulsive drug taking – the loss of ability to refrain from taking the drug even if it’s destroying one’s life.
What makes the new work so promising, said Bonci, is that Chen and his colleagues were working with an animal model that mimics this sort of compulsive cocaine addiction. Like human addicts, the animals are more likely to make bad decisions and to keep taking cocaine even when they have learned to associate it with harm.
Electrophysiological studies involving these rats have shown that they have extremely low activity in the prefrontal cortex – a brain region fundamental for impulse control, decision making and behavioral flexibility. Similar studies that imaged the brains of humans have shown the same pattern of low activity in this region in people who are compulsively addicted to cocaine.
Altering Brain Activity with a Laser
To test whether altering the activity in this brain region could affect addiction, Chen and his colleagues employed a technique called optogenetics to switch that activity on and off with laser light.
First they took light-sensitive proteins called rhodopsins and used genetic engineering to insert them into neurons in the rats' prefrontal cortex. Shining laser light tuned to the rhodopsins onto this region let the researchers switch the nerve cells on or off at will.
Turning the cells on wiped out the compulsive behavior, while switching them off turned non-addicted rats into compulsive cocaine seekers, the researchers found.
What’s exciting, said Bonci, is that there is a way to induce a similar activation of the prelimbic cortex in people through a technique called transcranial magnetic stimulation (TMS), which applies an external electromagnetic field to the brain and has been used as a treatment for symptoms of depression.
Bonci and his colleagues plan to begin clinical trials at NIH in which they will use this technique a few sessions a week to stimulate the prefrontal cortex in people who are addicted to cocaine and see if they can restore activity to that part of the brain and help them avoid taking the drug.

Filed under cocaine cocaine addiction addictive behavior prefrontal cortex transcranial magnetic stimulation optogenetics neuroscience science

Anything you can do I can do better: Neuromolecular foundations of the superiority illusion
The existential psychologist Rollo May wrote that "depression is the inability to construct a future," while Lionel Tiger stated that "optimism has been central to the process of human evolution." These deceptively simple phrases are remarkable in their depth and in the connections they draw among philosophy, psychology and neuroscience. Both capture an essential aspect of human nature: our ability to imagine and plan for the future is one of the most striking features of our species, and the inability to exercise this faculty is profoundly damaging to our happiness and sense of self.
Two related concepts are depressive realism, the assertion that people with depression actually have a more accurate perception of reality, and its counterpoint, the superiority illusion. The superiority illusion is a cognitive bias by which individuals, relative to others, overestimate their positive qualities and abilities (such as intelligence, cognitive ability, and desirable traits) and underestimate their negative qualities. (Other cognitive biases include the optimism bias and the illusion of control.) Although mathematically untenable (given a normal population distribution, most people cannot be above average), the superiority illusion is a positive belief that promotes mental health, and people with depression appear less susceptible to it.
Recently, scientists at the National Institute of Radiological Sciences (Chiba, Japan), the Japan Science and Technology Agency (Saitama) and the Stanford University School of Medicine used resting-state functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) to study the default states of the neural and molecular systems that generate the superiority illusion. They showed that resting-state functional connectivity between the frontal cortex and the striatum, regulated by inhibitory dopaminergic neurotransmission, determines individual levels of the superiority illusion.
The scientists state that their findings help clarify how the superiority illusion is biologically determined and identify potential molecular and neural targets for treating depressive realism.
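The "most people cannot be above average" point is easy to verify with a quick simulation: in a symmetric distribution such as the normal, only about half the population sits above the mean, so any survey in which a large majority rate themselves above average is mathematically inconsistent. A minimal sketch (the 70% self-rating figure is an illustrative assumption, not a number from the study):

```python
import random
import statistics

random.seed(42)

# Simulate a normally distributed trait, e.g. test scores with mean 100, SD 15
scores = [random.gauss(100, 15) for _ in range(100_000)]

mean_score = statistics.fmean(scores)
frac_above = sum(s > mean_score for s in scores) / len(scores)

self_rated_above = 0.70  # illustrative: share who *believe* they are above average

print(f"actually above the mean:   {frac_above:.2%}")  # close to 50%
print(f"self-rated above average:  {self_rated_above:.0%}")
```

The gap between the simulated ~50% and any larger self-rated share is the footprint of the bias the study set out to explain.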

Filed under superiority illusion cognitive bias frontal cortex evolution psychology neuroscience science

New Genetic Evidence Suggests a Continuum Among Neurodevelopmental and Psychiatric Disorders
A paper published this month in the medical journal The Lancet Neurology suggests that a broad spectrum of developmental and psychiatric disorders, ranging from autism and intellectual disability to schizophrenia, should be conceptualized as different manifestations of a common underlying condition, "developmental brain dysfunction," rather than as completely independent disorders with distinct causes.
In “Developmental Brain Dysfunction: Revival and Expansion of Old Concepts Based on New Genetic Evidence,” the authors make two key points:
  • Developmental disorders (such as autism and intellectual disability) and psychiatric disorders (such as schizophrenia and bipolar disorder), while considered clinically distinct, actually share many of the same underlying genetic causes. This is an example of "variable expressivity": the same genetic variant results in different clinical signs and symptoms in different individuals.
  • When quantitative measures of neuropsychological and neurobehavioral traits are studied instead of categorical diagnoses (which are either present or absent), and individuals are compared with their unaffected family members, the impact of genetic variants can be demonstrated more accurately.
According to Andres Moreno De Luca, M.D., research scientist at the Autism and Developmental Medicine Institute at Geisinger Health System and article co-author, “Recent genetic studies conducted in thousands of individuals have shown that identical genetic mutations are shared among neurodevelopmental disorders that are thought to be clinically distinct. What we have seen over the past few years is that genetic mutations that were initially found in individuals with one disorder, such as intellectual disability or autism, are then identified in people with an apparently different condition like schizophrenia, epilepsy, or bipolar disorder.”
“It turns out that the genes don’t respect our diagnostic classification boundaries, but that really isn’t surprising given the overlapping symptoms and frequent co-existence of neurodevelopmental disorders,” said Scott M. Myers, M.D., autism specialist at Geisinger Health System and article co-author.
“We believe this study supports use of the term ‘developmental brain dysfunction’ or DBD, which would encompass the broad spectrum of neurodevelopmental and neuropsychiatric disorders,” said David H. Ledbetter, Ph.D., executive vice president and chief scientific officer at Geisinger Health System, and article co-author. “Additionally, it is clear that diagnostic tools such as whole genome analysis for both children and their families are essential when diagnosing and treating these disorders in order to ensure the most personalized treatment.”
An example used in the study was analysis of intelligence quotient (IQ) scores. The average IQ score in the general population is 100. Historically, the medical community has defined intellectual disability as an IQ of less than 70 (with concurrent deficits in adaptive functioning). But according to Dr. Ledbetter, there is little difference in the function of a child with an IQ of 69 versus 71, yet one may be diagnosed with a disability and the other may not.
“We know a variety of factors contribute to IQ score, including genetics, as a child’s IQ is highly correlated with that of his or her parents and siblings. Therefore, an important factor to take into consideration when interpreting IQ is family background,” said Dr. Ledbetter. “Imagine if we have a child with a genetic abnormality, but the child’s IQ is 85. Technically, we would not diagnose this child with a disability. However, if the family of this child has IQs around 130, we could consider that this child’s genetic anomaly has ‘cost’ him or her 45 IQ points – a very substantial difference.”
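Dr. Ledbetter's arithmetic can be made concrete: a fixed categorical cutoff (IQ below 70) says nothing about a child's deficit relative to family background. A toy sketch using the article's numbers (the function name and return structure are illustrative, not from the paper):

```python
CUTOFF = 70  # historical categorical threshold for intellectual disability

def iq_assessment(child_iq: int, family_iq: int) -> dict:
    """Contrast a categorical diagnosis with a family-relative deficit."""
    return {
        "categorical_diagnosis": child_iq < CUTOFF,   # cutoff-based label
        "deficit_vs_family": family_iq - child_iq,    # "cost" in IQ points
    }

# The article's example: child IQ 85, family IQs around 130
result = iq_assessment(child_iq=85, family_iq=130)
print(result)  # no categorical diagnosis, yet a 45-point deficit
```

The child clears the categorical bar comfortably, yet the family-relative measure reveals the 45-point "cost" the article describes.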
According to Dr. Myers, “One implication of this concept is that studies designed to investigate the causes and mechanisms of developmental brain dysfunction should focus on measurement of quantifiable neuropsychological and neurobehavioral traits across groups of individuals with different clinical diagnoses. Another is that whenever possible, individuals with a particular genetic variant or other risk factor should be compared to their unaffected family members, not just to population norms.”

New Genetic Evidence Suggests a Continuum Among Neurodevelopmental and Psychiatric Disorders

A paper published this month in the prestigious medical journal The Lancet Neurology suggests that a broad spectrum of developmental and psychiatric disorders, ranging from autism and intellectual disability to schizophrenia, should be conceptualized as different manifestations of a common underlying denominator, “developmental brain dysfunction,” rather than completely independent conditions with distinct causes.

In “Developmental Brain Dysfunction: Revival and Expansion of Old Concepts Based on New Genetic Evidence,” the authors make two key points:

  • Developmental disorders (such as autism and intellectual disability) and psychiatric disorders (such as schizophrenia and bipolar disorder), while considered clinically distinct, actually share many of the same underlying genetic causes. This is an example of “variable expressivity”: the same genetic variant results in different clinical signs and symptoms in different individuals.
  • When quantitative measures of neuropsychological and neurobehavioral traits are studied instead of categorical diagnoses (which are either present or absent) and individuals are compared to their unaffected family members, it is possible to more accurately demonstrate the impact of genetic variants.

According to Andres Moreno De Luca, M.D., research scientist at the Autism and Developmental Medicine Institute at Geisinger Health System and article co-author, “Recent genetic studies conducted in thousands of individuals have shown that identical genetic mutations are shared among neurodevelopmental disorders that are thought to be clinically distinct. What we have seen over the past few years is that genetic mutations that were initially found in individuals with one disorder, such as intellectual disability or autism, are then identified in people with an apparently different condition like schizophrenia, epilepsy, or bipolar disorder.”

“It turns out that the genes don’t respect our diagnostic classification boundaries, but that really isn’t surprising given the overlapping symptoms and frequent co-existence of neurodevelopmental disorders,” said Scott M. Myers, M.D., autism specialist at Geisinger Health System and article co-author.

“We believe this study supports use of the term ‘developmental brain dysfunction’ or DBD, which would encompass the broad spectrum of neurodevelopmental and neuropsychiatric disorders,” said David H. Ledbetter, Ph.D., executive vice president and chief scientific officer at Geisinger Health System, and article co-author. “Additionally, it is clear that diagnostic tools such as whole genome analysis for both children and their families are essential when diagnosing and treating these disorders in order to ensure the most personalized treatment.”

An example used in the study was analysis of intelligence quotient (IQ) scores. The average IQ score in the general population is 100. Historically, the medical community has defined intellectual disability as an IQ of less than 70 (with concurrent deficits in adaptive functioning). But according to Dr. Ledbetter, there is little difference in the function of a child with an IQ of 69 versus 71, yet one may be diagnosed with a disability and the other may not.

“We know a variety of factors contribute to IQ score, including genetics, as a child’s IQ is highly correlated with that of his or her parents and siblings. Therefore, an important factor to take into consideration when interpreting IQ is family background,” said Dr. Ledbetter. “Imagine if we have a child with a genetic abnormality, but the child’s IQ is 85. Technically, we would not diagnose this child with a disability. However, if the family of this child has IQs around 130, we could consider that this child’s genetic anomaly has ‘cost’ him or her 45 IQ points – a very substantial difference.”
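Dr. Ledbetter’s arithmetic is easy to make concrete. A minimal sketch in Python (the function name and the simplification of using the family mean as the baseline are ours, illustrating the paper’s point rather than reproducing its method):

```python
def iq_cost(child_iq, baseline_iq):
    """IQ points 'lost' relative to a chosen baseline.

    Using the family mean as the baseline, instead of the population
    mean of 100, is the article's key point, simplified here.
    """
    return baseline_iq - child_iq

# The article's example: a child scoring 85 in a family averaging ~130.
print(iq_cost(85, 130))  # 45 points, even though 85 is well above the
                         # categorical disability cutoff of 70
print(iq_cost(85, 100))  # only 15 points against the population mean
```

The same score thus looks unremarkable against population norms but represents a substantial deficit against the family background.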

According to Dr. Myers, “One implication of this concept is that studies designed to investigate the causes and mechanisms of developmental brain dysfunction should focus on measurement of quantifiable neuropsychological and neurobehavioral traits across groups of individuals with different clinical diagnoses. Another is that whenever possible, individuals with a particular genetic variant or other risk factor should be compared to their unaffected family members, not just to population norms.”

Filed under neurodevelopmental disorder psychiatric disorders brain developmental brain dysfunction genes neuroscience science

69 notes

Epileptic Seizures Can Propagate Using Functional Brain Networks
The seizures that affect people with temporal-lobe epilepsy usually start in a region of the brain called the hippocampus. But they often come to involve areas outside the temporal lobe, propagating via anatomically and functionally connected networks in the brain. New research findings that link decreased gray-matter concentration to altered functional connectivity in temporal-lobe epilepsy are reported in an article in Brain Connectivity, a bimonthly peer-reviewed publication from Mary Ann Liebert, Inc., publishers. The article is available on the Brain Connectivity website.
Martha Holmes and colleagues from Vanderbilt University, Nashville, TN, identified regions in the brains of patients with temporal-lobe epilepsy that had reduced gray-matter concentrations. Greater reductions in gray-matter concentration correlated with either decreased or increased signaling and communication between brain regions connected through known functional networks.
The authors present their findings in the article “Functional Networks in Temporal-Lobe Epilepsy: A Voxel-Wise Study of Resting-State Functional Connectivity and Gray-Matter Concentration.”
“This is one of the first studies to actually correlate both functional and structural brain changes in epilepsy,” says Christopher Pawela, PhD, Co-Editor-in-Chief and Assistant Professor, Medical College of Wisconsin. “This is an exciting finding and may have impact in other brain disorders in which both the structure and function of the brain are involved.”

Filed under epilepsy epileptic seizures temporal lobe epilepsy hippocampus neuroscience science

149 notes

Feeling hungry may protect the brain against Alzheimer’s disease

The feeling of hunger itself may protect against Alzheimer’s disease, according to a study published today in the journal PLOS ONE. Interestingly, the results of this study in mice suggest that mild hunger pangs, and the related hormonal pathways, may be as important to the much-discussed value of “caloric restriction” as actually eating less.

Caloric restriction is a regimen where an individual consumes fewer calories than average, but not so few that they become malnourished. Studies in many species have suggested that it could protect against neurodegenerative disorders and extend lifespans, but the effect has not been confirmed in human randomized clinical trials.

Efforts to understand how cutting calories may protect the brain have grown increasingly important with news that American Alzheimer’s deaths are increasing, and because the best available treatments only delay onset in a subset of patients.

Study authors argue that hormonal signals are the middlemen between an empty gut and the perception of hunger in the brain, and that manipulating them may effectively counter age-related cognitive decline in the same way as caloric restriction.

“This is the first paper, as far as we are aware, to show that the sensation of hunger can reduce Alzheimer’s disease pathology in a mouse model of the disease,” said Inga Kadish, Ph.D., assistant professor in the Department of Cell, Developmental and Integrative Biology (CDIB) within the School of Medicine at the University of Alabama at Birmingham. “If the mechanisms are confirmed, hormonal hunger signaling may represent a new way to combat Alzheimer’s disease, either by itself or combined with caloric restriction.”

The team theorizes that feeling hungry creates mild stress, which in turn fires up metabolic signaling pathways that counter the plaque deposits known to destroy nerve cells in Alzheimer’s patients. The idea is an example of hormesis, in which a stressor that is damaging at high doses is thought to be beneficial at mild ones.

To study the sensation of hunger, the research team analyzed the effects of the hormone ghrelin, which is known to make us feel hungry. They administered a synthetic ghrelin agonist orally, which let them control the dosage so that treated mice felt steadily, mildly hungry.

If it could be developed, a treatment acting on the biochemical pathways downstream of hunger signals might help delay cognitive decline without consigning people to a life of feeling hungry. Straight caloric restriction would not be tolerable for many people over the long run, but manipulating post-hunger signaling might be.

This line of thinking becomes important because any protective benefit brought about by drugs or diets that mildly adjust post-hunger signals might be most useful if started in those at risk as early in life as possible. Attempts to treat the disease years later – when nerve networks are damaged enough for neurological symptoms to appear – may be too late. In the current study, it was long-term treatment with a ghrelin agonist that improved cognitive performance in mice tested when they had reached an advanced age.

Study details

The study looked at whether or not the feeling of hunger, in the absence of caloric restriction, could counter Alzheimer’s pathology in mice genetically engineered to have three genetic mutations known to cause the disease in humans.

Study mice were divided into three groups: one that received the ‘synthetic ghrelin’ (ghrelin agonist), a second that underwent caloric restriction (20 percent less food) and a third group that was fed normally. Study measures looked at each group’s ability to remember, their degree of Alzheimer’s pathology and their level of related, potentially harmful immune cell activation.

Results of such studies are most appropriately presented as general trends in the data, together with statistical assessments of how likely each result would be if only chance were at play, captured in its P value (the smaller, the better). The first formal result of the study is that, in mice with the human Alzheimer’s mutations, both the group treated with the ghrelin agonist LY444711 and the group that underwent caloric restriction performed significantly better in a water maze than mice fed normally (p=0.023).

The water maze is the standard test used to measure mouse memory. Researchers put mice in a pool with an invisible platform on which they could rest, and measured how quickly the mice found the platform in a series of tests. Mice with normal memory will remember where the platform is, and find it more quickly each time they are placed in the pool. Ghrelin agonist-treated mice found the hidden platform 26 percent faster than control mice, and calorically restricted mice did so 23 percent faster than control mice.

The second result was a measure of the buildup of a protein fragment called amyloid beta in the forebrain, an early step in the destruction of nerve cells that accompanies Alzheimer’s disease. The formal amyloid beta results show that mice either treated with the ghrelin agonist or calorically restricted had significantly less buildup of amyloid beta in the dentate gyrus, a part of the hippocampus central to memory function, than mice fed normally (control, 3.95±0.83%; LY, 2.05±0.26%; CR, 1.28±0.17%; Wilcoxon p=0.04).

The above results translate roughly into a 67 percent reduction of this pathology in caloric-restricted mice as compared to control mice, and a 48 percent reduction of amyloid beta deposits when comparing the ghrelin-treated mice with the control group. These percentages are neither final nor translatable to humans, but are simply meant to convey the idea of “better.”
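The percentages follow directly from the reported group means (control 3.95, ghrelin agonist 2.05, caloric restriction 1.28). A quick check of that arithmetic:

```python
def percent_reduction(control_mean, treated_mean):
    """Relative reduction in amyloid beta burden versus control, in percent."""
    return 100 * (control_mean - treated_mean) / control_mean

control, ly, cr = 3.95, 2.05, 1.28  # % area in the dentate gyrus, from the study
print(f"{percent_reduction(control, cr):.1f}")  # 67.6 -> the ~67 percent figure
print(f"{percent_reduction(control, ly):.1f}")  # 48.1 -> the ~48 percent figure
```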

Finally, the team examined the difference in immune responses related to Alzheimer’s pathology in each of the three groups. Microglia are the immune cells of the brain, engulfing and removing invading pathogens and dead tissue. They have also been implicated in several diseases when their misplaced activation damages tissues. The team found that mice receiving the ghrelin agonist had reduced levels of microglial activation compared with the control group, similar to the effect of caloric restriction.

The ghrelin agonist used in the study does not lend itself to clinical use and will not play a role in the future prevention of Alzheimer’s disease, said Kadish. It was meant instead to prove a principle that hormonal hunger signaling itself can counter Alzheimer’s pathology in a mammal. The next step is to understand exactly how it achieved this as a prerequisite to future treatment design.

Ghrelin is known to create hunger signals by interacting with the arcuate nucleus in the part of the brain called the hypothalamus, which then sends out signaling neuropeptides that help the body sense and respond to energy needs. Studies already underway in Kadish’s lab seek to determine the potential role of these pathways and related genes in countering disease.

“Our group in the School of Public Health was studying whether or not a ghrelin agonist could make mice hungry as we sought to unravel mechanisms contributing to the life-prolonging effects of caloric restriction,” said David Allison, Ph.D., associate dean for Science in the UAB School of Public Health and the project’s initiator.

“Because of the interdisciplinary nature of UAB, our work with Dr. Allison led to an amazing conversation with Dr. Kadish about how we might combine our research with her longtime expertise in neurology because caloric restriction had been shown in early studies to counter Alzheimer’s disease,” said Emily Dhurandhar, Ph.D., a trainee in the UAB Nutrition Obesity Research Center and first study author. “The current study is the result.”

(Source: uab.edu)

Filed under alzheimer's disease brain caloric restriction hunger hormone metabolism neuroscience science

225 notes

Mental illness associated with heavy cannabis use 
People with mental illnesses are more than seven times more likely to use cannabis weekly compared to people without a mental illness, according to researchers from the Centre for Addiction and Mental Health (CAMH) who studied U.S. data.
Cannabis is the most widely used illicit substance globally, with an estimated 203 million people reporting use. Although research has found links between cannabis use and mental illness, exact numbers and prevalence of problem cannabis use had not been investigated.
“We know that people with mental illness consume more cannabis, perhaps partially as a way to self-medicate psychiatric symptoms, but this data showed us the degree of the correlation between cannabis use, misuse, and mental illness,” said Dr. Shaul Lev-ran, Adjunct Scientist at CAMH and Head of Addiction Medicine at the Sheba Medical Center, Israel.
“Based on the number of individuals reporting weekly use, we see that people with mental illness use cannabis at high rates. This can be of concern because it could worsen the symptoms of their mental illness,” said Lev-ran, who conducted the research as a post-doctoral fellow with the Social Aetiology of Mental Illness (SAMI) Training Program at CAMH.
Researchers also found that individuals with mental illness were 10 times more likely to have a cannabis use disorder.
In this new study, published in the journal Comprehensive Psychiatry, CAMH researchers analyzed data from face-to-face interviews with over 43,000 respondents over the age of 18 from the National Epidemiologic Survey on Alcohol and Related Conditions. Using structured questionnaires, the researchers assessed cannabis use as well as various mental illnesses including depression, anxiety, drug and alcohol use disorders and personality disorders, based on criteria from the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV).
Among those with mental illness reporting at least weekly cannabis use, rates of use were particularly elevated for those with bipolar disorder, personality disorders and other substance use disorders.
In total, 4.4 per cent of individuals with a mental illness in the past 12 months reported using cannabis weekly, compared to 0.6 per cent among individuals without any mental illness. Cannabis use disorders occurred among 4 per cent of those with mental illness versus 0.4 per cent among those without.
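Those rates are where the headline multipliers come from. A small sketch of the arithmetic (the function name is ours, not the study’s):

```python
def prevalence_ratio(pct_with_condition, pct_without):
    """How many times more common a behaviour is in one group than another."""
    return pct_with_condition / pct_without

# Weekly cannabis use: 4.4% with past-12-month mental illness vs 0.6% without.
print(round(prevalence_ratio(4.4, 0.6), 1))  # 7.3 -> "more than seven times"
# Cannabis use disorder: 4% vs 0.4%.
print(round(prevalence_ratio(4.0, 0.4), 1))  # 10.0 -> "10 times more likely"
```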
Researchers also noted that, although cannabis use is generally higher among younger people, the association between mental illness and cannabis use was pervasive across most age groups.
They emphasize the importance of screening for frequent and problem cannabis use among those with mental illness, so that targeted prevention and intervention may be employed.

Filed under cannabis mental illness psychiatric disorders cannabis misuse health psychology neuroscience science

78 notes

Vitamin P as a potential approach for the treatment of damaged motor neurons
Biologists from the Ruhr-Universität Bochum have explored how to protect the neurons that control movement from dying off. In the journal “Molecular and Cellular Neuroscience” they report that the molecule 7,8-Dihydroxyflavone, also known as vitamin P, ensures the survival of motor neurons in culture. It sends the survival signal along a different path than Brain-Derived Neurotrophic Factor (BDNF), which was previously considered a candidate for treating motor neuron diseases or spinal cord damage. “The Brain Derived Neurotrophic Factor only had a limited effect when tested on humans, and even had partially negative consequences”, says Prof. Dr. Stefan Wiese from the RUB Work Group for Molecular Cell Biology. “Therefore we are looking for alternative ways to find new approaches for the treatment of neurodegenerative diseases such as Amyotrophic Lateral Sclerosis.”
Same effect, different mode of action
In previous studies, researchers hypothesised that vitamin P is an analogue of BDNF and thus works in the same way. This theory has now been disproved by the team led by Dr. Teresa Tsai and Prof. Stefan Wiese from the Group for Molecular Cell Biology and the Department of Cell Morphology and Molecular Neurobiology headed by Prof. Andreas Faissner. Both substances ensure that isolated mouse motor neurons survive in cell culture and grow new processes, but what exactly each molecule triggers at the protein level differs. BDNF activates two signalling pathways, the MAP kinase and PI3K/AKT pathways; vitamin P makes use only of the latter.
The dose is crucial
However, vitamin P only unfolded its positive effects on the motor neurons within a very narrow concentration range. “These results show how important an accurate determination of dose and effect is”, says Prof. Wiese. An overdose of vitamin P reduced the survival effect, and above a certain amount no positive effect occurred at all. The researchers hope that vitamin P could have fewer negative side effects than BDNF. “It is easier to use, because vitamin P, in contrast to BDNF, can pass the blood-brain barrier and therefore does not have to be introduced into the cerebrospinal fluid using pumps like BDNF,” says Wiese.
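A narrow effective window like this describes an inverted-U dose-response curve. A sketch with entirely hypothetical numbers (the Gaussian shape, the doses and the optimum are illustrative, not data from the Bochum group):

```python
import math

def survival_effect(dose, optimum=1.0, width=0.5):
    """Hypothetical inverted-U response: the benefit peaks at a mid-range
    dose and falls away on both sides, as with vitamin P in culture."""
    return math.exp(-((dose - optimum) / width) ** 2)

doses = [0.25, 0.5, 1.0, 2.0, 4.0]
best = max(doses, key=survival_effect)
print(best)  # the mid-range dose wins; overdosing erases the benefit
```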

Filed under brain vitamin p motor neurons BDNF blood-brain barrier neuroscience science

226 notes

Speaking a tonal language (such as Cantonese) primes the brain for musical training
Non-musicians who speak tonal languages may have a better ear for learning musical notes, according to Canadian researchers.
Tonal languages, found mainly in Asia, Africa and South America, have an abundance of high and low pitch patterns as part of speech. In these languages, differences in pitch can alter the meaning of a word. Vietnamese, for example, has eleven different vowel sounds and six different tones. Cantonese also has an intricate six-tone system, while English has no tones.
Researchers at Baycrest Health Sciences’ Rotman Research Institute (RRI) in Toronto have found the strongest evidence yet that speaking a tonal language may improve how the brain hears music. While the findings may boost the egos of tonal language speakers who excel in musicianship, they are exciting neuroscientists for another reason: they represent the first strong evidence that music and language – which share overlapping brain structures – have bi-directional benefits!
The findings are published today in PLOS ONE, an international, peer-reviewed open-access science journal.
The benefits of music training for speech and language are already well documented (showing positive influences on speech perception and recognition, auditory working memory, aspects of verbal intelligence, and awareness of the sound structure of spoken words). The reverse – the benefits of language experience for learning music – has largely been unexplored until now.
"For those who speak tonal languages, we believe their brain’s auditory system is already enhanced to allow them to hear musical notes better and detect minute changes in pitch," said lead investigator Gavin Bidelman, who conducted the research as a post-doctoral fellow at Baycrest’s RRI, supported by a GRAMMY Foundation® grant.
"If you pick up an instrument, you may be able to acquire the skills faster to play that instrument because your brain has already built up these auditory perceptual advantages through speaking your native tonal language."
But Bidelman, now assistant professor with the Institute for Intelligent Systems and School of Communication Science & Disorders at the University of Memphis, was quick to dispel the notion that people who speak tonal languages make better musicians. Musicianship requires much more than the sense of hearing and plenty of English-speaking musical icons will put that quick assumption to rest.
That music and language – two key domains of human cognition – can influence each other offers exciting possibilities for devising new approaches to rehabilitation for people with speech and language deficits, said Bidelman.
"If music and language are so intimately coupled, we may be able to design rehabilitation treatments that use musical training to help individuals improve speech-related functions that have been impaired due to age, aphasia or stroke," he suggested. Bidelman added that similar benefits might also work in the opposite direction. Musical listening skills could be improved by designing well-crafted speech and language training programs.
The study
Fifty-four healthy adults in their mid-20s were recruited for the study from the University of Toronto and Greater Toronto Area. They were divided into three groups: English-speaking trained musicians (instrumentalists) and Cantonese-speaking and English-speaking non-musicians. Wearing headphones in a sound-proof lab, participants were tested on their ability to discriminate complex musical notes. They were assessed on measures of auditory pitch acuity and music perception as well as general cognitive ability such as working memory and fluid intelligence (abstract reasoning, thinking quickly).
While the musicians demonstrated superior performance on all auditory measures, the Cantonese-speaking non-musicians performed comparably to musicians on the music and cognitive behavioural tasks, scoring 15 to 20 percent higher than the English-speaking non-musicians.
Bidelman added that not all tonal languages may offer the music listening benefits seen with the Cantonese speakers in his study. Mandarin, for example, has more “curved” tones and the pitch patterns vary with time – which is different from how pitch occurs in music. Musical pitch resembles “stair step, level pitch patterns” which happen to share similarities with the Cantonese language, he explained.

Speaking a tonal language (such as Cantonese) primes the brain for musical training

Non-musicians who speak tonal languages may have a better ear for learning musical notes, according to Canadian researchers.

Tonal languages, found mainly in Asia, Africa and South America, have an abundance of high and low pitch patterns as part of speech. In these languages, differences in pitch can alter the meaning of a word. Vietnamese, for example, has eleven different vowel sounds and six different tones. Cantonese also has an intricate six-tone system, while English has no tones.

Researchers at Baycrest Health Sciences’ Rotman Research Institute (RRI) in Toronto have found the strongest evidence yet that speaking a tonal language may improve how the brain hears music. While the findings may boost the egos of tonal language speakers who excel in musicianship, they are exciting neuroscientists for another reason: they represent the first strong evidence that music and language – which share overlapping brain structures – have bi-directional benefits!

The findings are published today in PLOS ONE, an international, peer-reviewed open-access science journal.

The benefits of music training for speech and language are already well documented (showing positive influences on speech perception and recognition, auditory working memory, aspects of verbal intelligence, and awareness of the sound structure of spoken words). The reverse – the benefits of language experience for learning music – has largely been unexplored until now.

"For those who speak tonal languages, we believe their brain’s auditory system is already enhanced to allow them to hear musical notes better and detect minute changes in pitch," said lead investigator Gavin Bidelman, who conducted the research as a post-doctoral fellow at Baycrest’s RRI, supported by a GRAMMY Foundation® grant.

"If you pick up an instrument, you may be able to acquire the skills faster to play that instrument because your brain has already built up these auditory perceptual advantages through speaking your native tonal language."

But Bidelman, now an assistant professor with the Institute for Intelligent Systems and School of Communication Science & Disorders at the University of Memphis, was quick to dispel the notion that people who speak tonal languages make better musicians. Musicianship requires much more than a keen sense of hearing, and plenty of English-speaking musical icons put that assumption to rest.

That music and language – two key domains of human cognition – can influence each other offers exciting possibilities for devising new approaches to rehabilitation for people with speech and language deficits, said Bidelman.

"If music and language are so intimately coupled, we may be able to design rehabilitation treatments that use musical training to help individuals improve speech-related functions that have been impaired due to age, aphasia or stroke," he suggested. Bidelman added that similar benefits might also flow in the opposite direction: musical listening skills could be improved through well-crafted speech and language training programs.

The study

Fifty-four healthy adults in their mid-20s were recruited for the study from the University of Toronto and Greater Toronto Area. They were divided into three groups: English-speaking trained musicians (instrumentalists) and Cantonese-speaking and English-speaking non-musicians. Wearing headphones in a sound-proof lab, participants were tested on their ability to discriminate complex musical notes. They were assessed on measures of auditory pitch acuity and music perception as well as general cognitive ability such as working memory and fluid intelligence (abstract reasoning, thinking quickly).

While the musicians demonstrated superior performance on all auditory measures, the Cantonese-speaking non-musicians performed comparably to the musicians on the music and cognitive behavioural tasks, scoring 15 to 20 percent higher than the English-speaking non-musicians.

Bidelman added that not all tonal languages may offer the music-listening benefits seen with the Cantonese speakers in his study. Mandarin, for example, has more “curved” tones whose pitch patterns vary over time, unlike how pitch occurs in music. Musical pitch resembles “stair step, level pitch patterns,” which happen to share similarities with Cantonese tones, he explained.
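The “stair step” contrast can be made concrete with a toy example. The pitch tracks below are invented for illustration (not measured data); they simply show why a level tone behaves like a sustained musical note while a gliding contour tone does not:

```python
# Hypothetical pitch tracks in Hz, sampled across one syllable.
# A level tone holds its pitch like a sustained musical note;
# a contour tone glides continuously, unlike discrete note steps.
level_tone = [220, 221, 220, 219, 220, 220]    # Cantonese-like level tone
rising_tone = [180, 200, 230, 260, 290, 310]   # Mandarin-like rising tone

def pitch_range(track):
    """Pitch spread within the syllable, in Hz."""
    return max(track) - min(track)

print(pitch_range(level_tone))   # near zero: steady, note-like
print(pitch_range(rising_tone))  # large: gliding pitch
```

A level tone's pitch barely moves within the syllable, so hearing it is close to identifying a musical note; a contour tone's pitch sweeps across a wide range, a pattern with no direct analogue in discrete musical pitch.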

Filed under tonal languages pitch patterns music music training brain cognition neuroscience science

134 notes

Study shows humans and apes learn language differently

How do children learn language? Many linguists believe that the stages a child goes through when learning language mirror the stages of language development in primate evolution. In a paper published in the Proceedings of the National Academy of Sciences, Charles Yang of the University of Pennsylvania observes that if this were true, small children and non-human primates would use language the same way. He then uses statistical analysis to show that this is not the case: the language of small children uses grammar, while language in non-human primates relies on imitation.

Yang examines two hypotheses about language development in children. One of these says that children learn how to put words together by imitating the word combinations of adults. The other states that children learn to combine words by following grammatical rules.

Linguists who support the idea that children are parroting refer to the fact that children appear to combine the same words in the same ways. For example, an English speaker can put either the determiner “a” or the determiner “the” in front of a singular noun. “A door” and “the door” are both grammatically correct, as are “a cat” and “the cat.” However, with most singular nouns, children tend to use either “a” or “the” but not both. This suggests that children are mimicking strings of words without understanding grammatical rules about how to combine the words.

Yang, however, points out that the lack of diversity in children’s word combinations could reflect the way that adults use language. Adults are more likely to use “a” with some words and “the” with others. “The bathroom” is more common than “a bathroom.” “A bath” is more common than “the bath.”

To test this conjecture, Yang analyzed language samples of young children who had just begun making two-word combinations. He calculated the number of different noun-determiner combinations someone would make if they were combining nouns and determiners independently, and found that the diversity of the children’s language matched this profile. He also found that the children’s word combinations were much more diverse than they would be if they were simply imitating word strings.

Yang also studied language diversity in Nim Chimpsky, a chimpanzee who was taught American Sign Language. Nim’s word combinations are much less diverse than would be expected if he were combining words independently, which indicates that he was probably mimicking rather than using grammar.

This difference in language use indicates that human children do not acquire language in the same way that non-human primates do. Young children learn rules of grammar very quickly, while a chimpanzee who has spent many years learning language continues to imitate rather than combine words based on grammatical rules.

Filed under primates language language development grammatical rules linguistics psychology neuroscience science
