Neuroscience

Articles and news from the latest research reports.

Older People with Faster Decline In Memory and Thinking Skills May Have Lower Risk of Cancer Death
Older people who are starting to have memory and thinking problems, but who do not yet have dementia, may have a lower risk of dying from cancer than people who have no memory and thinking problems, according to a study published in the April 9, 2014, online issue of Neurology®, the medical journal of the American Academy of Neurology.
“Studies have shown that people with Alzheimer’s disease are less likely to develop cancer, but we don’t know the reason for that link,” said study author Julián Benito-León, MD, PhD, of University Hospital 12 of October in Madrid, Spain. “One possibility is that cancer is underdiagnosed in people with dementia, possibly because they are less likely to mention their symptoms, or because caregivers and doctors are focused on the problems caused by dementia. The current study helps us discount that theory.”
The study involved 2,627 people age 65 and older in Spain who did not have dementia at the start of the study. They took tests of memory and thinking skills at the start of the study and again three years later, and were followed for an average of almost 13 years. The participants were divided into three groups: those whose scores on the thinking tests were declining the fastest, those whose scores improved on the tests, and those in the middle.
During the study, 1,003 of the participants died, including 339 deaths, or 34 percent, among those with the fastest decline in thinking skills and 664 deaths, or 66 percent, among those in the other two groups. A total of 21 percent of those in the group with the fastest decline died of cancer, according to their death certificates, compared to 29 percent of those in the other two groups.
People in the fastest declining group were still 30 percent less likely to die of cancer when the results were adjusted to control for factors such as smoking, diabetes and heart disease, among others.
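The arithmetic behind these figures can be checked in a few lines. The counts come from the article; treating the reported cancer percentages as proportions of deaths within each group is an assumption, since the article does not give per-group denominators:

```python
# Counts taken from the article; the per-group interpretation of the
# cancer percentages (as shares of deaths) is an assumption.
deaths_fast_decline = 339      # deaths in the fastest-declining group
deaths_other = 664             # deaths in the other two groups combined

total_deaths = deaths_fast_decline + deaths_other
assert total_deaths == 1003

# Share of all deaths contributed by each group
share_fast = deaths_fast_decline / total_deaths
share_other = deaths_other / total_deaths
print(f"share of deaths, fastest-declining group: {share_fast:.0%}")  # 34%
print(f"share of deaths, other two groups: {share_other:.0%}")        # 66%

# Cancer-death proportions reported for each group
cancer_frac_fast = 0.21
cancer_frac_other = 0.29

# Unadjusted relative risk of cancer death, fastest-declining vs. others
relative_risk = cancer_frac_fast / cancer_frac_other
print(f"unadjusted relative risk: {relative_risk:.2f}")
```

The unadjusted ratio of 21 percent to 29 percent works out to roughly a 28 percent lower crude risk, consistent with the 30 percent figure the authors report after adjustment.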
“We need to understand better the relationship between a disease that causes abnormal cell death and one that causes abnormal cell growth,” Benito-León said. “With the increasing number of people with both dementia and cancer, understanding this association could help us better understand and treat both diseases.”

Filed under memory dementia cancer cognitive decline aging neurology neuroscience science

Research shows that a human protein may trigger Parkinson’s disease

Research led by the Vall d’Hebron Research Institute (VHIR), with the participation of the University of Valencia, has shown that pathological forms of the α-synuclein protein taken from deceased Parkinson’s disease patients can initiate and spread, in mice and primates, the neurodegenerative process that typifies this disease. The discovery, featured on the March cover of Annals of Neurology, opens the door to the development of new treatments that could halt the progression of Parkinson’s disease by blocking the expression, pathological conversion and transmission of this protein.

Recent studies have shown that synthetic forms of α-synuclein are toxic to neurons, both in vitro (in cell culture) and in vivo (in mice), and can spread from one cell to another. Until now, however, it was not known whether this pathogenic capacity of the synthetic protein also applied to the pathological human protein found in Parkinson’s patients and, therefore, whether it was relevant to the disease in humans.

In the present study, led by Dr. Miquel Vila of the VHIR Neurodegenerative Diseases group, a CIBERNED member, with the participation of two other CIBERNED groups (one led by Dr. Isabel Fariñas of the University of Valencia, and another led by Dr. José Obeso of CIMA-University of Navarra) as well as a group from the University of Bordeaux in France (Dr. Erwan Bezard), the researchers extracted α-synuclein aggregates from the brains of patients who had died of Parkinson’s disease and injected them into the brains of rodents and primates.

Four months after the injection in mice, and nine months after the injection in monkeys, the animals began to show degeneration of dopaminergic neurons and intracellular accumulations of pathological α-synuclein in these cells, as occurs in Parkinson’s disease. Months later, the animals also showed accumulations of the protein in other, more remote brain areas, following a spreading pattern similar to that observed in patients’ brains after years of disease progression.

According to Dr. Vila, these results indicate that “the pathological aggregates of this protein obtained from patients with Parkinson’s disease have the ability to initiate and spread, in mice and primates, the neurodegenerative process that typifies Parkinson’s disease”. A discovery that, he adds, “provides new insights into the possible mechanisms of initiation and progression of the disease and opens the door to new therapeutic opportunities”. The next step, therefore, is to find out how to stop the progression and spread of the disease by blocking cell-to-cell transmission of α-synuclein, as well as by regulating its expression levels and preventing its pathological conversion.

Parkinson’s disease

Parkinson’s disease is the second most common neurodegenerative disease after Alzheimer’s disease. It is characterized by the progressive loss of dopamine-producing neurons in a brain region called the substantia nigra of the ventral midbrain, and by the presence in these cells of pathological intracellular aggregates of the α-synuclein protein, called Lewy bodies. The loss of brain dopamine that results from this neuronal death produces the typical motor manifestations of the disease, such as muscle stiffness, tremor and slowness of movement.

The most effective treatment for the disease is levodopa, a palliative drug that restores the missing dopamine. However, as the disease progresses, the pathological process of neurodegeneration and α-synuclein accumulation extends beyond the ventral midbrain to other brain areas. As a result, patients progressively worsen and develop non-motor clinical manifestations that do not respond to dopaminergic drugs. There is currently no treatment that prevents, delays or halts the progression of the neurodegenerative process.

(Source: uv.es)

Filed under parkinson's disease neurodegeneration alpha synuclein lewy bodies neuroscience science

Blocking DNA repair mechanisms could improve radiation therapy for brain cancer
UT Southwestern Medical Center researchers have demonstrated in both cancer cell lines and in mice that blocking critical DNA repair mechanisms could improve the effectiveness of radiation therapy for highly fatal brain tumors called glioblastomas.
Radiation therapy causes double-strand breaks in DNA that must be repaired for tumors to keep growing. Scientists have long theorized that if they could find a way to block repairs from being made, they could prevent tumors from growing or at least slow down the growth, thereby extending patients’ survival. Blocking DNA repair is a particularly attractive strategy for treating glioblastomas, as these tumors are highly resistant to radiation therapy. In a study, UT Southwestern researchers demonstrated that the theory actually works in the context of glioblastomas.
“This work is informative because the findings show that blocking the repair of DNA double-strand breaks could be a viable option for improving radiation therapy of glioblastomas,” said Dr. Sandeep Burma, Associate Professor of Radiation Oncology in the division of Molecular Radiation Biology at UT Southwestern.
His lab works on understanding basic mechanisms by which DNA breaks are repaired, with the translational objective of improving cancer therapy with DNA damaging agents. Recent research from his lab has demonstrated how a cell makes the choice between two major pathways that are used to repair DNA breaks – non-homologous end joining (NHEJ) and homologous recombination (HR). His lab found that enzymes involved in cell division called cyclin-dependent kinases (CDKs) activate HR by phosphorylating a key protein, EXO1. In this manner, the use of HR is coupled to the cell division cycle, and this has important implications for cancer therapeutics. These findings were published April 7 in Nature Communications.
While the above basic study describes how the cell chooses between NHEJ and HR, a translational study from the Burma lab demonstrates how blocking both repair pathways can improve radiotherapy of glioblastomas. Researchers in the lab first were able to show in glioblastoma cell lines that a drug called NVP-BEZ235, which is in clinical trials for other solid tumors, can also inhibit two key DNA repair enzymes, DNA-PKcs and ATM, which are crucial for NHEJ and HR, respectively. While the drug alone had limited effect, when combined with radiation therapy, the tumor cells could not quickly repair their DNA, stalling their growth.
While excited by the initial findings in cell lines, researchers remained cautious because previous efforts to identify DNA repair inhibitors had not succeeded when used in living models – mice with glioblastomas. Drugs developed to treat brain tumors also must cross what’s known as the blood-brain barrier in living models.
But the NVP-BEZ235 drug could successfully cross the blood-brain barrier, and when administered to mice with glioblastomas in combination with radiation, it slowed tumor growth and the mice survived far longer – up to 60 days compared to approximately 10 days with the drug or radiation therapy alone. These findings were published in the March 1 issue of Clinical Cancer Research.
“The consequence is striking,” said Dr. Burma. “If you irradiate the tumors, nothing much happens because they grow right through radiation. Give the drug alone, and again, nothing much happens. But when you give the two together, tumor growth is delayed significantly. The drug has a very striking synergistic effect when given with radiation.”
The combination effect is important because the standard therapy for glioblastomas in humans is radiation therapy, so finding a drug that improves the effectiveness of radiation therapy could have profound clinical importance eventually. For example, such drugs may permit lower doses of X-rays and gamma rays to be used for traditional therapies, thereby causing fewer side effects.
“Radiation is still the mainstay of therapy, so we have to have something that will work with the mainstay of therapy,” Dr. Burma said.
While the findings provide proof that the concept of “radiosensitizing” glioblastomas works in mouse models, additional research and clinical trials will be needed to demonstrate whether the combination of radiation with DNA repair inhibitors would be effective in humans, Dr. Burma cautioned.
“Double-strand DNA breaks are a double-edged sword,” he said. “On one hand, they cause cancer. On the other, we use ionizing radiation and chemotherapy to cause double-strand breaks to treat the disease.”
Another recent publication from his lab highlights this apparent paradox by demonstrating how radiation can actually trigger glioblastomas in mouse models. This research, supported by NASA, is focused on understanding cancer risks from particle radiation, the type faced by astronauts on deep-space missions and now being used in cutting-edge cancer therapies such as proton and carbon ion therapy.
Dr. Burma’s lab uses the high-tech facilities and large particle accelerator of the NASA Space Radiation Laboratory at the Brookhaven National Laboratory in New York to generate heavy ions, which can be used to irradiate glioblastoma-prone mice to test both the cancer-inducing potential of particle radiation as well as its potential therapeutic use.
“Heavy particles cause dense tracks of damage, which are very hard to repair,” Dr. Burma noted. “With gamma or X-rays, which are used in medical therapy, the damage is diffuse and is repaired within a day. If you examine a mouse brain irradiated with heavy particles, the damage is repaired slowly and can last for months.”
These findings, published March 17 in Oncogene, suggest that glioblastoma risk from heavier particles is much higher than that from gamma or X-rays. This study is relevant to the medical field, since ionizing radiation, even at the low doses from CT scans, has been reported to increase the risk of brain tumors, Dr. Burma said.

Filed under brain tumors glioblastoma radiotherapy DNA damage brain cancer neuroscience science

DNA Modifications Measured in Blood Signal Related Changes in the Brain

Research linked to stress in mice confirms blood-brain comparison is valid

Johns Hopkins researchers say they have confirmed suspicions that DNA modifications found in the blood of mice exposed to high levels of stress hormone — and showing signs of anxiety — are directly related to changes found in their brain tissues.

The proof-of-concept study, reported online ahead of print in the June issue of Psychoneuroendocrinology, offers what the research team calls the first evidence that epigenetic changes that alter the way genes function without changing their underlying DNA sequence — and are detectable in blood — mirror alterations in brain tissue linked to underlying psychiatric diseases.

The new study reports only on so-called epigenetic changes to a single stress response gene called FKBP5, which has been implicated in depression, bipolar disorder and post-traumatic stress disorder. But the researchers say they have discovered the same blood and brain matches in dozens more genes, which regulate many important processes in the brain.

“Many human studies rely on the assumption that disease-relevant epigenetic changes that occur in the brain — which is largely inaccessible and difficult to test — also occur in the blood, which is easily accessible,” says study leader Richard S. Lee, Ph.D., an instructor in the Department of Psychiatry and Behavioral Sciences at the Johns Hopkins University School of Medicine. “This research on mice suggests that the blood can legitimately tell us what is going on in the brain, which is something we were just assuming before, and could lead us to better detection and treatment of mental disorders and for a more empirical way to test whether medications are working.”

For the study, the Johns Hopkins team worked with mice with a rodent version of Cushing’s disease, which is marked by the overproduction and release of cortisol, the primary stress hormone, a glucocorticoid. For four weeks, the mice were given different doses of stress hormones in their drinking water to assess epigenetic changes to FKBP5. The researchers took blood samples weekly to measure the changes and then dissected the brains at the end of the month to study what changes were occurring in the hippocampus as a result of glucocorticoid exposure. The hippocampus, in both mice and humans, is vital to memory formation, information storage and organizational abilities.

The measurements showed that the more stress hormones the mice got, the greater the epigenetic changes in the blood and brain tissue, although the scientists say the brain changes occurred in a different part of the gene than expected. This was what made finding the blood-brain connection very challenging, Lee says.

Also, the more stress hormone, the more RNA from the FKBP5 gene was expressed in the blood and brain, and the greater the association with depression. However, it was the underlying epigenetic changes that proved to be more robust. This is important, because while RNA levels may return to normal after stress hormone levels decrease or change due to small fluctuations in hormone levels, epigenetic changes persist, reflect overall stress hormone exposure and predict how much RNA will be made when stress hormone levels increase.

The team of researchers used an epigenetic assay previously developed in their laboratory that requires just one drop of blood to accurately assess overall exposure to stress hormone over 30 days. Elevated levels of stress hormone exposure are considered a risk factor for mental illness in humans and other mammals.

(Source: hopkinsmedicine.org)

Filed under stress DNA methylation psychiatric disorders epigenetics glucocorticoid tissue neuroscience science

Google Glass puts the focus on Parkinson’s
The next generation of wearable computing is being trialled for the first time to evaluate its potential to support people with Parkinson’s.
Experts at Newcastle University are investigating Google Glass as an assistive aid to help people with Parkinson’s retain their independence for longer.
Glass is a wearable computer being developed by Google. Likened to the kind of technology fictionalised in the Hollywood blockbuster Minority Report, at first glance Glass appears to be no more than a pair of designer glasses. But the system works like a hands-free smartphone, displaying information on the lens. The technology is voice-operated and linked to the internet.
Glass is not currently available outside the US; the five pairs at Newcastle University were donated by Google to allow researchers to test how they could be used to support people with long-term conditions.
Initial studies by the team - who are based in the University’s Digital Interaction Group in Culture Lab, part of the School of Computing Science - have focussed on the acceptability of Glass. They have been working with a group of volunteers with Parkinson’s aged between 46 and 70 years.
Now they are working on the next stage of the project, using the technology to provide discreet prompts linked to key behaviours typical of Parkinson’s, such as reminding the individual to speak up or to swallow to prevent drooling. Glass can also be used as a personal reminder for things such as medication and appointments.
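The reminder logic described above can be illustrated with a generic sketch. This does not use Google Glass’s actual API (which the article does not detail); the reminder times, messages and the five-minute prompt window are invented for illustration:

```python
import datetime

# Generic sketch of timed medication/appointment prompts like those described
# above.  Illustrative only -- not Google Glass's actual API.
reminders = [
    (datetime.time(8, 0), "Take morning medication"),
    (datetime.time(14, 30), "Physiotherapy appointment"),
    (datetime.time(20, 0), "Take evening medication"),
]

def due_prompts(now, window_minutes=5):
    """Return reminder messages scheduled within `window_minutes` of `now`."""
    due = []
    for t, message in reminders:
        scheduled = datetime.datetime.combine(now.date(), t)
        delta_min = abs((now - scheduled).total_seconds()) / 60.0
        if delta_min <= window_minutes:
            due.append(message)
    return due

now = datetime.datetime(2014, 4, 10, 8, 2)
print(due_prompts(now))  # ['Take morning medication']
```

A wearable would run a check like this on a timer and surface each due message as a discreet on-lens prompt rather than a console print.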
The team will also be exploring how the motion sensors in Glass can be used to support people with ‘freezing’, a common symptom of Parkinson’s caused by motor blocking.
Led by Dr John Vines, PhD student Roisin McNaney and Dr Ivan Poliakov, this is the first UK trial of Glass. Presenting their initial findings later this month at the ACM Human Factors in Computing Systems (CHI) 2014 conference in Toronto, Canada, the team will show how emerging technologies can potentially be used to support people with progressive diseases such as Parkinson’s and dementia.

Filed under google glass parkinson's disease psychology technology neuroscience science

Researchers uncover why there is a mapping between pitch and elevation
Have you ever wondered why most natural languages invariably use the same spatial attributes – high versus low – to describe auditory pitch? Or why, throughout the history of musical notation, high notes have been represented high on the staff? According to a team of neuroscientists from Bielefeld University, the Max Planck Institute for Biological Cybernetics in Tübingen and the Bernstein Center Tübingen, high pitched sounds feel ‘high’ because, in our daily lives, sounds coming from high elevations are indeed more likely to be higher in pitch. This study has just appeared in the science journal PNAS.
Dr. Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First of all, they recorded and analyzed a large sample of sounds from the natural environment and found that high frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear and found that, due to the convoluted shape of the outer ear – the pinna – sounds coming from high positions in space are filtered in such a way that more energy remains for higher pitched sounds. Finally, they asked humans in a behavioural experiment to localize sounds with different frequency and found that high frequency sounds were systematically perceived as coming from higher positions in space.
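The first line of evidence amounts to measuring a correlation between source elevation and frequency across many recordings. A minimal sketch of that kind of analysis, using synthetic data as a stand-in for the authors’ environmental recordings (the positive trend here is built in by construction, whereas the study measured it in real sound samples):

```python
import math
import random

random.seed(0)

# Synthetic stand-in for a corpus of environmental recordings: each sample is
# (source elevation in degrees, dominant frequency in Hz).  The trend is
# injected here for illustration; the study measured it in real recordings.
samples = []
for _ in range(500):
    elevation = random.uniform(-45.0, 45.0)      # below vs. above ear level
    base_freq = 800.0 + 8.0 * elevation          # higher sources -> higher pitch
    freq = base_freq + random.gauss(0.0, 150.0)  # natural variability
    samples.append((elevation, freq))

def pearson(pairs):
    """Pearson correlation coefficient of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

r = pearson(samples)
print(f"elevation-frequency correlation: r = {r:.2f}")  # strongly positive
```

A strongly positive r is what the statistical argument requires: if elevation and dominant frequency covary in natural scenes, a listener can exploit frequency as a cue to elevation.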
The results from these three lines of evidence were highly convergent, suggesting that phenomena as diverse as the acoustics of the human ear, the universal use of spatial terms for describing pitch, and the placement of high notes higher in musical notation all ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. ‘These results are especially fascinating, because they do not just explain the origin of the mapping between frequency and elevation,’ says Parise, ‘they also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide valuable guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments.’
The mapping between pitch and elevation has often been considered to be metaphorical, and cross-sensory correspondences have been theorized to be the basis for language development. The present findings demonstrate that, at least in the case of the mapping between pitch and elevation, such a metaphorical mapping is indeed embodied and based on the statistics of the environment, hence raising the intriguing hypothesis that language itself might have been influenced by a set of statistical mappings between naturally occurring sensory signals.
Besides the mapping between pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as that yellow–reddish colors are associated with a warm temperature or that sour foods taste sharp. This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities to be found in the natural environment.

Researchers uncover why there is a mapping between pitch and elevation

Have you ever wondered why natural languages the world over use the same spatial attributes – high versus low – to describe auditory pitch? Or why, throughout the history of musical notation, high notes have been placed high on the staff? According to a team of neuroscientists from Bielefeld University, the Max Planck Institute for Biological Cybernetics in Tübingen and the Bernstein Center Tübingen, high-pitched sounds feel ‘high’ because, in our daily lives, sounds coming from high elevations really are more likely to be higher in pitch. The study has just appeared in the science journal PNAS.

Dr. Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First, they recorded and analyzed a large sample of sounds from the natural environment and found that high-frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear and found that, owing to the convoluted shape of the outer ear – the pinna – sounds coming from high positions in space are filtered in such a way that more energy remains at higher frequencies. Finally, in a behavioural experiment, they asked human listeners to localize sounds of different frequencies and found that high-frequency sounds were systematically perceived as coming from higher positions in space.
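That first line of evidence boils down to a simple statistical claim: across natural recordings, sound frequency and source elevation are positively correlated. A minimal sketch of such an analysis (the sample values below are made up for illustration; the authors' actual dataset and frequency measures are far more sophisticated):

```python
import statistics

# Hypothetical (elevation in degrees, dominant frequency in Hz) measurements,
# standing in for estimates extracted from natural-environment recordings.
samples = [
    (-30, 400), (-20, 650), (-10, 700), (0, 900),
    (10, 1200), (20, 1500), (30, 2100), (40, 2600),
]

def pearson(pairs):
    """Pearson correlation between the two coordinates of each pair."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(samples)
print(f"frequency-elevation correlation: r = {r:.2f}")  # r = 0.96 for this toy sample
```

A strongly positive correlation on a large corpus of real recordings is exactly the regularity the authors report: sounds that are higher in pitch really do tend to come from higher in space.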

The results from these three lines of evidence were highly convergent, suggesting that phenomena as diverse as the acoustics of the human ear, the universal use of spatial terms for describing pitch, and the convention of writing high notes higher in musical notation ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. ‘These results are especially fascinating because they do not just explain the origin of the mapping between frequency and elevation,’ says Parise, ‘they also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide valuable guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments.’

The mapping between pitch and elevation has often been considered metaphorical, and cross-sensory correspondences have been theorized to be a basis for language development. The present findings demonstrate that, at least in the case of pitch and elevation, such a metaphorical mapping is in fact embodied and grounded in the statistics of the environment, raising the intriguing hypothesis that language itself might have been shaped by statistical mappings between naturally occurring sensory signals.

Beyond the mapping between pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as the association of yellow–reddish colors with warm temperatures, or of sour foods with sharpness. This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities found in the natural environment.

Filed under sound localization pitch frequency–elevation mapping acoustics neuroscience science

233 notes

Lipid levels during prenatal brain development impact autism

In a groundbreaking York University study, researchers have found that abnormal levels of lipid molecules in the brain can affect the interaction between two key neural pathways in early prenatal brain development, which can trigger autism. Environmental factors such as exposure to chemicals in some cosmetics and common over-the-counter medications can affect the levels of these lipids, the researchers report.

“We have found that the abnormal level of a lipid molecule called Prostaglandin E2 in the brain can affect the function of Wnt proteins. It is important because this can change the course of early embryonic development,” explains Professor Dorota Crawford in the Faculty of Health and a member of the York Autism Alliance Research Group.

This is the first time research has shown evidence of cross-talk between PGE2 and Wnt signalling in neuronal stem cells, according to the peer-reviewed study published in Cell Communication and Signaling.

Lead researcher and York U doctoral student Christine Wong adds, “Using real-time imaging microscopy, we determined that higher levels of PGE2 can change the Wnt-dependent behaviour of neural stem cells by increasing cell migration or proliferation. As a result, this could affect how the brain is organized and wired. Moreover, we found that an elevated level of PGE2 can increase expression of the Wnt-regulated genes Ctnnb1, Ptgs2, Ccnd1, and Mmp9. Interestingly, all these genes have been previously implicated in various autism studies.”

Autism is considered primarily a disorder of brain development, with symptoms ranging from mild to severe and including repetitive behaviour, deficits in social interaction, and impaired language. It is four times more prevalent in boys than in girls, and its incidence continues to rise. US Centers for Disease Control and Prevention (CDC) data from 2010 estimate that 1 in 68 children now has autism.

“The statistics are alarming. It’s 30 per cent higher than the previous estimate of 1 in 88 children, made only two years earlier. Perhaps we can no longer attribute this rise in autism incidence to better diagnostic tools or awareness of autism,” notes Crawford. “It’s even more apparent from the recent literature that the environment might have a greater impact on vulnerable genes, particularly in pregnancy. Our study provides some molecular evidence that the environment likely disrupts certain events occurring in early brain development and contributes to autism.”
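The “30 per cent higher” figure follows directly from the two prevalence estimates quoted above; a quick sanity check:

```python
prev_2010 = 1 / 68  # CDC 2010 estimate: 1 child in 68
prev_2008 = 1 / 88  # previous estimate: 1 child in 88

relative_increase = prev_2010 / prev_2008 - 1  # = 88/68 - 1
print(f"relative increase: {relative_increase:.0%}")  # prints: relative increase: 29%
```

The exact ratio is 88/68 ≈ 1.29, i.e. roughly the 30 per cent increase Crawford cites.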

According to Crawford, genes do not change significantly over the course of evolution, so even though genetic factors are the main cause, environmental factors such as insufficient dietary intake of fatty acids, or exposure to infections, chemicals, or drugs, can change gene expression and contribute to autism.

Filed under brain development autism prostaglandin e2 stem cells genetics neuroscience science

258 notes

Memory Accuracy and Strength Can Be Manipulated During Sleep

The sense of smell might seem intuitive, almost something you take for granted. But researchers from NYU Langone Medical Center have found that memory of specific odors depends on the ability of the brain to learn, process and recall accurately and effectively during slow-wave sleep — a deep sleep characterized by slow brain waves.

The sense of smell is one of the first faculties to fail in neurodegenerative disorders such as Alzheimer’s disease and Parkinson’s disease, and it is also disrupted in schizophrenia. Researchers believe that, down the road, a better understanding of how the brain processes odors could lead to novel therapies that target specific neurons in the brain, perhaps enhancing memory consolidation and memory accuracy.

Reporting in the Journal of Neuroscience online April 9, researchers in the lab of Donald A. Wilson, PhD, a professor in the departments of Child and Adolescent Psychiatry and Neuroscience and Physiology at NYU Langone, and a research scientist at the NYU-affiliated Nathan Kline Institute for Psychiatric Research, showed in experiments with rats that odor memory was strengthened when odors sensed the previous day were replayed during sleep. Memories deepened more when odor reinforcement occurred during sleep than when rats were awake.

When the memory of a specific odor, learned while the rats were awake, was replayed during slow-wave sleep, the rats showed a stronger memory for that odor the next day than rats that received no replay or that received replay only while awake.

However, when the research team exposed the rats during sleep to replay of an odor pattern they had not previously learned, the rats developed false memories for many different odors. When the team pharmacologically prevented neurons from communicating with each other during slow-wave sleep, the accuracy of the odor memory was also impaired.

The rats were initially trained to recognize odors through conditioning. Using electrodes in the olfactory bulb, the part of the brain responsible for perceiving smells, the researchers evoked different smell perceptions with precise patterns of electrical stimulation. By replaying those patterns electrically, they could then test the effects of slow-wave sleep manipulation.

Replay of learned electrical odors during slow-wave sleep enhanced the memory for those odors. When the learned smells were replayed while the rats were awake, the strength of the memory decreased. Finally, when a false pattern that the rat never learned was incorporated, the rats could not discriminate the smell accurately from the learned odor.

“Our findings confirm the importance of brain activity during sleep for both memory strength and accuracy,” says Dr. Wilson, the study’s senior author. “What we think is happening is that during slow-wave sleep, neurons in the brain communicate with each other, and in doing so, strengthen their connections, permitting storage of specific information.”

Dr. Wilson says these findings are the first to demonstrate that memory accuracy, not just memory strength, is altered during slow-wave sleep. In future research, Dr. Wilson and his team hope to examine how sleep disorders affect memory and perception.

Filed under memory learning olfactory bulb sleep smell perception neuroscience science

131 notes

From Learning in Infancy to Planning Ahead in Adulthood: Sleep’s Vital Role for Memory

Babies and young children make giant developmental leaps all of the time. Sometimes, it seems, even overnight they figure out how to recognize certain shapes or what the word “no” means no matter who says it. It turns out that making those leaps could be a nap away: New research finds that infants who nap are better able to apply lessons learned to new skills, while preschoolers are better able to retain learned knowledge after napping.

“Sleep plays a crucial role in learning from early in development,” says Rebecca Gómez of the University of Arizona. She will be presenting her new work, which looks specifically at how sleep enables babies and young children to learn language over time, at the Cognitive Neuroscience Society (CNS) annual meeting in Boston today, as part of a symposium on sleep and memory.

“We want to show that sleep is not just a necessary evil for the organism to stay functional,” says Susanne Diekelmann of the University of Tübingen in Germany, who is chairing the symposium. “Sleep is an active state that is essential for the formation of lasting memories.”

A growing body of research shows how memories become reactivated during sleep, and new work is shedding light on exactly when and how memories get stored and reactivated. “Sleep is a highly selective state that preferentially strengthens memories that are relevant for our future behavior,” Diekelmann says. “Sleep can also abstract general rules from single experiences, which helps us to deal more efficiently with similar situations in the future.”

Filed under sleep learning memory infants neuroscience science

664 notes

Language Structure… You’re Born with It

Humans are unique in their ability to acquire language. But how? A new study published in the Proceedings of the National Academy of Sciences shows that we are in fact born with basic, foundational knowledge of language, shedding light on the age-old linguistic “nature vs. nurture” debate.

THE STUDY

While languages differ from each other in many ways, certain aspects appear to be shared across languages. These aspects might stem from linguistic principles that are active in all human brains. A natural question then arises: are infants born with knowledge of how human words should sound? Are infants biased to consider certain sound sequences more word-like than others? “The results of this new study suggest that the sound patterns of human languages are the product of an inborn biological instinct, very much like birdsong,” said Prof. Iris Berent of Northeastern University in Boston, who co-authored the study with a research team from the International School of Advanced Studies in Italy, headed by Dr. Jacques Mehler. The study’s first author is Dr. David Gómez.

BLA, ShBA, LBA

Consider, for instance, the sound combinations that occur at the beginning of words. While many languages have words that begin with bl (e.g., blando in Italian, blink in English, and blusa in Spanish), few languages have words that begin with lb. Russian is one such language (e.g., lbu, a word related to lob, “forehead”), but even in Russian such words are extremely rare and vastly outnumbered by words starting with bl. Linguists have suggested that such patterns occur because human brains are biased to favor syllables such as bla over lba. In line with this possibility, past experimental research from Dr. Berent’s lab has shown that adult speakers display such preferences even if their native language has no words resembling either bla or lba. But where does this knowledge stem from? Is it due to some universal linguistic principle, or to adults’ lifelong experience of listening to and producing their native language?
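The asymmetry linguists describe is, at bottom, a frequency count over word onsets. A toy illustration (the word list is a small made-up sample mixing the languages mentioned above, not a real corpus):

```python
# Illustrative mini-lexicon mixing English, Italian, Spanish, and a
# transliterated Russian form; real typological surveys use large corpora.
words = [
    "blink", "blando", "blusa", "black", "blue", "blend",  # bl- onsets
    "lbu",                                                 # rare Russian lb- onset
    "bread", "lamp", "globe",                              # other onsets
]

bl_initial = sum(w.startswith("bl") for w in words)
lb_initial = sum(w.startswith("lb") for w in words)
print(f"bl-initial: {bl_initial}, lb-initial: {lb_initial}")  # bl-initial: 6, lb-initial: 1
```

Counts like these, run over real lexicons, show bl- onsets vastly outnumbering lb- onsets across languages, which is the typological pattern the proposed inborn bias is meant to explain.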

THE EXPERIMENT

These questions motivated our team to look carefully at how young babies perceive different types of words. We used near-infrared spectroscopy, a silent and non-invasive technique that tracks how the oxygenation of the cortex (the first few centimeters of gray matter just below the scalp) changes over time, to measure the brain responses of Italian newborns listening to good and bad word candidates of the kind described above (e.g., blif, lbif).

Working with Italian newborn infants and their families, we observed that newborns react differently to good and bad word candidates, much as adults do. Young infants have not yet learned any words; they do not even babble yet. Still, they share with us a sense of how words should sound. This finding shows that we are born with basic, foundational knowledge about the sound patterns of human languages.

It is hard to imagine how different languages would sound if humans did not share this kind of knowledge. We are fortunate that we do, and so our babies come into the world ready to recognize the sound patterns of words, no matter which language they grow up with.

Filed under language language acquisition speech perception phonology linguistics neuroscience science
