Neuroscience

Articles and news from the latest research reports.

Posts tagged science

115 notes

Presence or absence of early language delay alters anatomy of the brain in autism
A new study led by researchers from the University of Cambridge has found that a common characteristic of autism – language delay in early childhood – leaves a ‘signature’ in the brain. The results are published today (23 September) in the journal Cerebral Cortex.
The researchers studied 80 adult men with autism: 38 who had delayed language onset and 42 who did not. They found that language delay was associated with differences in brain volume in a number of key regions, including the temporal lobe, insula and ventral basal ganglia, which were all smaller in those with language delay, and brainstem structures, which were larger in those with delayed language onset.
Additionally, they found that current language function is associated with a specific pattern of grey and white matter volume changes in some key brain regions, particularly temporal, frontal and cerebellar structures.
The Cambridge researchers, in collaboration with King’s College London and the University of Oxford, studied participants who were part of the MRC Autism Imaging Multicentre Study (AIMS).
Delayed language onset – defined as when a child’s first meaningful words occur after 24 months of age, or their first phrase occurs after 33 months of age – is seen in a subgroup of children with autism, and is one of the clearest features triggering an assessment for developmental delay in children, including an assessment of autism.
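That two-part criterion amounts to a simple rule. As a rough sketch, the function name and month thresholds below are taken from the definition in the text, not from the study's materials:

```python
def has_delayed_language_onset(first_words_months, first_phrase_months):
    """Apply the study's stated criterion for delayed language onset.

    A child is classified as language-delayed if first meaningful words
    appeared after 24 months of age, or the first phrase after 33 months.
    """
    return first_words_months > 24 or first_phrase_months > 33
```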
“Although people with autism share many features, they also have a number of key differences,” said Dr Meng-Chuan Lai of the Cambridge Autism Research Centre, the paper’s lead author. “Language development and ability are one major source of variation within autism. This new study will help us understand the substantial variety within the umbrella category of ‘autism spectrum’. We need to move beyond investigating average differences in individuals with and without autism, and move towards identifying key dimensions of individual differences within the spectrum.”
He added: “This study shows how the brain in men with autism varies based on their early language development and their current language functioning. This suggests there are potentially long-lasting effects of delayed language onset on the brain in autism.”
Last year, the American Psychiatric Association removed Asperger Syndrome (Asperger’s Disorder) as a separate diagnosis from its diagnostic manual (DSM-5), and instead subsumed it within ‘autism spectrum disorder.’ The change was one of many controversial decisions in DSM-5, the main manual for diagnosing psychiatric conditions.
“This new study shows that a key feature of Asperger Syndrome, the absence of language delay, leaves a long-lasting neurobiological signature in the brain,” said Professor Simon Baron-Cohen, senior author of the study. “Although we support the view that autism lies on a spectrum, subgroups based on developmental characteristics, such as Asperger Syndrome, warrant further study.”
“It is important to note that we found both differences and shared features in individuals with autism who had or had not experienced language delay,” said Dr Lai. “When asking ‘Is autism a single spectrum or are there discrete subgroups?’, the answer may be both.”

Filed under autism language language development brain volume individual differences neuroscience science

147 notes

Brain Wave May Be Used to Detect What People Have Seen, Recognize

Brain activity can be used to tell whether someone recognizes details they encountered in normal, daily life, which may have implications for criminal investigations and use in courtrooms, new research shows.

The findings, published in Psychological Science, a journal of the Association for Psychological Science, suggest that a particular brain wave, known as P300, could serve as a marker that identifies places, objects, or other details that a person has seen and recognizes from everyday life.

Research using EEG recordings of brain activity has shown that the P300 brain wave tends to be large when a person recognizes a meaningful item among a list of nonmeaningful items. Using P300, researchers can give a subject a test called the Concealed Information Test (CIT) to try to determine whether they recognize information that is related to a crime or other event.

Most studies investigating P300 and recognition have been conducted in lab settings that are far removed from the kinds of information a real witness or suspect might be exposed to. This new study marks an important advance, says lead researcher John B. Meixner of Northwestern University, because it draws on details from activities in participants’ normal, daily lives.

“Much like a real crime, our participants made their own decisions and were exposed to all of the distracting information in the world,” he explains.

“Perhaps the most surprising finding was the extent to which we could detect very trivial details from a subject’s day, such as the color of the umbrella that the participant had used,” says Meixner. “This precision is exciting for the future because it indicates that relatively peripheral crime details, such as physical features of the crime scene, might be usable in a real-world CIT — though we still need to do much more work to learn about this.”

To achieve a more realistic CIT, Meixner and co-author J. Peter Rosenfeld outfitted 24 college student participants with small cameras that recorded both video and sound — the students wore the cameras clipped to their clothes for 4 hours as they went about their day.

For half of the students, the researchers used the recordings to identify details specific to each person’s day, which became “probe” items for that person. The researchers also came up with corresponding, “irrelevant” items that the student had not encountered — if the probe item was a specific grocery store, for example, the irrelevant items might include other grocery stores.

For the other half of the students, the “probe” items related to details or items they had not encountered, but which were instead drawn from the recordings of other participants. The researchers wanted to simulate a real investigation, in which a suspect with knowledge of a crime would be shown the same crime-related details as a suspect who may have no crime-related knowledge.

The next day, all of the students returned to the lab and were shown a series of words that described different details or items (i.e., the probe and irrelevant items), while their brain activity was recorded via EEG.

The results showed that the P300 was larger for probe items than for irrelevant items, but only for the students who had actually seen or encountered the probe.

Further analyses revealed that P300 responses effectively distinguished probe items from irrelevant items on the level of each individual participant, suggesting that it is a robust and reliable marker of recognition.
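The core comparison, mean EEG amplitude in a P300 window for probe versus irrelevant trials, might look something like the following sketch. The array shapes, electrode choice, and 300–600 ms window are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def p300_recognition_score(probe_trials, irrelevant_trials, times,
                           window=(0.3, 0.6)):
    """Compare P300 amplitude for probe vs. irrelevant items.

    probe_trials, irrelevant_trials: arrays of shape (n_trials, n_samples)
    holding EEG voltage from a midline electrode (e.g. Pz, an assumption);
    times: sample times in seconds relative to stimulus onset.
    Returns the difference in mean amplitude inside the P300 window;
    a clearly positive value would suggest recognition of the probe.
    """
    mask = (times >= window[0]) & (times <= window[1])
    probe_mean = probe_trials[:, mask].mean()
    irrelevant_mean = irrelevant_trials[:, mask].mean()
    return probe_mean - irrelevant_mean
```

In practice the single-subject analyses described above would add a statistical test (e.g. a bootstrap over trials) rather than a raw difference, but the amplitude contrast is the same.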

These findings have implications for memory research, but they may also have real-world application in the domain of criminal law given that some countries, like Japan and Israel, use the CIT in criminal investigations.

“One reason that the CIT has not been used in the US is that the test may not meet the criteria to be admissible in a courtroom,” says Meixner. “Our work may help move the P300-based CIT one step closer to admissibility by demonstrating the test’s validity and reliability in a more realistic context.”

Meixner, Rosenfeld, and colleagues plan on investigating additional factors that may impact detection, including whether images from the recordings may be even more effective at eliciting recognition than descriptive words – preliminary data suggest this may be the case.

Filed under memory eyewitness memory brain activity neuroimaging P300 psychology neuroscience science

175 notes

Dying brain cells cue new brain cells to grow in songbird
Brain cells that multiply to help birds sing their best during breeding season are known to die back naturally later in the year. For the first time researchers have described the series of events that cues new neuron growth each spring, and it all appears to start with a signal from the expiring cells the previous fall that primes the brain to start producing stem cells.
If scientists can further tap into the process and understand how those signals work, it might lead to ways to exploit these signals and encourage replacement of cells in human brains that have lost neurons naturally because of aging, severe depression or Alzheimer’s disease, said Tracy Larson, a University of Washington doctoral student in biology. She’s lead author of a paper in the Sept. 23 Journal of Neuroscience on brain cell birth that follows natural brain cell death.
Neuroscientists have long known that new neurons are generated in the adult brains of many animals, but the birth of new neurons – or neurogenesis – appears to be limited in mammals and humans, especially where new neurons are generated after there’s been a blow to the head, stroke or some other physical loss of brain cells, Larson said. That process, referred to as “regenerative” neurogenesis, has been studied in mammals since the 1990s.
This is the first published study to examine the brain’s ability to replace cells that have been lost naturally, Larson said.
“Many neurodegenerative disorders are not injury-induced,” the co-authors write, “so it is critical to determine if and how reactive neurogenesis occurs under non-injury-induced neurodegenerative conditions.”
The researchers worked with Gambel’s white-crowned sparrows, a medium-sized species 7 inches (18 centimeters) long that breeds in Alaska, then winters in California and Mexico. Sometimes in flocks of more than 100 birds, they can be so plentiful in parts of California that they are considered pests. The ones in this work came from Eastern Washington.
Like most songbirds, Gambel’s white-crowned sparrows experience growth in the area of the brain that controls song output during the breeding season when a superior song helps them attract mates and define their territories. At the end of the season, probably because having extra cells exacts a toll in terms of energy and steroids they require, the cells begin dying naturally and the bird’s song degrades.
Gambel’s white-crowned sparrows are particularly good to work with because their breeding cycle is closely tied to the amount of sunlight they receive. Give them 20 hours of light in the lab, along with the right increase of steroids, and they are ready to breed. Cut the light to eight to 12 hours and taper the steroids, and the breeding behavior ends.
“As the hormone levels decrease, the cells in the part of the brain controlling song no longer have the signal to ‘stay alive,’” Larson said. “Those cells undergo programmed cell death – or cell suicide as some call it. As those cells die it is likely they are releasing some kind of signal that somehow gets transmitted to the stem cells that reside in the brain. Whatever that signal is then triggers those cells to divide and replace the loss of the cell that sent the signal to begin with.”
The next spring, all that’s needed is for steroids to ramp up and new cells start to proliferate in the song center of the brain.
“This paper doesn’t describe the exact nature of the signals that stimulate proliferation,” Larson said. “We’re just describing the phenomenon that there is this connection between cells dying and this stem cell proliferation. Finding the signal is the next step.”
“Tracy really nailed this down by going in and blocking cell death at the end of the breeding season,” said Eliot Brenowitz, UW professor of psychology and of biology, and co-author on the paper. “There are chemicals you can use to turn off the cell suicide pathway. When this was done, far fewer stem cells divided. You don’t get that big uptick in new neurons being born. That’s important because it shows there’s something about the cells dying that turns on the replacement process.”
“There’s no reason to think what goes on in a bird brain doesn’t also go on in mammal brains, in human brains,” Brenowitz said. “As far as we know, the molecules are the same, the pathways are the same, the hormones are the same. That’s the ultimate purpose of all this, to identify these molecular mechanisms that will be of use in repairing human brains.”
In mammals, the area of the brain that controls the sense of smell and the one that is thought to have a role in memories can produce tiny numbers of new brain cells, but it is not understood how or why. The number of new cells is so low that trying to identify and quantify whether dying cells are being replaced, and if so the steps involved, is much more difficult than when using a songbird like Gambel’s white-crowned sparrow, Larson and Brenowitz said.

Filed under songbirds brain cells neurogenesis cell death neuroscience science

142 notes

Taste memory

Have you ever eaten something totally new and it made you sick? Don’t give up; if you try the same food in a different place, your brain will be more “forgiving” of the new attempt. In a new study conducted by the Sagol Department of Neurobiology at the University of Haifa, researchers found for the first time that there is a link between the areas of the brain responsible for taste memory in a negative context and those areas in the brain responsible for processing the memory of the time and location of the sensory experience. When we experience a new taste without a negative context, this link doesn’t exist.

The area of the brain responsible for storing memories of new tastes is the taste cortex, found in a relatively insulated area of the human brain known as the insular cortex. The area responsible for formulating a memory of the place and time of the experience (the episode) is the hippocampus. Until now, researchers assumed that there was no direct connection between these areas – i.e., the processing of information about a taste is not related to the time or the place one experiences the taste. The accepted thinking was that a negative experience – for example, being exposed to a bad taste – would be negative in the same way anywhere, and the brain would create a memory of the taste itself, divorced from the time or place.

But in this new study, conducted by doctoral student Adaikkan Chinnakkaruppan in the laboratory of Prof. Kobi Rosenblum of the Sagol Department of Neurobiology at the University of Haifa, in cooperation with the Riken Institute, the leading brain research institute in Tokyo, the researchers demonstrate for the first time that there is a functional link between the two brain regions.

In the study the researchers sought to examine the relationship between the taste cortex (which is responsible for taste memory) and three different areas in the hippocampus: CA1, which is responsible for encoding the concept of space (where we are located); DG, the area responsible for encoding the time relationship between events; and CA3, responsible for filling in missing information. To do this the researchers took ordinary mice and mice genetically engineered by their Japanese colleagues so that these three areas of the brain functioned normally but lacked plasticity, preventing new memories that rely on them from being formed.

“In brain research, the manipulation we do must be very delicate and precise, otherwise the changes can make the entire experiment irrelevant to proving or refuting the research hypothesis,” said Prof. Rosenblum.

The mice were exposed to two new tastes, one that caused stomach pains (to mimic exposure to toxic food) and another that didn’t cause that feeling. Comparing the two groups showed that when the new taste was not accompanied by an association with toxic food, there was no difference between the normal mice and those whose various functional areas in the hippocampus didn’t allow plasticity. But when the taste caused a negative feeling, there was clear involvement of the CA1 area, which is responsible for encoding space.

“The significance of this is that the moment we go back to the same place at which we experienced the taste associated with a bad feeling, subconsciously the negative memory will be much stronger than if we come to taste the same taste in a totally different place,” explained Prof. Rosenblum. Similarly, the DG area, which is responsible for encoding the time between incidents, became involved when more time passed between the new taste and the stomach discomfort. “This means that even during a simple associative taste, the brain operates the hippocampus to produce an integrated experience that includes general information about the time between events and their location,” he said.
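The pattern of results described above can be summarized as a small lookup table. The keys and labels below paraphrase the text as a mnemonic; they are not terminology or data from the paper:

```python
# Which hippocampal subregion's plasticity mattered in each condition,
# per the reported results. None means no hippocampal dependence was seen.
HIPPOCAMPAL_INVOLVEMENT = {
    ("neutral taste", "immediate"): None,    # new taste, no malaise
    ("aversive taste", "immediate"): "CA1",  # spatial context encoding
    ("aversive taste", "delayed"): "DG",     # time-between-events encoding
}

def expected_region(taste, timing):
    """Return the subregion implicated for a given condition, if any."""
    return HIPPOCAMPAL_INVOLVEMENT.get((taste, timing))
```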

The findings, which were recently published in the Journal of Neuroscience, expose the complexity and richness of the simple sensory experiences that are engraved in our brains and that in most cases we aren’t even aware of. Moreover, the study can help explain behavioral results and the difficulty in producing memories when certain areas of the brain become dysfunctional following an illness or accident. The better we understand the encoding of simple sensory experiences in the brain and the link between the feeling, time and place of the experiences, the better we will understand the complex process of creating memories and storing them in our brains.

(Source: newmedia-eng.haifa.ac.il)

Filed under taste taste learning hippocampus insular cortex plasticity neuroscience science

207 notes

Blood test may help determine who is at risk for psychosis
A study led by University of North Carolina at Chapel Hill researchers represents an important step forward in the accurate diagnosis of people who are experiencing the earliest stages of psychosis.
Psychosis includes hallucinations or delusions that define the development of severe mental disorders such as schizophrenia. Schizophrenia emerges in late adolescence and early adulthood and affects about 1 in every 100 people. In severe cases, a young person’s life can be profoundly compromised, and the burden on family members can be almost as great.
The study published in the journal Schizophrenia Bulletin reports preliminary results showing that a blood test, when used in psychiatric patients experiencing symptoms that are considered to be indicators of a high risk for psychosis, identifies those who later went on to develop psychosis.
“The blood test included a selection of 15 measures of immune and hormonal system imbalances as well as evidence of oxidative stress,” said Diana O. Perkins, MD, MPH, professor of psychiatry in the UNC School of Medicine and corresponding author of the study. She is also medical director of UNC’s Outreach and Support Intervention Services (OASIS) program for schizophrenia.
“While further research is required before this blood test could be clinically available, these results provide evidence regarding the fundamental nature of schizophrenia, and point towards novel pathways that could be targets for preventative interventions,” Perkins said.
Clark D. Jeffries, PhD, bioinformatics scientist at the UNC-based Renaissance Computing Institute (RENCI), is a co-author of the study, which was conducted as part of the North American Prodrome Longitudinal Study (NAPLS), an international effort to understand risk factors and mechanisms for development of psychotic disorders. 
“Modern, computer-based methods can readily discover seemingly clear patterns from nonsensical data,” said Jeffries. “Added to that, scientific results from studies of complex disorders like schizophrenia can be confounded by many hidden dependencies. Thus, stringent testing is necessary to build a useful classifier. We did that.”
The study concludes that the multiplex blood assay, if independently replicated and if integrated with studies of other classes of biomarkers, has the potential to be of high value in the clinical setting.
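The kind of stringent testing Jeffries describes typically means evaluating a biomarker classifier only on subjects held out of training. A minimal sketch follows, using a nearest-centroid classifier with leave-one-out cross-validation; the classifier choice and synthetic panel shape are illustrative assumptions, not the method actually used in the study:

```python
import numpy as np

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier.

    X: (n_subjects, n_features) biomarker panel (e.g. 15 analytes);
    y: binary labels (1 = later converted to psychosis, an assumed coding).
    Holding each subject out of training guards against discovering
    "seemingly clear patterns" that are really overfitting.
    """
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # train on everyone else
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / len(y)
```

A real analysis would add permutation testing and an independent replication sample, which is exactly what the authors flag as the next requirement.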
Filed under oxidative stress psychosis schizophrenia blood test inflammation neuroscience science

214 notes

Neuroscientists challenge long-held understanding of the sense of touch
Different types of nerves and skin receptors work in concert to produce sensations of touch, University of Chicago neuroscientists argue in a review article published Sept. 22, 2014, in the journal Trends in Neurosciences. Their assertion challenges a long-held principle in the field — that separate groups of nerves and receptors are responsible for distinct components of touch, like texture or shape. They hope to change the way somatosensory neuroscience is taught and how the science of touch is studied.
Sliman Bensmaia, PhD, assistant professor of organismal biology and anatomy at the University of Chicago, and Hannes Saal, PhD, a postdoctoral scholar in Bensmaia’s lab, reviewed more than 100 research studies on the physiological basis of touch published over the past 57 years. They argue that evidence once thought to show that different groups of receptors and nerves, or afferents, were responsible for conveying information about separate components of touch to the brain actually demonstrates that these afferents work together to produce the complex sensation.
"Any time you touch an object, all of these afferents are active together," Bensmaia said. "They each convey information about all aspects of an object, whether it’s the shape, the texture, or its motion across the skin."
Three different types of afferents convey information about touch to the brain: slowly adapting type 1 (SA1), rapidly adapting (RA) and Pacinian (PC). According to the traditional view, SA1 afferents are responsible for communicating information about shape and texture of objects, RA afferents help sense motion and grip control, and PC afferents detect vibrations.
In the past, Bensmaia said, this classification system has been supported by experiments using mechanical devices to elicit one or more of these specific components of touch. For example, responses to texture are often generated using a rotating, cylindrical drum covered with a Braille-like pattern of raised dots. Study subjects would place a finger on the drum as it rotated, and scientists recorded the neural responses.
Such experiments showed that SA1 afferents responded very strongly to this artificial stimulus, and RA and PC afferents did not, thus the association of SA1s with texture. However, in experiments in which subjects moved a finger across sandpaper — the quintessential example of the type of textures we encounter in the real world — SA1 afferents did not respond at all.
Bensmaia also pointed out discrepancies in the predominant thinking about how we discern shape. Perception of shapes has generally been tested using devices with raised or embossed letters to test a subject’s ability to interpret text by touch. These experiments also showed that such inputs produced a strong SA1 response, so they were implicated in perception of shape as well.
In the 1980s, however, researchers developed a device meant to help blind people read by generating vibrating patterns in the shape of letters on an array of pins. While the device was not a commercial success, people were able to use it to detect letter shapes and read, although experiments showed that it activated RA and PC afferents, not the supposedly shape-detecting SA1s.
Bensmaia said such experiments show how devices created to generate artificial stimuli focusing on individual components of the sense of touch can result in misleading findings. Some types of afferents are better than others at detecting texture or shape, for example, but all of them respond in their own way and contribute to the overall sensation.
"To get a good picture of how stimulus information is being conveyed in these afferent populations, you have to look at a diverse set of stimuli that spans the range of what you might feel in everyday tactile experience," he said.
Instead of thinking of individual groups of afferents working separately to process different components of the sense of touch, Bensmaia said we should think of all of them working in concert, much like individual musicians playing together to create a band's overall sound. Each musician contributes in his or her own way. Emphasizing one instrument or removing another can change the character of a song, but no single sound is responsible for the entire performance.
Adopting this new way of thinking will have far-reaching implications for both the study of the sense of touch and the design of future research, Bensmaia said.
"I think it’s going to change neuroscience textbooks, and by extension it’s going to change the way somatosensory neuroscience is taught. It’s really the starting point for everything."

Filed under sense of touch perception somatosensory cortex neuroscience science

36 notes

Statin Use Following Hemorrhagic Stroke Associated with Improved Survival

Patients who were treated with a statin in the hospital after suffering from a hemorrhagic stroke were significantly more likely to survive than those who were not, according to a study published today in JAMA Neurology. This study was conducted by the same researchers who recently discovered that the use of cholesterol-lowering statins can improve survival in victims of ischemic stroke.

Ischemic stroke is caused by a constriction or obstruction of a blood vessel that blocks blood from reaching areas of the brain, while hemorrhagic stroke, also known as intracerebral hemorrhage, is bleeding in the brain.

“Some previous research has suggested that treating patients with statins after they suffer hemorrhagic stroke may increase their long-term risk of continued bleeding,” said lead author Alexander Flint, MD, PhD, of the Kaiser Permanente Department of Neuroscience in Redwood City, Calif. “Yet the findings of our study suggest that stopping statin treatments for these patients may carry substantial risks.”

The study included 3,481 individuals who were admitted to any of 20 Kaiser Permanente hospitals in Northern California with a hemorrhagic stroke over a 10-year period. Researchers looked at patient survival and discharge 30 days after the stroke.

Patients treated with a statin while in the hospital were more likely to be alive 30 days after suffering a hemorrhagic stroke than those who were not treated with a statin — 81.6 percent versus 61.3 percent. Patients treated with a statin while in the hospital were also more likely to be discharged to home or an acute rehabilitation facility than those who were not — 51.1 percent compared to 35.0 percent.

Patients whose statin therapy was discontinued — that is, patients taking a statin as an outpatient prior to experiencing a hemorrhagic stroke who did not receive a statin as an inpatient — had a mortality rate of 57.8 percent compared with a mortality rate of 18.9 percent for patients using a statin before and during hospitalization.

The researchers concluded that statin use is strongly associated with improved outcomes after hemorrhagic stroke, and that discontinuing statin use is strongly associated with worsened outcomes after hemorrhagic stroke.

(Source: share.kaiserpermanente.org)

Filed under stroke statin intracerebral hemorrhage neuroscience science

94 notes

Brainwave Test Could Improve Autism Diagnosis and Classification
A new study by researchers at Albert Einstein College of Medicine of Yeshiva University suggests that measuring how fast the brain responds to sights and sounds could help in objectively classifying people on the autism spectrum and may help diagnose the condition earlier. The paper was published today in the online edition of the Journal of Autism and Developmental Disorders.
The U.S. Centers for Disease Control and Prevention estimates that 1 in 68 children has been identified with an autism spectrum disorder (ASD). The signs and symptoms of ASD vary significantly from person to person, ranging from mild social and communication difficulties to profound cognitive impairments.
“One of the challenges in autism is that we don’t know how to classify patients into subgroups or even what those subgroups might be,” said study leader Sophie Molholm, Ph.D., associate professor in the Dominick P. Purpura Department of Neuroscience and the Muriel and Harold Block Faculty Scholar in Mental Illness in the department of pediatrics at Einstein. “This has greatly limited our understanding of the disorder and how to treat it.”
Autism is diagnosed based on a patient’s behavioral characteristics and symptoms. “These assessments can be highly subjective and require a tremendous amount of clinical expertise,” said Dr. Molholm. “We clearly need a more objective way to diagnose and classify this disorder.”
An earlier study by Dr. Molholm and colleagues suggested that brainwave electroencephalogram (EEG) recordings could potentially reveal how severely ASD individuals are affected. That study found that children with ASD process sensory information—such as sound, touch and vision—less rapidly than typically developing children do.
The current study was intended to see whether sensory processing varies along the autism spectrum. Forty-three ASD children aged 6 to 17 were presented with either a simple auditory tone, a visual image (red circle), or a tone combined with an image, and instructed to press a button as soon as possible after hearing the tone, seeing the image or seeing and hearing the two stimuli together. Continuous EEG recordings were made via 70 scalp electrodes to determine how fast the children’s brains were processing the stimuli.
The speed with which the subjects processed auditory signals strongly correlated with the severity of their symptoms: the more time required for an ASD individual to process the auditory signals, the more severe that person’s autistic symptoms. “This finding is in line with studies showing that, in people with ASD, the microarchitecture in the brain’s auditory center differs from that of typically developing children,” Dr. Molholm said.
The study also found a significant though weaker correlation between the speed of processing combined audio-visual signals and ASD severity. No link was observed between visual processing and ASD severity.
“This is a first step toward developing a biomarker of autism severity—an objective way to assess someone’s place on the ASD spectrum,” said Dr. Molholm. “Using EEG recordings in this way might also prove useful for objectively evaluating the effectiveness of ASD therapies.”
In addition, EEG recordings might help diagnose ASD earlier. “Early diagnosis allows for earlier treatment—which we know increases the likelihood of a better outcome,” said Dr. Molholm. “But currently, fewer than 15 percent of children with ASD are diagnosed before age 4. We might be able to adapt this technology to allow for early ASD detection and therapy for a much larger percentage of children.”

Filed under autism ASD EEG brainwaves neuroscience science

68 notes

(Image caption: A neuron in which the axon originates at a dendrite. Signals arriving at this dendrite are forwarded more efficiently than signals arriving elsewhere. Credit: Alexei V. Egorov, 2014)
Communication without detours
Certain nerve cells take a shortcut when transmitting information: signals are conducted not through the cell’s center, but around it, like traffic on a bypass road. The previously unknown nerve cell shape is presented in the journal “Neuron” by a research team from Heidelberg, Mannheim and Bonn.
Nerve cells communicate using electrical signals. Via widely ramified cell structures, the dendrites, they receive signals from other neurons and transmit them along a thin cell extension, the axon, to other nerve cells. The axon and dendrites are usually connected through the neuron’s cell body. A team of scientists at the Bernstein Center Heidelberg-Mannheim, Heidelberg University, and the University of Bonn has now discovered neurons in which the axon arises directly from one of the dendrites. Much like taking a bypass road, this facilitates signal transmission within the cell.
“Input signals at this dendrite do not need to be propagated across the cell body,” explains Christian Thome of the Bernstein Center Heidelberg-Mannheim and Heidelberg University, one of the study’s two first authors. For their analyses, the scientists selectively stained the sites of origin of the axons of so-called pyramidal cells in the hippocampus, a brain region involved in memory processes. The surprising result: “We found that in more than half of the cells, the axon does not emerge from the cell body, but arises from a lower dendrite,” Thome says.
The researchers then studied the effect of signals received at this special dendrite. For this purpose, they injected into the brain tissue of mice a form of the neurotransmitter glutamate that can be activated by light pulses. A high-resolution microscope allowed the neuroscientists to direct the light beam onto a specific dendrite; by then activating the messenger substance, they simulated an excitatory input signal.
“Our measurements indicate that dendrites directly connected to the axon actively propagate even small input stimuli and activate the neuron,” says second first author Tony Kelly, a member of the Sonderforschungsbereich (SFB) 1089 at the University of Bonn. A computer simulation by the scientists predicts that this effect is particularly pronounced when the flow of information from other dendrites to the axon is suppressed by inhibitory input signals at the cell body.
“That way, information transmitted by this special dendrite influences the behavior of the nerve cell more than input from any other dendrite,” Kelly says. In a next step, the researchers aim to determine which biological function is strengthened through this specific dendrite, and thus what might explain the unusual shape of these neurons.

Filed under hippocampus nerve cells pyramidal cells dendrites axons neuroscience science

102 notes

Evidence Supports Deep Brain Stimulation for Obsessive-Compulsive Disorder

Available research evidence supports the use of deep brain stimulation (DBS) for patients with obsessive-compulsive disorder (OCD) who don’t respond to other treatments, concludes a review in the October issue of Neurosurgery, official journal of the Congress of Neurological Surgeons (CNS). The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.

Based on evidence, two specific bilateral DBS techniques are recommended for treatment of carefully selected patients with OCD, according to a new clinical practice guideline endorsed by the CNS and the American Association of Neurological Surgeons. While calling for further research in key areas, Dr. Clement Hamani of Toronto Western Hospital and coauthors emphasize that patients with OCD symptoms that don’t respond to other treatments should continue to have access to DBS.

Deep Brain Stimulation for OCD—What’s the Evidence?

Dr. Hamani led a multispecialty expert group in performing a systematic review of research on the effectiveness of DBS for OCD. Deep brain stimulation—placement of electrodes in specific areas of the brain, followed by electrical stimulation of those areas—has become an important treatment for patients with Parkinson’s disease and other movement disorders.

Although many patients with OCD respond well to medications and/or psychotherapy, 40 to 60 percent continue to experience symptoms despite treatment. Over the past decade, a growing number of reports have suggested that DBS may be an effective alternative in these “medically refractory” cases.

Dr. Hamani and colleagues were tasked with analyzing the supporting evidence and developing an initial clinical practice guideline for the use of DBS for patients with OCD. The review and guideline development process was sponsored by the American Society of Stereotactic and Functional Neurosurgery and the CNS. Out of more than 350 papers, the reviewers identified seven high-quality studies evaluating DBS for OCD.

Based on that evidence, they conclude that bilateral stimulation (on both sides of the brain) of two brain “targets”—areas called the subthalamic nucleus and the nucleus accumbens—can be regarded as effective treatments for OCD. In controlled clinical trials, both techniques improved OCD symptoms by around 30 percent on a standard rating scale.

While Research Proceeds, Well-Selected Patients with Severe, Treatment-Resistant OCD Should Have Access to DBS

That evidence forms the basis for a clinical guideline stating that bilateral DBS is a “reasonable therapeutic option” for patients with severe OCD that does not respond to other treatments. The guideline also notes that there is “insufficient evidence” supporting the use of any type of unilateral DBS target (one side of the brain) for OCD.

The review highlights the difficulties of studying the effectiveness of DBS for OCD—because most patients respond to medical treatment, studies of this highly specialized treatment typically include only small numbers of patients. Dr. Hamani and coauthors identify some priorities for future research: particularly to identify the most effective brain targets and the subgroups of patients most likely to benefit.

Despite the limited evidence base, DBS therapy for OCD has been approved by the Food and Drug Administration under a humanitarian device exemption. Dr. Hamani and coauthors note that various safeguards are in place to ensure appropriate use, and prevent overuse, of DBS for OCD.

While research continues, they believe that functional neurosurgeons should continue to work with other specialists to ensure that patients with severe, medically refractory OCD continue to have access to potentially beneficial DBS therapy.

(Source: wolterskluwerhealth.com)

Filed under OCD deep brain stimulation nucleus accumbens DBS neuroscience science
