Neuroscience

Articles and news from the latest research reports.

Teens’ Self-Consciousness Linked With Specific Brain, Physiological Responses

Teenagers are famously self-conscious, acutely aware and concerned about what their peers think of them. A new study reveals that this self-consciousness is linked with specific physiological and brain responses that seem to emerge in adolescence.

“Our study identifies adolescence as a unique period of the lifespan in which self-conscious emotion, physiological reactivity, and activity in specific brain areas converge and peak in response to being evaluated by others,” says psychological scientist and lead researcher Leah Somerville of Harvard University.

The findings, published in Psychological Science, a journal of the Association for Psychological Science, suggest that teens’ sensitivity to social evaluation might be explained by shifts in physiological and brain function during adolescence, in addition to the numerous sociocultural changes that take place during the teen years.

Somerville and colleagues wanted to investigate whether just being looked at — a minimal social-evaluation situation — might register with greater importance, arousal, and intensity for adolescents than for either children or adults. The researchers hypothesized that late-developing regions of the brain, such as the medial prefrontal cortex (MPFC), could play a unique role in the way teens monitor these types of social evaluative contexts.

The researchers had 69 participants, ranging in age from 8 to almost 23 years old, come to the lab and complete measures that gauged emotional, physiological, and neural responses to social evaluation.

They told the participants that they would be testing a new video camera embedded in the head coil of a functional MRI scanner. The participants watched a screen indicating whether the camera was “off,” “warming up,” or “on,” and were told that a same-sex peer of about the same age would be watching the video feed and would be able to see them when the camera was on. In reality, there was no camera in the MRI machine.

The consistency and strength of the resulting data took the researchers by surprise:

“We were concerned about whether simply being looked at was a strong enough ‘social evaluation’ to evoke emotional, physiological and neural responses,” says Somerville. “Our findings suggest that being watched, and to some extent anticipating being watched, were sufficient to elicit self-conscious emotional responses at each level of measurement.”

Specifically, participants’ self-reported embarrassment, physiological arousal, and MPFC activation showed reactivity to social evaluation that seemed to converge and peak during adolescence.

Adolescent participants also showed increased functional connectivity between the MPFC and striatum, an area of the brain that mediates motivated behaviors and actions. Somerville and colleagues speculate that the MPFC-striatum pathway may be a route by which social evaluative contexts influence behavior. The link may provide an initial clue as to why teens often engage in riskier behaviors when they’re with their peers.
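Functional connectivity of the kind measured here is commonly summarized as the correlation between two regions’ fMRI time series. A minimal sketch with synthetic (hypothetical) signals, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD-like time series for two regions: a shared signal
# plus independent noise yields correlated activity.
shared = rng.standard_normal(200)
mpfc = shared + 0.5 * rng.standard_normal(200)
striatum = shared + 0.5 * rng.standard_normal(200)

# Functional connectivity is often quantified as the Pearson
# correlation between the two regions' time series.
connectivity = np.corrcoef(mpfc, striatum)[0, 1]
print(f"MPFC-striatum connectivity (Pearson r): {connectivity:.2f}")
```

With this noise level the correlation lands around 0.8; real resting-state or task connectivity analyses add preprocessing (motion correction, filtering, confound regression) before this step.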

Filed under adolescence self-consciousness prefrontal cortex social cognition psychology neuroscience science

IVF for male infertility linked to increased risk of intellectual disability and autism in children

In the first study to compare all available IVF treatments and the risk of neurodevelopmental disorders in children, researchers find that IVF treatments for the most severe forms of male infertility are associated with an increased risk of intellectual disability and autism in children.

Autism and intellectual disability remain rare outcomes of IVF, and whilst some of the risk is attributable to multiple births, the study provides important evidence for parents and clinicians on the relative risks of modern IVF treatments.

Published in JAMA today, the study is the largest of its kind and was led by researchers at King’s College London (UK), Karolinska Institutet (Sweden) and Mount Sinai School of Medicine in New York (USA).

By using anonymous data from the Swedish national registers, researchers analysed more than 2.5 million birth records from 1982 to 2007 and followed up whether children had a clinical diagnosis of autism or intellectual disability (defined as having an IQ below 70) up until 2009. Of the 2.5 million children, 1.2% (30,959) were born following IVF. Of the 6,959 diagnosed with autism, 103 were born after IVF; of the 15,830 with intellectual disability, 180 were born after IVF. Multiple pregnancies are a known risk factor for pre-term birth and some neurodevelopmental disorders, so the researchers also compared single to multiple births.

Sven Sandin, co-author of the study from King’s College London’s Institute of Psychiatry says: “IVF treatments are vastly different in terms of their complexity. When we looked at IVF treatments combined, we found there was no overall increased risk for autism, but a small increased risk of intellectual disability. When we separated the different IVF treatments, we found that ‘traditional’ IVF is safe, but that IVF involving ICSI, which is specifically recommended for paternal infertility, is associated with an increased risk of both intellectual disability and autism in children.”

Compared to spontaneous conception, children born from any IVF treatment were not at an increased risk of autism, but were at a small increased risk of intellectual disability (18% increase – from 39.8 to 46.3 per 100,000 person-years). However, the risk increase disappeared when multiple births were taken into account.

Secondly, the researchers compared all six types of IVF procedure available in Sweden – whether fresh or frozen embryos were used, and whether intracytoplasmic sperm injection (ICSI) was used and, if so, whether the sperm was ejaculated or surgically extracted. Developed in 1992, ICSI is recommended for male infertility and is now used in about half of all IVF treatments. The procedure involves injecting a single sperm directly into an egg, rather than fertilization happening in a dish, as in standard IVF.

Children born after IVF treatments with ICSI (with either fresh or frozen embryos) were at an increased risk of intellectual disability (51% increase – 62 to 93 per 100,000). This association was even higher when a preterm birth also occurred (73% increase – 96 to 167 per 100,000). Even when multiple and pre-term births were taken into account, IVF treatment with ICSI and fresh embryos was associated with an increased risk of intellectual disability (66% increase for singleton birth, term birth following ICSI with fresh embryos– 48 to 76 per 100,000).

Children born after IVF with ICSI using surgically extracted sperm and fresh embryos were at an increased risk of autism (360% increase – 29 to 136 per 100,000), but the association disappeared when multiple births were taken into account.
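The percentage increases quoted in this article come from the paper’s adjusted statistical models, but the raw arithmetic behind a rate ratio is simple. A sketch using the per-100,000 rates quoted above (note that these unadjusted ratios differ slightly from the adjusted estimates in the text):

```python
def percent_increase(baseline_rate, exposed_rate):
    """Percent increase implied by two incidence rates (same units)."""
    return (exposed_rate / baseline_rate - 1.0) * 100.0

# Raw rate ratios from the figures quoted above (per 100,000):
print(percent_increase(39.8, 46.3))  # intellectual disability, any IVF (~16%)
print(percent_increase(62, 93))      # intellectual disability, ICSI (50%)
print(percent_increase(29, 136))     # autism, ICSI with surgical sperm (~369%)
```

The small gaps between these raw figures and the quoted 18%, 51%, and 360% reflect covariate adjustment in the study’s regression models.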

(Source: kcl.ac.uk)

Filed under autism intellectual disability IVF neurodevelopmental disorders neuroscience science

Drug improves cognitive function in mouse model of Down syndrome

An existing FDA-approved drug improves cognitive function in a mouse model of Down syndrome, according to a new study by researchers at the Stanford University School of Medicine.

The drug, an asthma medication called formoterol, strengthened nerve connections in the hippocampus, a brain center used for spatial navigation, paying attention and forming new memories, the study said. It also improved contextual learning, in which the brain integrates spatial and sensory information.

Both hippocampal function and contextual learning, which are impaired in Down syndrome, depend on the brain having a good supply of the neurotransmitter norepinephrine. This neurotransmitter sends its signal via several types of receptors on the neurons, including a group called beta-2 adrenergic receptors.

“This study provides the initial proof-of-concept that targeting beta-2 adrenergic receptors for treatment of cognitive dysfunction in Down syndrome could be an effective strategy,” said Ahmad Salehi, MD, PhD, the study’s senior author and a clinical associate professor of psychiatry and behavioral sciences. The study was published online July 2 in Biological Psychiatry.

Down syndrome, which is caused by an extra copy of chromosome 21, results in both physical and cognitive problems. While many of the physical issues, such as vulnerability to heart problems, can now be treated, no treatments exist for poor cognitive function. As a result, children with Down syndrome fall behind their peers in cognitive development. In addition, adults with Down syndrome develop Alzheimer’s-type pathology in their brains by age 40. Down syndrome affects about 400,000 people in the United States and 6 million worldwide.

In prior Down syndrome research, scientists have seen deterioration of the brain center that manufactures norepinephrine in both people with Down syndrome and its mouse model. Earlier work by Salehi’s team found that giving a norepinephrine precursor could improve cognitive function in a mouse model genetically engineered to mimic Down syndrome.

(Source: med.stanford.edu)

Filed under down syndrome hippocampus norepinephrine contextual learning beta-2 adrenergic receptor neuroscience science

Scientists Help Explain Visual System’s Remarkable Ability to Recognize Complex Objects

How is it possible for a human eye to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?

It seems easy to us; the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex that no one has been able to write computer code that translates these distorted letters the way the brain’s neural networks can. That’s why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.

Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is to the brain.

The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.

Their deep and detailed research, involving recordings from hundreds of neurons, may also have future clinical and practical implications, say the studies’ senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.

"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways——such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain——not the eye, but the cortex—— processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."

Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.

"The reason that machines are limited in their capacity to recognize things in the world around us is that we don’t really understand how the brain does it as well as it does," he says.

The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.

Integrating parts into wholes

In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.

"Neurons in the visual system are sensitive to regions of space—— they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows ——known as receptive fields——are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space——they respond to tiny, simple elements of an object such as edge oriented in space, or a little patch of color."

Neurons in V4 have larger receptive fields and can also compute more complex shapes, such as contours. They accomplish this by integrating inputs from earlier visual areas in the cortex, areas nearer the retina (which provides the input to the visual system) that have small receptive fields, and sending that information on for the higher-level processing that allows us to see complex images, such as faces, he says.

Both new studies investigated the issue of translation invariance: the ability of a neuron to recognize the same stimulus no matter where it happens to fall within the neuron's receptive field.

The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.

Dogma in the field is that V4 neurons all exhibit translation invariance.

"The accepted understanding is that individuals neurons are tuned to recognize the same stimulus no matter where it was in their receptive field," says Sharpee.

For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance (the ability to recognize any stimulus, no matter where it is in space) increases as an image moves up through the visual processing hierarchy.
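The idea of translation invariance can be sketched with a toy model (an illustration only, not the study's analysis): slide a feature template across a one-dimensional "receptive field" and pool the responses with a max, so the detector's output is unchanged wherever its preferred stimulus falls.

```python
import numpy as np

def max_response(stimulus, template):
    # Cross-correlate the template with the stimulus and max-pool:
    # a common simplified model of position-invariant detection.
    k = len(template)
    scores = [float(np.dot(stimulus[i:i + k], template))
              for i in range(len(stimulus) - k + 1)]
    return max(scores)

edge = np.array([1.0, -1.0])                # a simple "edge" feature

field_a = np.zeros(12); field_a[2] = 1.0    # stimulus near the left
field_b = np.zeros(12); field_b[8] = 1.0    # same stimulus, shifted right

# The pooled response is identical wherever the feature falls.
assert max_response(field_a, edge) == max_response(field_b, edge)
```

The studies' finding is that this idealization breaks down for complex shapes: the more intricate the preferred stimulus, the narrower the range of positions over which real V4 neurons detect it.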

"But what both studies show is that there is more to the story," she says. "There is a trade off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."

A deeper mystery to be solved

The Salk researchers found that neurons that respond to more complicated shapes, like the curve in a 5 or in a rock, demonstrated decreased translation invariance. “They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning,” Reynolds says. “Cells that prefer that complex shape don’t yet have the capacity to recognize that shape everywhere.”

On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. “They don’t care where the stimuli they are tuned to is, as long as it is within their receptive field,” Sharpee says.

"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object’s image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.

"It is important that results from the two studies are quite compatible with one another, that what we find studying just lines and curves in one first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.

"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."

Filed under visual system visual stimuli visual cortex neurons neuroscience science

Irreversible tissue loss seen within 40 days of spinal cord injury

The rate and extent of damage to the spinal cord and brain following spinal cord injury have long been a mystery. Now, a joint research effort between the University of Zurich, University Hospital Balgrist and colleagues from University College London has found evidence that patients already show irreversible tissue loss in the spinal cord within 40 days of injury. Using a new imaging measurement technique, the impact of therapeutic treatments and rehabilitative interventions can now be determined more quickly and directly than before.

A spinal cord injury changes the functional state and structure of the spinal cord and the brain. For example, the patients’ ability to walk or move their hands can become restricted. How quickly such degenerative changes develop, however, has remained a mystery until now. The assumption was that it took years for patients with a spinal cord injury to also display anatomical changes in the spinal cord and brain above the injury site. For the first time, researchers from the University of Zurich and the Uniklinik Balgrist, along with English colleagues from University College London (UCL), now demonstrate that these changes already occur within 40 days of acute spinal cord injury.

Spinal cord depletes rapidly

The scientists studied 13 patients with acute spinal cord injuries every three months for a year using novel MRI (magnetic resonance imaging) protocols. They discovered that the diameter of the spinal cord decreased rapidly and was already seven percent smaller after twelve months. A smaller volume decline was also evident in the corticospinal tract, a tract indispensable for motor control, and in nerve cells of the sensorimotor cortex. The extent of the degenerative changes coincided with the clinical outcome. “Patients with a greater tissue loss above the injury site recovered less effectively than those with less changes,” explains Patrick Freund, the investigator responsible for the study at the Paraplegic Center Balgrist.
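As an illustration of the kind of longitudinal measurement involved (the numbers below are hypothetical, not the study's data), quarterly scans can be summarized by fitting a line to repeated cord measurements and annualizing the slope:

```python
import numpy as np

# Hypothetical cord cross-sectional areas (mm^2) at 0, 3, 6, 9 and 12
# months, shrinking roughly 7% over the year, mirroring the study's
# cohort-level result.
months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
areas = np.array([76.0, 74.8, 73.4, 72.1, 70.7])

slope, intercept = np.polyfit(months, areas, 1)   # mm^2 per month
annual_change_pct = 12.0 * slope / intercept * 100.0
print(f"Estimated change over 12 months: {annual_change_pct:.1f}%")
# → Estimated change over 12 months: -7.0%
```

A per-patient slope like this is the sort of quantity that could serve as an imaging biomarker for tracking whether a therapy slows the atrophy.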

Gaining insights into effect of therapies

Treatments targeting the injured spinal cord have entered clinical trials, so gaining insight into the mechanisms of repair and recovery within the first year is crucial. Thanks to the new neuroimaging protocols, Freund says, the effects of therapeutic treatments and rehabilitative measures on the central nervous system can now be visualized more quickly, and the effect of new therapies can therefore also be recorded more rapidly.

“This study is an excellent example of the value of combining the complementary expertise of the two universities,” says UCL’s Dean of Brain Sciences, Professor Alan Thompson, who is one of the senior authors of the study. “It provides exciting new insights into the complications of spinal cord trauma and gives us the possibility of identifying both imaging biomarkers and therapeutic targets.”

The findings are the result of a new three-year neuroscience partnership between the Neuroscience Centre Zurich (ZNZ) and UCL.

Literature:

Patrick Freund, Nikolaus Weiskopf, John Ashburner, Katharina Wolf, Reto Sutter, Daniel R Altmann, Karl Friston, Alan Thompson, Armin Curt. MRI investigation of the sensorimotor cortex and corticospinal tract after acute spinal cord injury: a prospective longitudinal study. The Lancet Neurology. July 2, 2013.

Filed under spinal cord spinal cord injury neuroimaging corticospinal tract sensorimotor cortex tissue neuroscience science

111 notes

Researchers discover a gene’s key role in building the developing brain’s scaffolding
The gene, Arl13b, is necessary for the proper construction of the cerebral cortex. The finding offers new insights into normal brain development and illuminates some of the factors behind Joubert syndrome, a rare neurological disorder.
Researchers have pinpointed the role of a gene known as Arl13b in guiding the formation and proper placement of neurons in the early stages of brain development. Mutations in the gene could help explain brain malformations often seen in neurodevelopmental disorders.
The research, led by a team at the University of North Carolina School of Medicine, was published June 30 in the journal Nature Neuroscience.
“We wanted to get a better sense of how the cerebral cortex is constructed,” said senior study author Eva Anton, PhD, a professor in the Department of Cell Biology and Physiology and a member of the UNC Neuroscience Center. “The cells we studied — radial glial cells — provide a scaffolding for the formation of the brain by making neurons and guiding them to where they have to go. This is the first step in the formation of functional neuronal circuitry in the brain. This study gives us new information about the mechanisms involved in that process.”
The researchers became interested in the Arl13b gene because of its expression in a part of the cell called the primary cilium and its association with a rare neurological disorder known as Joubert syndrome. The syndrome is characterized by brain malformations and autism-like features.
“In addition to helping us understand an important cellular mechanism involved in normal brain development, this study may offer an explanation for some of the malformations seen in Joubert syndrome patients,” said Anton. Although there is no immediate clinical application for these patients, the study does help illuminate the factors behind the disease. “It shows what may have gone wrong in some of those patients that led to the malformations,” said Anton.
The cerebral cortex, the brain’s “gray matter,” is responsible for higher-order functions such as memory and consciousness. Like the scaffolding builders use to move people and materials during construction, radial glial cells provide an instructive matrix to create the basic structural features of the cerebral cortex. Mistakes in the formation and development of radial glial cells can translate into structural problems in the brain as it develops, said Anton.
Both mice and humans have the Arl13b gene. The researchers generated a series of mice with mutations in the Arl13b gene at different developmental stages to track the mutations’ effects on brain development. They discovered that the gene is crucial to the radial glial cells’ ability to sense signals through an appendage called the primary cilium. Without this signaling capability, the radial glia were unable to organize into an instructive scaffold capable of orchestrating the orderly formation of the cerebral cortex. “The cilia in these cells play an important role in the initial setup of this scaffolding,” said Anton. “Without a functioning Arl13b gene, the cells were not able to determine polarity and formed haphazardly. As a result, they formed a malformed cerebral cortex with ectopic clusters of neurons, instead of the orderly layers of neurons with appropriate connectivity that would be expected in the developing brain.”

Filed under brain development cerebral cortex neural circuitry gray matter neurodevelopmental disorders neuroscience science

87 notes

Children with delayed motor skills struggle more socially

Studies have shown that children with autism often struggle socially, and now new research suggests that a corresponding lack of motor skills – including catching and throwing – may further contribute to that social awkwardness.

The findings, published in the July issue of Adapted Physical Activity Quarterly, add to the growing body of research highlighting the link between autism and motor skill deficits.

Lead author Megan MacDonald is an assistant professor in the College of Public Health and Human Sciences at Oregon State University. She is an expert on the movement skills of children with autism spectrum disorder.

In the study, researchers looked at a group of young people ages 6 to 15 diagnosed with autism spectrum disorder. All 35 of the students were considered high-functioning and attended typical classrooms. The researchers looked at two types of motor skills – “object-control” motor skills, which involve more precise actions such as catching or throwing – and “locomotion” skills, such as running or walking. Students who struggled with object-control motor skills were more likely to have more severe social and communication deficits than those who tested higher on the motor skills test.

“So much of the focus on autism has been on developing social skills, and that is very crucial,” MacDonald said. “Yet we also know there is a link between motor skills and autism, and how deficits in these physical skills play into this larger picture is not clearly understood.”

Developing motor skills can be crucial for children because students often “mask” their inability to participate in basic physical activities. A student with autism may not be participating on the playground because of a lack of social skills, but the child may also be unsure of his or her physical ability to play in these activities.

“Something which seems as simple as learning to ride a bike can be crucial for a child with autism,” MacDonald said. “Being able to ride a bike means more independence and autonomy. They can ride to the corner store or ride to a friend’s house. Those kinds of small victories are huge.”

She said the ability to run, jump, throw and catch isn’t just for athletic kids – physical activity is linked not only to health, but to social skills and mental well-being.

“I often show people photos of what I like to do in my spare time – canoeing, hiking, snowshoeing, and then point out that these require relatively proficient motor skills,” she said. “But that is not why I do those things. I’m doing it because I’m with my friends and having fun.”

MacDonald said the positive news for parents and educators is that motor skills can be taught.

“We have programs and interventions that we know work, and have measurable impact on motor skill development,” MacDonald said. “We need to make sure we identify the issue and get a child help as early as possible.”

(Source: oregonstate.edu)

Filed under motor skills autism social skills psychology neuroscience science

35 notes

Researchers have discovered a new proteasome regulatory mechanism

The results of the study may bear significance in the treatment of Alzheimer’s disease and cancer

Dysfunction of the ubiquitin-proteasome system is related to many severe neurodegenerative diseases, such as Alzheimer’s and Parkinson’s diseases, and certain types of cancer. Such dysfunction is also believed to be related to some degenerative muscle diseases.

The proteasome is a large protein complex that maintains cellular protein balance by degrading damaged or expired proteins. Ubiquitin is a small protein that labels proteins for destruction by the proteasome. If the system does not work effectively enough, expired and damaged proteins accumulate in the cell. If the system is overly active, it destroys necessary proteins in addition to unnecessary ones. In both cases, cell function is disturbed, and the cell may even die.

Proteasome activity is believed to decrease with aging. However, little is yet known about how proteasome activity is regulated in an aging multicellular organism. The research team of Academy Research Fellow, Docent Carina Holmberg-Still has discovered an important proteasome regulatory mechanism. The study was published in Cell Reports, a highly esteemed scientific journal.

"We examined whether proteasome activity is affected by insulin/IGF-1 signalling [IIS], which regulates aging in many organisms. The results show that decreased IIS increases proteasome activity," says Holmberg-Still.

Proteasome activity was studied in C. elegans, a free-living roundworm. Decreased IIS increases proteasome activity through the FOXO transcription factor DAF-16 and the UBH-4 enzyme. DAF-16 represses the expression of ubh-4 in certain cell types. Because the UBH-4 enzyme slows proteasome activity, its repression accelerates proteasome activity.

"Using a cell culture model, we proved that the same mechanism works in human cells," says Holmberg-Still. When the expression of the UCHL5 enzyme – the human equivalent of UBH-4 – was decreased, proteasome activity and the degradation of harmful proteins increased.

"Our study shows that the effect of ageing and the related signalling pathway on proteasome activity is tissue-specific. This was a new and interesting discovery that bears great significance in terms of treatment opportunities," says researcher Olli Matilainen, who prepared his dissertation in Holmberg-Still’s research team.

The identification of proteins that regulate proteasome activity and an understanding of the regulatory mechanism offer new opportunities in treating diseases that involve proteasome dysfunction. According to Holmberg-Still, proteins that regulate proteasome activity are particularly interesting in terms of medicine development.

"An ability to accelerate proteasome activity could be beneficial in the treatment of neurodegenerative diseases. Targeted proteasome inhibitors would be useful in the treatment of cancer – general proteasome inhibitors are already used as cancer medication to some extent, but they often have harmful side effects, because they cannot be targeted to a specific tissue."

Holmberg-Still’s team continues to investigate tissue-specific mechanisms that regulate proteasome activity. The team collaborates with clinical researchers to confirm whether its research results can be refined for clinical use.

(Source: eurekalert.org)

Filed under proteasome alzheimer's disease neurodegenerative diseases uchl5 enzyme neuroscience science

88 notes

Hearing loss from loud blasts may be treatable
Long-term hearing loss from loud explosions, such as blasts from roadside bombs, may not be as irreversible as previously thought, according to a new study by researchers at the Stanford University School of Medicine.
Using a mouse model, the study found that loud blasts actually cause hair-cell and nerve-cell damage, rather than structural damage, to the cochlea, which is the auditory portion of the inner ear. This could be good news for the millions of soldiers and civilians who, after surviving these often devastating bombs, suffer long-term hearing damage.
“It means we could potentially try to reduce this damage,” said John Oghalai, MD, associate professor of otolaryngology and senior author of the study, published July 1 in PLOS ONE. If the cochlea, an extremely delicate structure, had been shredded and ripped apart by a large blast, as earlier studies have asserted, the damage would be irreversible. (Researchers presume that the damage seen in these previous studies may have been due to the use of older, less sophisticated imaging techniques.)
“The most common issue we see veterans for is hearing loss,” said Oghalai, a scientist and clinician who treats patients at Stanford Hospital & Clinics and directs the hearing center at Lucile Packard Children’s Hospital.
The increasingly common use of improvised explosive devices, or IEDs, around the world provided the impetus for the new study, which was primarily funded by the U.S. Department of Defense. Among veterans with service-connected disabilities, tinnitus — a constant ringing in the ears — is the most prevalent condition. Hearing loss is the second-most-prevalent condition. But the results of the study would prove true for anyone who is exposed to loud blasts from other sources, such as jet engines, air bags or gunfire.
More than 60 percent of wounded-in-action service members have eardrum injuries, tinnitus or hearing loss, or some combination of these, the study says. Twenty-eight percent of all military personnel experience some degree of hearing loss post-deployment. The most devastating effect of blast injury to the ear is permanent hearing loss due to trauma to the cochlea. But exactly how this damage is caused has not been well understood.
The ears are extremely fragile instruments. Sound waves enter the ear, causing the eardrums to vibrate. These vibrations get sent to the cochlea in the inner ear, where fluid carries them to rows of hair cells, which in turn stimulate auditory nerve fibers. These impulses are then sent to the brain via the auditory nerve, where they get interpreted as sounds.
Permanent hearing loss from loud noise begins at about 85 decibels, typical of a hair dryer or a food blender. IEDs have noise levels approaching 170 decibels.
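Because the decibel scale is logarithmic, the gap between an 85-decibel hair dryer and a 170-decibel IED blast is far larger than the numbers suggest. A quick calculation (a general property of the decibel scale, not a figure from the study) makes the difference concrete:

```python
import math

def intensity_ratio(db_high, db_low):
    """Ratio of sound intensities for two decibel levels.

    The decibel scale is logarithmic: dB = 10 * log10(I / I0),
    so every +10 dB corresponds to a tenfold increase in intensity.
    """
    return 10 ** ((db_high - db_low) / 10)

# A ~170 dB blast vs. the ~85 dB threshold for hearing damage:
ratio = intensity_ratio(170, 85)
print(f"{ratio:.0e}")  # on the order of 3e+08, i.e. hundreds of millions of times more intense
```

In other words, a blast near 170 decibels delivers roughly 300 million times the sound intensity of the level at which permanent hearing loss begins.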
Damage to the eardrum is known to be common after large blasts, but this is easily detected during a clinical exam and usually can heal itself — or is surgically repairable — and is thus not typically the cause of long-term hearing loss.
In order to determine exactly what is causing the permanent hearing loss, Stanford researchers created a mouse model to study the effects of noise blasts on the ear.
After exposing anesthetized mice to loud blasts, researchers examined the inner workings of the mouse ear, from the eardrum to the cochlea. The ears were examined from one day to three months after exposure. A micro-CT scanner was used to image the workings of the ear after dissection.
“When we looked inside the cochlea, we saw the hair-cell loss and auditory-nerve-cell loss,” Oghalai said.
“With one loud blast, you lose a huge number of these cells. What’s nice is that the hair cells and nerve cells are not immediately gone. The theory now is that if the ear could be treated with certain medications right after the blast, that might limit the damage.”
Previous studies on larger animals had found that the cochlea was torn apart and shredded after exposure to a loud blast. Stanford scientists did not find this in the mouse model and speculate that the use of older research techniques may have caused the damage.
“We found that the blast trauma is similar to what we see from lower noise exposure over time,” said Oghalai. “We lose the sensory hair cells that convert sound vibrations into electrical signals, and also the auditory nerve cells.”
Much of the resulting hearing loss after such blast damage to the ear is actually caused by the body’s immune response to the injured cells, Oghalai said. The creation of scar tissue to help heal the injury is a particular problem in the ear because the organ needs to vibrate to allow the hearing mechanism to work. Scar tissue damages that ability.
“There is going to be a window where we could stop whatever the body’s inflammatory response would be right after the blast,” Oghalai said. “We might be able to stop the damage. This will determine future research.”

Filed under hearing hearing loss animal model nerve cells cochlea inner ear hair cells neuroscience science

62 notes

Researchers Discover New Way to Block Inflammation in Alzheimer’s, Atherosclerosis and Type-2 Diabetes

Researchers at NYU Langone Medical Center have discovered a mechanism that triggers chronic inflammation in Alzheimer’s, atherosclerosis and type-2 diabetes. The results, published today in Nature Immunology, suggest a common biochemical thread to multiple diseases and point the way to a new class of therapies that could treat chronic inflammation in these non-infectious diseases without crippling the immune system. Alzheimer’s, atherosclerosis and type-2 diabetes—diseases associated with aging and inflammation—affect more than 100 million Americans.

When the body encounters a pathogen, it unleashes a rush of chemicals known as cytokines that draws immune cells to the site of infection and causes inflammation. Particulate matter in the body, such as the cholesterol crystals associated with vascular disease and the amyloid plaques that form in the brain in Alzheimer’s disease, can also cause inflammation, but the exact mechanism of action remains unclear. Researchers previously thought that these crystals and plaques accumulate outside of cells, and that macrophages—immune cells that scavenge debris in the body—induce inflammation as they attempt to clear them.

“We’ve discovered that the mechanism causing chronic inflammation in these diseases is actually very different,” says Kathryn J. Moore, PhD, senior author of the study and associate professor of medicine and cell biology, Leon H. Charney Division of Cardiology at NYU Langone Medical Center.

The researchers found that particulate matter does not linger on the outside of cells. Instead, a receptor called CD36 present on macrophages draws the soluble forms of these particles inside the cell where they are transformed into substances that trigger an inflammatory response. Says Dr. Moore, “What we found is that CD36 binds soluble cholesterol and protein matter associated with these diseases, pulls them inside the cell, and then transforms them. The resulting insoluble crystals and amyloid damage the macrophage and trigger a powerful cytokine, called interleukin-1B, linked to a chronic inflammatory response.”

These findings hold exciting clinical implications. When the researchers blocked the CD36 receptor in mice with atherosclerosis (in which cholesterol thickens the arteries), the cytokine response declined, fewer cholesterol crystals formed in plaques, and inflammation decreased. Consequently, atherosclerosis also abated.

Other less-targeted strategies to control inflammation may hamper the immune response, but the CD36 strategy spares certain cytokines to fight off pathogens, while blocking CD36’s ability to trigger interleukin-1B.

“Our findings identify CD36 as a central regulator of the immune response in these conditions and suggest that blocking CD36 might be a common therapeutic option for all three diseases,” says Dr. Moore.

(Source: communications.med.nyu.edu)

Filed under inflammation chronic inflammation Type II diabetes cytokines interleukin-1B neuroscience science
