Neuroscience

Articles and news from the latest research reports.

354 notes

No Two People Smell the Same

A difference at the smallest level of DNA — one amino acid on one gene — can determine whether you find a given smell pleasant. A different amino acid on the same gene in your friend’s body could mean he finds the same odor offensive, according to researchers at Duke University.

There are about 400 genes coding for the receptors in our noses, and according to the 1000 Genomes Project, there are more than 900,000 variations of those genes. These receptors control the sensors that determine how we smell odors. A given odor will activate a suite of receptors in the nose, creating a specific signal for the brain. 

But the receptors don’t work the same for all of us, said Hiroaki Matsunami, Ph.D., associate professor of molecular genetics and microbiology at the Duke University School of Medicine. In fact, the odor receptors of any two people should differ by about 30 percent, said Matsunami, who is also a member of the Neurobiology Graduate Program and the Duke Institute for Brain Sciences.

"There are many cases when you say you like the way something smells and other people don’t. That’s very common," Matsunami said. But what the researchers found is that no two people smell things the same way. "We found that individuals can be very different at the receptor levels, meaning that when we smell something, the receptors that are activated can be very different (from one person to the next) depending on your genome."

The study didn’t look at the promoter regions of the genes, which are highly variable, or gene copy number variation, which is very high in odor receptors, so the 30 percent figure for the difference between individuals is probably conservative, Matsunami said.

While researchers had earlier identified the genes that encode odor receptors, how the receptors are activated has remained a mystery, Matsunami said. To determine what turns the receptors on, his team cloned more than 500 receptor variants, each differing by only one or two amino acids, from 20 people, and systematically exposed them to odor molecules that might excite the receptors.

By exposing each receptor to a very small concentration — 1, 10, or 100 micromolar — of 73 odorants, such as vanillin or guaiacol, the group was able to identify 27 receptors that had a significant response to at least one odorant. This finding, published in the December issue of Nature Neuroscience, doubles the number of known odorant-activated receptors, bringing the total to 40.
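
The screening logic described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' analysis pipeline: the response values, threshold, and receptor/odorant names are hypothetical stand-ins for the real assay readout.

```python
# Sketch of the screen: a receptor counts as a "hit" if any single
# measurement (receptor x odorant x concentration) exceeds a
# hypothetical significance threshold over baseline.

CONCENTRATIONS_UM = (1, 10, 100)  # the three micromolar doses tested

def responsive_receptors(responses, threshold=2.0):
    """responses maps (receptor, odorant, concentration_uM) to fold
    change over baseline; returns receptors responding to at least
    one odorant at one or more concentrations."""
    hits = set()
    for (receptor, _odorant, _conc), fold_change in responses.items():
        if fold_change >= threshold:
            hits.add(receptor)
    return sorted(hits)

# Toy data: one receptor responds to vanillin at the highest dose only.
toy = {
    ("OR-A", "vanillin", 1): 1.1,
    ("OR-A", "vanillin", 100): 4.5,
    ("OR-B", "guaiacol", 10): 1.3,
}
print(responsive_receptors(toy))  # ['OR-A']
```

In the study itself, roughly 500 cloned variants were run against 73 odorants at all three doses, yielding the 27 responsive receptors reported.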

Matsunami said this research could have a big impact on the flavors, fragrance, and food industries.

"These manufacturers all want to know a rational way to produce new chemicals of interest, whether it’s a new perfume or new-flavored ingredient, and right now there’s no scientific basis for doing that," he said. "To do that, we need to know which receptors are being activated by certain chemicals and the consequences of those activations in terms of how we feel and smell."

Filed under olfaction odor receptors smell perception genetics psychology neuroscience

181 notes

Brain structure shows affinity with numbers

The structure of the brain reflects the way in which we process numbers. People do this either spatially or non-spatially. A study by Florian Krause from the Donders Institute in Nijmegen shows for the first time that these individual differences have a structural basis in the brain. The Journal of Cognitive Neuroscience published the results in an early-access version of the article.

People who process numbers spatially do this using an imaginary horizontal line along which the numbers are arranged from low to high, left to right. A non-spatial representation is also possible, by comparing numbers to other magnitudes such as force or luminosity.

Different grey matter volumes

Florian Krause identified this predisposition to spatial or non-spatial number processing in MRI scans of test subjects. He discovered differences in the volume of grey matter, the tissue that contains the cell bodies of nerve cells, in two specific locations. Spatially oriented brains have an above-average grey matter volume in the right precuneus, a small area of the brain associated with processing visual-spatial information. Non-spatially oriented brains have more grey matter in the left angular gyrus, an area associated with semantic and conceptual processing.

Spatial numbers

For a long time, scientists thought that everyone processed numbers predominantly in a spatial way. Krause demonstrates that this is not the case. In his own words: ‘Our current study stresses the importance of non-spatial number representations. This is important since researchers in the field tend to focus mainly on spatial representations. Personally, I think that numbers are understood in terms of our body experiences. We use information about size in real life to understand number size in our heads.’

Classifying numbers

The thirty people taking part in the study were put into an MRI scanner and shown numbers between 1 and 9 (except 5). In two consecutive judgement tasks, they had to classify the presented digits as odd or even. The tasks differed only in the required response: in the spatial task, subjects clicked with their index finger or middle finger to classify the digits; in the non-spatial task, they applied either a small or a large force to a pressure sensor with their thumb. Both tasks were carried out with the right hand. Importantly, participants coupled both the spatial response and the force response to the size of the presented number: they responded faster with a left or soft press for small numbers and with a right or hard press for large numbers. Krause worked out these couplings for each subject and compared the scores with the information from their brain scans.
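
A coupling score of the kind described can be computed as a congruency effect on reaction times. The sketch below is a hedged illustration, not the paper's actual analysis: trials where the response matches the number's size (small with left/soft, large with right/hard) are "congruent," and a positive score means congruent trials were faster.

```python
from statistics import mean

def coupling_score(trials):
    """trials: list of (digit, response, reaction_time_ms), where
    response is 'left'/'right' (spatial task) or 'soft'/'hard'
    (force task). Returns mean incongruent RT minus mean congruent
    RT, in milliseconds."""
    congruent, incongruent = [], []
    for digit, response, rt in trials:
        small = digit < 5
        matches = (small and response in ('left', 'soft')) or \
                  (not small and response in ('right', 'hard'))
        (congruent if matches else incongruent).append(rt)
    return mean(incongruent) - mean(congruent)

trials = [
    (2, 'left', 480), (8, 'right', 470),   # congruent, faster
    (2, 'right', 530), (8, 'left', 540),   # incongruent, slower
]
score = coupling_score(trials)  # congruent mean 475, incongruent 535
```

A subject's score on each task could then be correlated with grey matter volume in the precuneus or angular gyrus, which is the spirit of the comparison reported in the study.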

Potential benefits for teaching maths

At present, maths is largely taught on the basis of spatial number processing. ‘People with a non-spatial representation of numbers would probably benefit from a different approach to maths teaching’, says Krause. ‘It is possible to let pupils experience the size of numbers in a non-spatial way. This could involve expressing numbers with your body while doing simple arithmetic, for example.’ Krause is planning several new studies to explore the scientific basis of methods like these in more detail.

Filed under gray matter angular gyrus neuroimaging numerical cognition spatial processing neuroscience science

119 notes

Optogenetics as good as electrical stimulation

Neuroscientists are eagerly, but not always successfully, looking for proof that optogenetics – a celebrated technique in which brain cells are genetically altered so that pulses of visible light can excite or silence them – can be as successful in complex and large brains as it has been in rodent models.

A new study in the journal Current Biology may be the most definitive demonstration yet that the technique can work in nonhuman primates as well as, or even a little better than, the tried-and-true method of perturbing brain circuits with small bursts of electrical current. Brown University researchers directly compared the two techniques to test how well they could influence the visual decision-making behavior of two primates.

“For most of my colleagues in neuroscience to say ‘I’ll be able to incorporate [optogenetics] into my daily work with nonhuman primates,’ you have to get beyond ‘It does seem to sort of work’,” said study senior author David Sheinberg, a professor of neuroscience affiliated with the Brown Institute for Brain Science. “In our comparison, one of the nice things is that in some ways we found quite analogous effects between electrical and optical [stimulation] but in the optical case it seemed more focused.”

Ultimately, if it consistently proves safe and effective in the large, complex brains of primates, optogenetics could eventually be used in humans, where it could provide a variety of diagnostic and therapeutic benefits.

Evidence in sight

With that in mind, Sheinberg, lead author Ji Dai and second author Daniel Brooks designed their experiments to determine whether and how much optical or electrical stimulation in a particular area of the brain called the lateral intraparietal area (LIP) would affect each subject’s decision making when presented with a choice between a target and a similar-looking, distracting character.

“This is an area of the brain involved in registering the location of salient objects in the visual world,” said Sheinberg, who added that the experimental task was more cognitively sophisticated than those used in previous optogenetics experiments in nonhuman primates.

The main task for the subjects was to fixate on a central point in the middle of the screen and then to look toward the letter “T” when it appeared around the edge of the screen. In some trials, they had to decide quickly between the T and a similar-looking “+” or “†” character presented on opposite sides of the screen. They were rewarded if they glanced toward the T.

Before beginning those trials, the researchers had carefully placed a very thin combination sensor of an optical fiber and an electrode amid a small population of cells in the LIP of each subject. Then they mapped where on the screen an object should be in order for them to detect a response in those cells. They called that area the receptive field. With this information, they could then look to see what difference either optical or electrical stimulation of those cells would have on the subject’s inclination to look when the T or the distracting character appeared at various locations in visual space.

They found that stimulating with either method increased both subjects’ accuracy in choosing the target when it appeared in their receptive field. They also found the primates became less accurate when the distracting character appeared in their receptive field. Generally accuracy was unchanged when neither character was in the receptive field.

In other words, the stimulation of a particular group of LIP cells significantly biased the subjects to look at objects that appeared in the receptive field associated with those cells. Either stimulation method could therefore make the subjects more accurate or effectively distract them from making the right choice.

The magnitude of the difference made by either stimulation method compared to no stimulation was small but statistically significant. When the T was in the receptive field, one research subject became 10 percentage points more accurate (80 percent vs. 70 percent) when optically stimulated and eight points more accurate when electrically stimulated. The same subject was five points less accurate (73 percent vs. 78 percent) with optical stimulation and six points less accurate with electrical stimulation when the distracting character was in the receptive field.
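
A quick check of the arithmetic above also illustrates why "percentage points" is the right unit here rather than "percent": going from 70 percent to 80 percent accuracy is a 10-point gain but roughly a 14 percent relative improvement. The accuracy values are the ones reported; everything else is illustrative.

```python
no_stim = 70.0   # accuracy without stimulation, in percent
stim = 80.0      # accuracy with optical stimulation, in percent

point_gain = stim - no_stim                        # percentage points
relative_gain = 100 * (stim - no_stim) / no_stim   # relative percent

print(point_gain)                # → 10.0
print(round(relative_gain, 1))   # → 14.3
```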

The other subject showed similar differences. In all, the two primates made thousands of choices over scores of sessions between the T and the distracting character with either kind of stimulation or none. Compared head-to-head in a statistical analysis, electrical and optical stimulation showed essentially similar effects in biasing the decisions.

Optical advantages

Although the two methods performed at parity on the main measure of accuracy, the optogenetic method had a couple of advantages, Sheinberg said.

Electrical stimulation appeared to be less precise in the cells it reached, a possibility suggested by a reduction in electrically stimulated subjects’ reaction time when the T appeared outside the receptive field. Optogenetic stimulation, Sheinberg said, did not produce such unintended effects.

Electrical stimulation also makes simultaneous electrical recording very difficult, Sheinberg said. That makes it hard to understand what neurons do when they are stimulated. Optogenetics, he said, allows for easier simultaneous electrical recording of neural activity.

Sheinberg said he is encouraged about using optogenetics to investigate even more sophisticated questions of cognition.

“Our goal is to be able to now expand this and use it again as a daily tool to probe circuits in more complicated paradigms,” Sheinberg said.

He plans a new study in which his group will look at memory of visual cues in the LIP.

Filed under optogenetics neural circuit electrical stimulation lateral intraparietal area neuroscience science

108 notes

Sniffing Out Danger: Rutgers Scientists Say Fearful Memories Can Trigger Heightened Sense of Smell

Most people – including scientists – assumed we can’t just sniff out danger.

It was thought that we become afraid of an odor – such as leaking gas – only after information about a scary scent is processed by our brain.

But neuroscientists at Rutgers University studying the olfactory – sense of smell – system in mice have discovered that this fear reaction can occur at the sensory level, even before the brain has the opportunity to interpret that the odor could mean trouble.

In a new study published today in Science, John McGann, associate professor of behavioral and systems neuroscience in the Department of Psychology, and his colleagues, report that neurons in the noses of laboratory animals reacted more strongly to threatening odors before the odor message was sent to the brain.

“What is surprising is that we tend to think of learning as something that only happens deep in the brain after conscious awareness,” says McGann, whose laboratory studies the sense of smell. “But now we see how the nervous system can become especially sensitive to threatening stimuli and that fear-learning can affect the signals passing from sensory organs to the brain.”

McGann and students Marley Kass and Michelle Rosenthal made this discovery by using light to observe activity in the brains of genetically engineered mice through a window in the mouse’s skull. They found that those mice that received an electric shock simultaneously with a specific odor showed an enhanced response to the smell in the cells in the nose, before the message was delivered to the neurons in the brain.

This new research – which indicates that fearful memories can influence the senses – could help to better understand conditions like Post Traumatic Stress Disorder, in which feelings of anxiety and fear exist even though an individual is no longer in danger.

“We know that anxiety disorders like PTSD can sometimes be triggered by smell, like the smell of diesel exhaust for a soldier,” says McGann, who received funding from the National Institute of Mental Health and the National Institute on Deafness and Other Communication Disorders for this research. “What this study does is give us a new way of thinking about how this might happen.”

In their study, the scientists also discovered a heightened sensitivity to odors in the mice traumatized by shock. When these mice smelled the odor associated with the electrical shocks, the amount of neurotransmitter – chemicals that carry communications between nerve cells – released from the olfactory nerve into the brain was as big as if the odor were four times stronger than it actually was.

This created mice whose brains were hypersensitive to the fear-associated odors. Before now, scientists did not think that reward or punishment could influence how the sensory organs process information.

The next step in the continuing research, McGann says, is to determine whether the hypersensitivity to threatening odors can be reversed by using exposure therapy to teach the mice that the electrical shock is no longer associated with a specific odor. This could help develop a better understanding of fear learning that might someday lead to new therapeutic treatments for anxiety disorders in humans, he says.

(Source: news.rutgers.edu)

Filed under olfactory system memory fear learning anxiety disorders neuroscience science

166 notes

ucsdhealthsciences:

Brain Trauma Raises Risk of Later PTSD in Active-Duty Marines
Deployment-related injuries are the biggest predictor, but not the only factor

In a novel study of U.S. Marines investigating the association between traumatic brain injury (TBI) and the risk of post-traumatic stress disorder (PTSD) over time, a team of scientists led by researchers from the Veterans Affairs San Diego Healthcare System and the University of California, San Diego School of Medicine reports that TBIs suffered during active-duty deployment to Iraq and Afghanistan were the greatest predictor of subsequent PTSD. Pre-deployment PTSD symptoms and high combat intensity were also significant factors.

The findings are published in the December 11 online issue of JAMA Psychiatry.

The team, headed by principal investigator Dewleen G. Baker, MD, research director at the VA Center of Excellence for Stress and Mental Health, professor in the Department of Psychiatry at UC San Diego and a practicing psychiatrist in the VA San Diego Healthcare System, analyzed 1,648 active-duty Marines and Navy servicemen from four infantry battalions of the First Marine Division based at Camp Pendleton in north San Diego County. The servicemen were evaluated approximately one month before a scheduled 7-month deployment to Iraq or Afghanistan, one week after deployment had concluded, and again three and six months later.

PTSD is a psychiatric condition in which stress reactions become abnormal, chronic and may worsen over time. The condition is linked to depression, suicidal tendencies, substance abuse, memory and cognition dysfunction and other health problems.

The servicemen were assessed at each evaluation using the Clinician-Administered PTSD Scale, or CAPS, a structured interview widely employed to diagnose PTSD and gauge its severity. Researchers asked about any head injuries sustained prior to joining the service and any head injuries sustained during deployment from a blast or explosion, vehicle accident, fall, or head wound from a bullet or fragment.

Traumatic brain injuries are common. At least 1.7 million Americans sustain a TBI annually, and an estimated 5 million Americans live with TBI-related disabilities. More than half (56.8 percent) of the servicemen reported a TBI prior to deployment; almost a fifth (19.8 percent) reported a TBI during deployment. The vast majority of deployment-related TBIs (87.2 percent) were deemed mild, with less than 24 hours of post-traumatic amnesia. Of the 117 Marines whose TBI resulted in loss of consciousness, 111 said it lasted less than 30 minutes.
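As a back-of-envelope check, those percentages translate into approximate head counts for the 1,648-man cohort (rounded; the paper's exact tallies may differ slightly):

```python
# Rough conversion of the reported percentages into counts for the cohort.
# These are illustrative back-calculations, not figures from the paper itself.
n = 1648  # servicemen analyzed in the study

pre_deployment_tbi = round(0.568 * n)               # TBI prior to deployment
deployment_tbi = round(0.198 * n)                   # TBI during deployment
mild_deployment_tbi = round(0.872 * deployment_tbi) # deployment TBIs deemed mild

print(pre_deployment_tbi, deployment_tbi, mild_deployment_tbi)
```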

More here

321 notes

Even when test scores go up, some cognitive abilities don’t

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.


Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.

“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, just by doing a lot of studying and being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

Measuring cognition

This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.

The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.

The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. For MCAS scores in English, schools accounted for 24 percent of the variation, and they accounted for 34 percent of the math MCAS variation. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
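The 24 and 34 percent figures describe how much of the total score variation lies between schools rather than between students within a school. A minimal sketch of that kind of variance decomposition, run on made-up numbers (the study's actual statistical model is not described here):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def school_share_of_variance(scores_by_school):
    """Fraction of total score variance explained by school membership:
    size-weighted variance of school means over total variance."""
    all_scores = [s for scores in scores_by_school.values() for s in scores]
    n = len(all_scores)
    grand_mean = sum(all_scores) / n
    between = sum(
        len(scores) * ((sum(scores) / len(scores)) - grand_mean) ** 2
        for scores in scores_by_school.values()
    ) / n
    return between / variance(all_scores)

# Hypothetical data: two schools whose mean scores differ
data = {"School A": [230, 240, 250], "School B": [250, 260, 270]}
share = school_share_of_variance(data)  # 0.6 -> 60% of variance is between schools
```

A share near zero, as the study found for fluid cognitive skills, means the schools' students differ from one another about as much within a school as across schools.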

In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.

“It’s not always clear what dimensions you have to pay attention to in order to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.

Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admittance to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.

However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills. 

The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.

Implications for education

Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.

He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.

“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.

Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.

(Source: web.mit.edu)

Filed under crystallized intelligence fluid intelligence cognition learning psychology neuroscience science

81 notes

Dietary Amino Acids Relieve Sleep Problems after Traumatic Brain Injury in Animals

Scientists fed a cocktail of key amino acids to mice and relieved sleep disturbances caused by brain injuries in the animals. These new findings suggest a potential dietary treatment for the millions of people affected by traumatic brain injury (TBI)—a condition that is currently untreatable.


“If this type of dietary treatment is proved to help patients recover function after traumatic brain injury, it could become an important public health benefit,” said study co-leader Akiva S. Cohen, Ph.D., a neuroscientist at The Children’s Hospital of Philadelphia (CHOP).

Cohen is the co-senior author of the animal TBI study appearing today in Science Translational Medicine. He collaborated with two experts in sleep medicine: co-senior author Allan I. Pack, M.D., Ph.D., director of the Center for Sleep and Circadian Neurobiology in the Perelman School of Medicine at the University of Pennsylvania; and first author Miranda M. Lim, M.D., Ph.D., formerly at the Penn Sleep Center, and now on faculty at the Portland VA Medical Center and Oregon Health and Science University.

Every year in the U.S., an estimated 2 million people suffer a TBI, making it a major cause of disability across all age groups. Although 75 percent of reported TBI cases are milder forms such as concussion, even concussion may cause chronic neurological impairments, including cognitive, motor and sleep problems.

“Sleep disturbances, such as excessive daytime sleepiness and nighttime insomnia, disrupt quality of life and can delay cognitive recovery in patients with TBI,” said Lim, a neurologist and sleep medicine specialist. Although physicians can relieve the dangerous swelling that occurs after a severe TBI, there are no existing treatments to address the underlying brain damage associated with neurobehavioral problems such as impaired memory, learning and sleep patterns.

Cohen and his team investigated the use of selected branched-chain amino acids (BCAAs)—precursors of the neurotransmitters glutamate and GABA, which are involved in communication among neurons and help to maintain a normal balance in brain activity. His research team previously showed that a BCAA diet restored cognitive ability in brain-injured mice. The current study was the first to analyze sleep-wake patterns in an animal model.

Comparing mice with experimentally induced mild TBI to uninjured mice, the scientists found the injured mice were unable to stay awake for long periods of time. The injured mice had lower activity among orexin neurons, which help to maintain the animals’ wakefulness. This is similar to results in human studies showing decreased orexin levels in the spinal fluid after TBI.

In the current study, the dietary therapy restored the orexin neurons to a normal activity level and improved wakefulness in the brain-injured mice. EEG recordings also showed improved brain wave patterns among the mice that consumed the BCAA diet.

“These results in an animal model provide a proof-of-principle for investigating this dietary intervention as a treatment for TBI patients,” said Cohen. “If a dietary supplement can improve sleeping and waking patterns as well as cognitive problems, it could help brain-injured patients regain crucial functions.” Cohen cautioned that current evidence does not support TBI patients medicating themselves with commercially available amino acids.

(Source: chop.edu)

Filed under TBI brain injury amino acids sleep glutamate neurons neuroscience science

131 notes

Sleep-Deprived Mice Show Connections Among Lack of Shut-eye, Diabetes, Age

Sleep, or the lack of it, seems to affect just about every aspect of human physiology. Yet, the molecular pathways through which sleep deprivation wreaks its detrimental effects on the body remain poorly understood. Although numerous studies have looked at the consequences of sleep deprivation on the brain, comparatively few have directly tested its effects on peripheral organs.

During sleep deprivation cells upregulate the UPR – the unfolded protein response – a process where misfolded proteins get refolded or degraded.

Five years ago, researchers at the Perelman School of Medicine, University of Pennsylvania, showed that the UPR is an adaptive response to stress induced by sleep deprivation and is impaired in the brains of old mice. Those findings suggested that inadequate sleep in the elderly, who normally experience sleep disturbances, could exacerbate an already-impaired protective response to the protein misfolding that happens in aging cells. Protein misfolding and clumping are associated with many diseases, such as Alzheimer’s and Parkinson’s, noted Nirinjini Naidoo, Ph.D., research associate professor in the Division of Sleep Medicine, in that study.

Naidoo is also senior author of a follow-up study in Aging Cell this month that shows, for the first time, an effect of sleep deprivation on the UPR in peripheral tissue, in this case the pancreas. The team showed that stress in pancreatic cells due to sleep deprivation may contribute to the loss or dysfunction of these cells, which are important to maintaining proper blood sugar levels, and that these effects may be exacerbated by normal aging.

“The combined effect of aging and sleep deprivation resulted in a loss of control of blood sugar reminiscent of pre-diabetes in mice,” says Naidoo. “We hypothesize that older humans might be especially susceptible to the effects of sleep deprivation on the disruption of glucose homeostasis via cell stress.”

Working with Penn colleague Joe Baur, Ph.D., assistant professor of Physiology, Naidoo started a collaboration to look at the relationship of sleep deprivation, the UPR, and metabolic response with age. Other researchers had suggested that the death of beta cells associated with type 2 diabetes may be due to stress in a cell compartment called the endoplasmic reticulum (ER). The UPR is one part of the quality control system in the ER, where some proteins are made.

Knowing this, Naidoo and Baur asked if sleep deprivation (SD) causes ER stress in the pancreas, via an increase in protein misfolding, and in turn, how this relates to aging.

The team examined tissues in mice for cellular stress following acute SD, and they also looked for cellular stress in aging mice. Their results show that both age and SD combine to induce cellular stress in the pancreas.

Older mice fared markedly worse when subjected to sleep deprivation. Pancreas tissue from older mice or from young animals subjected to sleep deprivation exhibited signs of protein misfolding, yet both were able to maintain insulin secretion and control blood sugar levels. Pancreas tissue from acutely sleep-deprived aged animals exhibited a marked increase in CHOP, a protein associated with cell death, suggesting a maladaptive response to cellular stress with age that was amplified by sleep deprivation.

Acute sleep deprivation caused increased plasma glucose levels in both young and old animals. However, this change was not overtly related to stress in beta cells, since plasma insulin levels were not lower following acute lack of sleep.

Accordingly, young animals subjected to acute sleep deprivation remained tolerant to a glucose challenge. In a chronic sleep deprivation experiment, young mice were sensitized to insulin and had improved control of their blood sugar, whereas aged animals became hyperglycemic and failed to maintain appropriate plasma insulin concentrations.

While changes in insulin secretion are unlikely to play a major role in the acute effects of SD, cellular stress in pancreatic tissue suggests that chronic SD may contribute to the loss or dysfunction of endocrine cells, and that these effects may be exacerbated by normal aging, say the researchers.

Filed under alzheimer's disease aging sleep sleep deprivation diabetes neuroscience science

81 notes

Staying ahead of Huntington’s disease

Huntington’s disease is a devastating, incurable disorder that results from the death of certain neurons in the brain. Its symptoms appear as progressive changes in behavior and movement.


The neurodegenerative disease is caused by a defect in the huntingtin gene (Htt): an abnormal expansion of a stretch of DNA called the CAG codon, or triplet, which codes for the amino acid glutamine. A healthy version of the Htt gene has between 20 and 23 CAG triplets. The mutational expansion in Htt can produce long repeats of the CAG triplet, giving the mutant protein a long run of glutamine residues called a polyglutamine tract. CAG triplet expansion in unrelated genes is the root of at least nine neurodegenerative disorders, including Huntington’s disease.
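Counting CAG repeats in a coding sequence is simple to sketch. The function below, applied to a hypothetical DNA fragment, finds the longest run of consecutive in-frame CAG triplets — the quantity whose healthy range in Htt is roughly 20 to 23:

```python
def longest_cag_run(seq):
    """Longest run of consecutive CAG triplets in any reading frame of seq."""
    best = 0
    for frame in range(3):  # check all three possible reading frames
        run = 0
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] == "CAG":
                run += 1
                best = max(best, run)
            else:
                run = 0  # a non-CAG codon breaks the repeat tract
    return best

# Hypothetical fragment: 25 repeats, just past the healthy 20-23 range
fragment = "ATG" + "CAG" * 25 + "CCG"
repeats = longest_cag_run(fragment)  # 25
```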

Rohit Pappu, PhD, professor of biomedical engineering at Washington University in St. Louis, and his colleagues in the School of Engineering & Applied Science and in the School of Medicine, are working to understand how expanded polyglutamine tracts form the types of supramolecular structures that are presumed to be toxic to neurons – a feature that polyglutamine expansions share with proteins associated with Alzheimer’s disease and Parkinson’s disease.

In recent work, Pappu and his research team showed that the amino acid sequences on either side of the polyglutamine tract within Htt can act as natural gatekeepers because they control the fundamental ability of polyglutamine tracts to form structures that are implicated in cellular toxicity. The results were published in PNAS Early Edition Nov.25.

“These are progressive onset disorders,” Pappu says. “The longer the polyglutamine tract gets, the more severe the disease, and the symptoms worsen with age. Our results are exciting because it means that any success we have in mimicking the effects of naturally occurring gatekeepers would be a significant step forward. And mechanistic studies are important in this regard because they enable us to learn from nature’s own strategies.

“Previous studies from other labs showed that the toxic effects of polyglutamine expansions are tempered by the sequence contexts of polyglutamine tracts in Htt, not just the lengths of the polyglutamine tracts,” Pappu says.

He and his research team focused on understanding the effects of sequence stretches that lie on either side of the polyglutamine tract in Htt. The results show that the N-terminal stretch accelerates the formation of ordered structures that are presumed to be benign to cells, whereas the C-terminal stretch slows the overall transition into structures that are expected to create trouble for cells, suggesting that these naturally occurring sequences behave as gatekeepers.

“It appears that where polyglutamine stretches are of functional importance, nature has ensured that they are flanked by gatekeeping sequences,” Pappu says.

Pappu and his team are now working to find ways to mimic the effects of the N- and C-terminal flanking sequences from Htt. His team is working closely with Marc Diamond, MD, the David Clayson Professor of Neurology at the School of Medicine, to understand how naturally occurring proteins interact with flanking sequences and to see if they can co-opt them to ameliorate the toxic effects of the polyglutamine expansions.

(Source: engineering.wustl.edu)

Filed under huntington's disease neurodegenerative diseases neurodegeneration neurons neuroscience science

115 notes

Study Raises Questions about Longstanding Forensic Identification Technique

Forensic experts have long used the shape of a person’s skull to make positive identifications of human remains. But those findings may now be called into question, since a new study from North Carolina State University shows that there is not enough variation in skull shapes to make a positive ID.

“In a lot of cases, murder victims or the victims of disasters are from lower socioeconomic backgrounds and don’t have extensive dental records we can use to make a match,” says Dr. Ann Ross, a forensic expert and professor of anthropology at NC State who is senior author of a paper on the new study. “But those people may have been in car accidents or other incidents that led them to have their skulls X-rayed in emergency rooms or elsewhere. And those skull X-rays have often been used to make IDs. I’ve done it myself.

“But now we’ve tried to validate this technique, and our research shows that the shape of the skull isn’t enough to make a positive ID,” Ross says.

At issue is the “cranial vault outline,” not the “face” of the skull. The cranial vault outline is the profile of the skull when viewed from the side, running from just above the bridge of the nose to the point where the skull and neck meet.

For the study, the researchers surveyed 106 members of the American Academy of Forensic Sciences. Survey participants were asked to evaluate 14 antemortem X-rays and five postmortem X-rays. Participants were then asked to match the five postmortem X-rays with the appropriate antemortem X-rays, effectively establishing a positive ID.

But the researchers found that only 47 percent of the participants made accurate identifications on all five skulls. Participants with Ph.D.s did slightly better, with 56 percent getting all five correct. (The test has been made available here so that anyone can take it.)
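For context, getting all five matches right by pure guessing is vanishingly unlikely, so the 47 percent all-correct rate is far above chance — yet still well short of the reliability expected of a positive identification. A quick illustrative calculation of the chance baseline (assuming each of the five postmortem films is guessed against a distinct antemortem candidate):

```python
from math import perm  # number of ordered selections, available in Python 3.8+

# Matching 5 postmortem X-rays, without repetition, against 14 antemortem films:
# only one of the 14 * 13 * 12 * 11 * 10 ordered assignments is fully correct.
assignments = perm(14, 5)                 # 240240 possible assignments
p_all_correct_by_chance = 1 / assignments # well under one in 200,000
```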

“This doesn’t mean that cranial vault outlines aren’t useful,” says Ashley Maxwell, lead author of the paper and a former graduate student at NC State. “For example, outlines can be valuable if teeth or other features are missing or have been destroyed. But it does mean that cranial vault outlines shouldn’t be given too much weight.

“The more characteristics we can take into account, such as facial features and cranial vault outlines, the more accurate we can be,” Maxwell says.

Filed under cranial vault outline x-rays neuroimaging forensics neuroscience science
