Neuroscience

Articles and news from the latest research reports.

(Image caption: This is the happiness equation, where t is the trial number, w0 is a constant term, other weights w capture the influence of different event types, 0 ≤ γ ≤ 1 is a forgetting factor that makes events in more recent trials more influential than those in earlier trials, CRj is the certain reward (CR) if it was chosen instead of a gamble on trial j, EVj is the expected value (EV) of a gamble (its average reward) if the gamble was chosen on trial j, and RPEj is the reward prediction error (RPE) on trial j contingent on choice of the gamble. The RPE equals the reward received minus the expectation EVj for that trial. If the CR was chosen, then EVj = 0 and RPEj = 0; if the gamble was chosen, then CRj = 0. The variables in the equation are quantities that the neuromodulator dopamine has been associated with in previous neuroscience studies. Credit: Robb Rutledge, UCL)
Equation to predict happiness
The happiness of over 18,000 people worldwide has been predicted by a mathematical equation developed by researchers at UCL, with results showing that moment-to-moment happiness reflects not just how well things are going, but whether things are going better than expected.
The new equation accurately predicts how happy people will say they are from moment to moment based on recent events, such as the rewards they receive and the expectations they have during a decision-making task. Scientists found that overall wealth accumulated during the experiment was not a good predictor of happiness. Instead, moment-to-moment happiness depended on the recent history of rewards and expectations. These expectations depended, for example, on whether the available options could lead to good or bad outcomes.
The study, published in the Proceedings of the National Academy of Sciences, investigated the relationship between happiness and reward, and the neural processes that give rise to feelings central to our conscious experience, such as happiness. It was already known that life events affect an individual’s happiness, but not exactly how happy people will be from moment to moment as they make decisions and receive the outcomes of those decisions, which is what the new equation can predict.
Scientists believe that quantifying subjective states mathematically could help doctors better understand mood disorders, by seeing how self-reported feelings fluctuate in response to events like small wins and losses in a smartphone game. A better understanding of how mood is determined by life events and circumstances, and how that differs in people suffering from mood disorders, will hopefully lead to more effective treatments.
Research examining how and why happiness changes from moment to moment in individuals could also assist governments who deploy population measures of wellbeing to inform policy, by providing quantitative insight into what the collected information means. This is especially relevant to the UK following the launch of the National Wellbeing Programme in 2010 and subsequent annual reports by the Office for National Statistics on ‘Measuring National Wellbeing’.
For the study, 26 subjects completed a decision-making task in which their choices led to monetary gains and losses, and they were repeatedly asked to answer the question ‘how happy are you right now?’. The participants’ neural activity was also measured during the task using functional MRI, and from these data scientists built a computational model in which self-reported happiness was related to recent rewards and expectations. The model was then tested on 18,420 participants playing the game ‘What makes me happy?’ in a smartphone app developed at UCL called 'The Great Brain Experiment'. Scientists were surprised to find that the same equation could be used to predict how happy subjects would be while they played the smartphone game, even though subjects could win only points and not money.
Lead author of the study, Dr Robb Rutledge (UCL Wellcome Trust Centre for Neuroimaging and the new Max Planck UCL Centre for Computational Psychiatry and Ageing), said: “We expected to see that recent rewards would affect moment-to-moment happiness but were surprised to find just how important expectations are in determining happiness. In real-world situations, the rewards associated with life decisions such as starting a new job or getting married are often not realised for a long time, and our results suggest expectations related to these decisions, good and bad, have a big effect on happiness.
"Life is full of expectations - it would be difficult to make good decisions without knowing, for example, which restaurant you like better. It is often said that you will be happier if your expectations are lower. We find that there is some truth to this: lower expectations make it more likely that an outcome will exceed those expectations and have a positive impact on happiness. However, expectations also affect happiness even before we learn the outcome of a decision. If you have plans to meet a friend at your favourite restaurant, those positive expectations may increase your happiness as soon as you make the plan. The new equation captures these different effects of expectations and allows happiness to be predicted based on the combined effects of many past events.
"It’s great that data from the large and varied population using The Great Brain Experiment smartphone app show that the same happiness equation applies to thousands of people worldwide playing our game as to our much smaller laboratory-based experiments, which demonstrates the tremendous value of this approach for studying human wellbeing on a large scale."
The team used functional MRI to demonstrate that neural signals during decisions and outcomes in the task in an area of the brain called the striatum can be used to predict changes in moment-to-moment happiness. The striatum has a lot of connections with dopamine neurons, and signals in this brain area are thought to depend at least partially on dopamine. These results raise the possibility that dopamine may play a role in determining happiness.
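Based on the definitions given in the image caption, the model can be sketched in a few lines of code. The weights and forgetting factor below are illustrative placeholders, not the values fitted in the study:

```python
def predicted_happiness(events, w0=0.0, w_cr=0.5, w_ev=0.5, w_rpe=0.5, gamma=0.7):
    """Sketch of the happiness equation described in the image caption.

    events: one (cr, ev, rpe) tuple per trial. If the certain reward was
    chosen, ev == rpe == 0; if the gamble was chosen, cr == 0. The weights
    and gamma used here are illustrative, not the study's fitted values.
    """
    t = len(events)
    happiness = w0
    for j, (cr, ev, rpe) in enumerate(events, start=1):
        decay = gamma ** (t - j)  # recent trials count more (0 <= gamma <= 1)
        happiness += decay * (w_cr * cr + w_ev * ev + w_rpe * rpe)
    return happiness

# A gamble that pays more than expected (positive RPE) predicts more
# momentary happiness than taking a certain reward of the same size.
baseline = predicted_happiness([(2.0, 0.0, 0.0)])  # chose the certain reward
surprise = predicted_happiness([(0.0, 2.0, 3.0)])  # gambled and beat expectations
```

The forgetting factor is what makes the model track *moment-to-moment* happiness: the same win contributes less and less to predicted happiness as subsequent trials accumulate.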

Filed under happiness reward decision making neural activity neuroimaging striatum dopamine mathematical equation neuroscience science

(Image caption: A schematic of the interactions that occur between the saccade and reach brain systems when deciding where to look and reach. Credit: Bijan Pesaran, New York University)
Complexity of eye-hand coordination
People use their eyes not only to see, but also to move. It takes a fraction of a second to execute the loop that travels from the brain to the eyes, and then to the hands and arms. Bijan Pesaran is trying to figure out what occurs in the brain during this process.
"Eye-hand coordination is the result of a complex interplay between two systems of the brain, but there are many regions where this interaction takes place," says Pesaran, an associate professor of neural science at New York University. "One of the things about the current state of knowledge is that it is focused on the different pieces of the brain and how each works individually. Relatively little work has been done to link how they work together at the cellular level."
The thrust of his research involves studying how neurons in these parts of the brain communicate with one another.
"The cerebral cortex contains a mosaic of brain areas that are connected to form distributed networks," says the National Science Foundation (NSF)-funded scientist. "In the frontal and parietal cortex, these networks are specialized for movements such as saccadic (voluntary) eye movements and reaches, that is, hand and arm movements. Before each movement we decide to make, these areas contain specific patterns of neural activity which can be used to predict what we will do."
A more sophisticated understanding of the brain’s role in eye-hand coordination can be an important model for discovering how brain systems interact to carry out cognitive processes in general, he says. Such insights could lead to new neural technologies that translate thoughts into actions, for example, to control a robotic arm or prompt speech.
"There is a whole new set of technologies called neural prostheses," Pesaran says. "In the future, there could be devices in the brain that will help people remember, to think more clearly, and to help them move."
Using eye movements to prompt hand and arm movements involves building a spatial representation, “which is improved by moving our eyes,” he says. “The command that is sent to the eyes moves the eyes, which effectively measure space when they move, and that is used to improve the accuracy of the reach. We move our eyes to improve our movement, not just to see better.”
He often describes the behavior of high-level ping pong players to explain how this works.
"You keep your eye on the ball so you know where it is, so you can hit it," he says. "But right up until the minute you hit the ball, something important is happening, which is that your brain is sending a command to your arm to hit the ball. But the visual signals are delayed. At the time you hit the ball, the vision of the ball won’t enter your brain for another fraction of a second, so there is no point in looking at the ball. You can look all you want, but your arm already has moved.
"When ping pong players are playing at a high level, they look at the ball up to the point where they hit it. As soon as the paddle makes contact with the ball, you can see their eyes and head turn to look at their opponent. They think they are looking at their opponent when they are hitting the ball, but they are looking at the ball. Their eyes are tracking the ball, even though they are aware of their opponent.
"This helps the brain keep a very high resolution of space to make the stroke more accurate," he continues. "It’s not about seeing the ball, because by then it’s too late. It’s about moving the eyes with the ball so that the stroke is more accurate. And the brain orchestrates this complicated pattern of behavior."
Visual signals are always delayed. They enter the brain, are converted into a movement command, and then leave the brain for the arm muscles. “It’s a loop that takes about 200 milliseconds—about one-fifth of a second—and in that time the ball has moved,” he says.
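The numbers in that quote make the point vividly when worked through. The ball speed below is an assumed, plausible rally figure, not one from the article:

```python
# How far a ping pong ball travels during the ~200 ms visuomotor loop.
LOOP_DELAY_S = 0.2          # brain-to-muscles round trip quoted above
BALL_SPEED_M_PER_S = 10.0   # assumed rally speed, for illustration only

displacement_m = BALL_SPEED_M_PER_S * LOOP_DELAY_S
# displacement_m == 2.0: by the time vision of the ball at contact could be
# acted on, the ball is roughly two metres past where it was last seen,
# which is why the stroke must rely on prediction rather than reaction.
```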
Pesaran is conducting his research under an NSF Faculty Early Career Development (CAREER) award, which he received in 2010. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization.
To test his hypothesis that two regions in the brain (the parietal reach region and the parietal eye field, both in the parietal cortex) must talk to each other to prompt movement, Pesaran and his team are recording the activity of neurons, brain cells that communicate with each other through electrical signals called “spikes.” They do so by placing micro-electrodes into the brains of animals that look and reach much like humans do, and studying the correlations and patterns in those signals.
"We think we can measure these signals when they are leaving one area, and coming into another," he says. "How does this show that this reflects communication between those two areas? Because something happens, something changes. We set up these movements in a particular way that requires communication between the eye and the arm centers, and we then made measurements in the brain from those centers. Then we linked the changes in the activity between the two areas to the changes in how the eyes and arm move."
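A toy version of the kind of analysis described above, asking whether binned spike counts recorded in two areas co-vary, might look like the following. The data here are synthetic, and the team's actual analyses are far richer than a single Pearson correlation:

```python
def spike_correlation(spikes_a, spikes_b):
    """Pearson correlation between two binned spike-count sequences.

    A toy stand-in for asking whether activity in two brain areas
    co-varies; inputs are synthetic spike counts per time bin.
    """
    n = len(spikes_a)
    mean_a = sum(spikes_a) / n
    mean_b = sum(spikes_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(spikes_a, spikes_b))
    var_a = sum((a - mean_a) ** 2 for a in spikes_a)
    var_b = sum((b - mean_b) ** 2 for b in spikes_b)
    return cov / (var_a * var_b) ** 0.5

# Synthetic spike counts per bin: area B roughly tracks area A,
# as one might expect if the two areas were communicating.
area_a = [0, 2, 5, 3, 1, 4, 6, 2]
area_b = [1, 2, 4, 3, 1, 5, 6, 3]
r = spike_correlation(area_a, area_b)
```

Correlation alone cannot establish communication, which is why, as Pesaran describes next, the experiments manipulate the movements so that changes in behavior can be linked to changes in activity between the two areas.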
As part of the grant’s educational component, Pesaran is trying to show youngsters how far neuroscience has come, and encourage them to learn about it. He and his colleagues are working with middle school children in Brooklyn, and have presented demonstrations at the American Museum of Natural History about the field of brain science.
"We go into schools and teach children about what we know about the brain," he says. "We had a brain computer interface, where they had the chance to control the cursor on the screen with their minds. We placed an EEG sensor on their heads, which measures brain activity. When they concentrate, it changes the position of the ball, and moves it up or down."
School children typically are unaware of neuroscience as an emerging field “that involves medicine, biology, engineering, a whole range of disciplines that come together,” he says. “Increasing their sophistication and tools in this discipline early will be a hallmark of the next generation of brain scientists.”

Filed under eye-hand coordination eye movements parietal cortex prosthetics neural activity psychology neuroscience science

Brain tumour cells found circulating in blood
German scientists have discovered rogue brain tumour cells in patient blood samples, challenging the idea that this type of cancer doesn’t generally spread beyond the brain.
Researchers from the University Medical Center Hamburg-Eppendorf, in Hamburg, found that patients with an aggressive form of brain tumour known as glioblastoma multiforme sometimes have tumour cells circulating in their blood.
The discovery could help doctors improve the way they monitor how the disease progresses, and could have implications for treatment.

Filed under glioblastoma multiforme brain cancer blood-brain barrier tumour cells neuroscience science

Making sense of scents
For many animals, making sense of the clutter of sensory stimuli is often a matter of literal life or death.
Exactly how animals separate objects of interest, such as food sources or the scent of predators, from background information, however, remains largely unknown. Even the extent to which animals can make such distinctions, and how differences between scents might affect the process were largely a mystery – until now.
In a new study, described in an August 3 paper in Nature Neuroscience, a team of researchers led by Venkatesh Murthy, Professor of Molecular and Cellular Biology, showed that while mice can be trained to detect specific odorants embedded in random mixtures, their performance drops steadily as the number of background components increases. The team included Dan Rokni, Vikrant Kapoor and Vivian Hemmelder, all from Harvard University.
"There is a continuous stream of information constantly arriving at our senses, coming from many different sources," Murthy said. "The classic example would be a cocktail party – though it may be noisy, and there may be many people talking, we are able to focus our attention on one person, while ignoring the background noise.
"Is the same also true for smells?" he continued. "We are bombarded with many smells all jumbled up. Can we pick out one smell "object" – the smell of jasmine, for example, amidst a riot of other smells? Our experience tells us indeed we can, but how do we pick out the ones that we need to pay attention to, and what are the limitations?"
To find answers to those, and other, questions, Murthy and colleagues turned to mice.
After training mice to detect specific scents, the researchers presented the animals with a combination of smells – sometimes including the “target” scent, sometimes not. Though previous studies had suggested animals are poor at picking out individual smells, instead perceiving a mixture as a single smell, the findings showed that mice were able to identify when a target scent was present with 85 percent accuracy or better.
"Although the mice do well overall, they perform progressively poorer when the number of background odors increases," Murthy explained.
Understanding why, however, meant first overcoming a problem particular to olfaction.
While the relationship between visual stimuli is relatively easy to understand – differences in color can be easily described as differences in the wavelength of light – no such system exists to describe how two odors relate to each other. Instead, the researchers sought to describe scents according to how they activated neurons in the brain.
Using fluorescent proteins, they created images that show how each of 14 different odors stimulated neurons in the olfactory bulb. What they found, Murthy said, was that the ability of mice to identify a particular smell was markedly diminished if background smells activated the same neurons as the target odor.
"Each odor gives rise to a particular spatial pattern of neural responses," Murthy said. "When the spatial pattern of the background odors overlapped with the target odor, the mice did much more poorly at detecting the target. Therefore, the difficulty of picking out a particular smell among a jumble of other odors depends on how much the background interferes with your target smell. So, we were able to give a neural explanation for how well you can solve the cocktail party problem.
"This study is interesting because it first shows that smells are not always perceived as one whole object – they can be broken down into their pieces," he added. "This is perhaps not a surprise – there are in fact coffee or wine specialists that can detect faint whiffs of particular elements within the complex mixture of flavors in each coffee or wine. But by doing these studies in mice, we can now get a better understanding of how the brain does this. One can also imagine that understanding how this is done may also allow us to build artificial olfactory systems that can detect specific chemicals in the air that are buried amidst a plethora of other odors."
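One way to make the overlap idea concrete is to treat each odor's spatial response map as a set of activated olfactory-bulb units and measure how much of the target's pattern the background also drives. This is a hypothetical toy metric with made-up patterns, not the measure used in the study:

```python
def pattern_overlap(target, background):
    """Fraction of the target odor's active units also driven by the background.

    Patterns are sets of activated olfactory-bulb units: a toy stand-in
    for the imaged spatial response maps described in the study.
    """
    if not target:
        return 0.0
    return len(target & background) / len(target)

jasmine = {1, 4, 7, 9}     # hypothetical target odor pattern
coffee_bg = {2, 3, 5, 8}   # background driving entirely different units
vanilla_bg = {1, 4, 5, 9}  # background driving most of the target's units

# Disjoint background: target easy to detect. Overlapping background:
# the target's signature is masked and detection should suffer.
easy = pattern_overlap(jasmine, coffee_bg)    # 0.0
hard = pattern_overlap(jasmine, vanilla_bg)   # 0.75
```

On this toy account, detecting jasmine against coffee should be easy and detecting it against vanilla hard, mirroring the finding that performance fell when background odors activated the same neurons as the target.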

Making sense of scents

For many animals, making sense of the clutter of sensory stimuli is often a matter or literal life or death.

Exactly how animals separate objects of interest, such as food sources or the scent of predators, from background information, however, remains largely unknown. Even the extent to which animals can make such distinctions, and how differences between scents might affect the process were largely a mystery – until now.

A new study, described in an August 3 paper in Nature Neuroscience, a team of researchers led by Venkatesh Murthy, Professor of Molecular and Cellular Biology, showed that while mice can be trained to detect specific odorants embedded in random mixtures, their performance drops steadily with increasing background components. The team included Dan Rokni, Vikrant Kapoor and Vivian Hemmelder, all from Harvard University.

"There is a continuous stream of information constantly arriving at our senses, coming from many different sources," Murthy said. "The classic example would be a cocktail party – though it may be noisy, and there may be many people talking, we are able to focus our attention on one person, while ignoring the background noise.

"Is the same also true for smells?" he continued. "We are bombarded with many smells all jumbled up. Can we pick out one smell "object" – the smell of jasmine, for example, amidst a riot of other smells? Our experience tells us indeed we can, but how do we pick out the ones that we need to pay attention to, and what are the limitations?"

To find answers to those, and other, questions, Murthy and colleagues turned to mice.

After training mice to detect specific scents, researchers presented the animals with a combination of smells – sometimes including the “target” scent, sometimes not. Though previous studies had suggested animals are poor at picking out individual smells and instead perceive a mixture as a single smell, the findings showed that mice were able to identify when a target scent was present with 85 percent accuracy or better.

"Although the mice do well overall, they perform progressively poorer when the number of background odors increases," Murthy explained.

Understanding why, however, meant first overcoming a problem particular to olfaction.

While the relationship between visual stimuli is relatively easy to understand – differences in color can be easily described as differences in the wavelength of light – no such system exists to describe how two odors relate to each other. Instead, the researchers sought to describe scents according to how they activated neurons in the brain.

Using fluorescent proteins, they created images that show how each of 14 different odors stimulated neurons in the olfactory bulb. What they found, Murthy said, was that the ability of mice to identify a particular smell was markedly diminished if background smells activated the same neurons as the target odor.

"Each odor gives rise to a particular spatial pattern of neural responses," Murthy said. "When the spatial pattern of the background odors overlapped with the target odor, the mice did much more poorly at detecting the target. Therefore, the difficulty of picking out a particular smell among a jumble of other odors, depends on how much the background interferes with your target smell. So, we were able to give a neural explanation for how well you can solve the cocktail party problem.

"This study is interesting because it first shows that smells are not always perceived as one whole object – they can be broken down into their pieces," he added. "This is perhaps not a surprise – there are in fact coffee or wine specialists that can detect faint whiffs of particular elements within the complex mixture of flavors in each coffee or wine. But by doing these studies in mice, we can now get a better understanding of how the brain does this. One can also imagine that understanding how this is done may also allow us to build artificial olfactory systems that can detect specific chemicals in the air that are buried amidst a plethora of other odors."

Filed under olfactory system olfaction scents animal model neurons neuroscience science

218 notes

A little video gaming ‘produces well-adjusted children’
Playing video games for a short period each day could have a small but positive impact on child development, a study by Oxford University suggests.
Scientists found young people who spent less than an hour a day engaged in video games were better adjusted than those who did not play at all.
But children who used consoles for more than three hours reported lower satisfaction with their lives overall.
The research is published in the journal Pediatrics.
Read more


Filed under video games children psychosocial adjustment social interaction psychology neuroscience science

142 notes

Small DNA modifications predict brain’s threat response

The tiny addition of a chemical mark atop a gene that is well known for its involvement in clinical depression and posttraumatic stress disorder can affect the way a person’s brain responds to threats, according to a new study by Duke University researchers.

The results, which appear online August 3 in Nature Neuroscience, go beyond genetics to help explain why some individuals may be more vulnerable than others to stress and stress-related psychiatric disorders.

The study focused on the serotonin transporter, a molecule that regulates the amount of serotonin signaling between brain cells and is a major target for treatment of depression and mood disorders. In the 1990s, scientists discovered that differences in the DNA sequence of the serotonin transporter gene seemed to give some individuals exaggerated responses to stress, including the development of depression.


(Image caption: An artist’s conception shows how molecules called methyl groups attach to a specific stretch of DNA, changing expression of the serotonin transporter gene in a way that ultimately shapes individual differences in the brain’s reactivity to threat. The methyl groups in this diagram are overlaid on the amygdala of the brain, where threat perception occurs. Credit: Annchen Knodt, Duke University)

Sitting on top of the serotonin transporter’s DNA (and studding the entire genome), are chemical marks called methyl groups that help regulate where and when a gene is active, or expressed. DNA methylation is one form of epigenetic modification being studied by scientists trying to understand how the same genetic code can produce so many different cells and tissues as well as differences between individuals as closely related as twins.

In looking for methylation differences, “we decided to start with the serotonin transporter because we know a lot about it biologically, pharmacologically, behaviorally, and it’s one of the best characterized genes in neuroscience,” said senior author Ahmad Hariri, a professor of psychology and neuroscience and member of the Duke Institute for Brain Sciences.

"If we’re going to make claims about the importance of epigenetics in the human brain, we wanted to start with a gene that we have a fairly good understanding of," Hariri said.

This work is part of the ongoing Duke Neurogenetics Study (DNS), a comprehensive study linking genes, brain activity and other biological markers to risk for mental illness in young adults.

The group performed non-invasive brain imaging in the first 80 college-aged participants of the DNS, showing them pictures of angry or fearful faces and watching the responses of a deep brain region called the amygdala, which helps shape our behavioral and biological responses to threat and stress.

The team also measured the amount of methylation on serotonin transporter DNA isolated from the participants’ saliva, in collaboration with Karestan Koenen at Columbia University’s Mailman School of Public Health in New York.

The greater the methylation of an individual’s serotonin transporter gene, the greater the reactivity of the amygdala, the study found. Increased amygdala reactivity may in turn contribute to an exaggerated stress response and vulnerability to stress-related disorders.

To the group’s surprise, even small methylation variations were sufficient to create differences in amygdala reactivity between individuals, said lead author Yuliya Nikolova, a graduate student in Hariri’s group. The amount of methylation was a better predictor of amygdala activity than DNA sequence variation, which had previously been associated with risk for depression and anxiety.

The team was excited about the discovery but also cautious, Hariri said, because there have been many findings in genetics that were never replicated.

That’s why they jumped at the chance to look for the same pattern in a different set of participants, this time in the Teen Alcohol Outcomes Study (TAOS) at the University of Texas Health Science Center at San Antonio.

Working with TAOS director, Douglas Williamson, the group again measured amygdala reactivity to angry and fearful faces as well as methylation of the serotonin transporter gene isolated from blood in 96 adolescents between 11 and 15 years old. The analyses revealed an even stronger link between methylation and amygdala reactivity.

"Now over 10 percent of the differences in amygdala function mapped onto these small differences in methylation," Hariri said. The DNS study had found just under 7 percent.

Taking the study one step further, the group also analyzed patterns of methylation in the brains of dead people in collaboration with Etienne Sibille at the University of Pittsburgh, now at the Centre for Addiction and Mental Health in Toronto.

Once again, they saw that methylation of a single spot in the serotonin transporter gene was associated with lower levels of serotonin transporter expression in the amygdala.

"That’s when we thought, ‘Alright, this is pretty awesome,’" Hariri said.

Hariri said the work reveals a compelling mechanistic link: Higher methylation is generally associated with less reading of the gene, and that’s what they saw. He said methylation dampens expression of the gene, which then affects amygdala reactivity, presumably by altering serotonin signaling.

The researchers would now like to see how methylation of this specific bit of DNA affects the brain. In particular, this region of the gene might serve as a landing place for cellular machinery that binds to the DNA and reads it, Nikolova said.

The group also plans to look at methylation patterns of other genes in the serotonin system that may contribute to the brain’s response to threatening stimuli.

The fact that serotonin transporter methylation patterns were similar in saliva, blood and brain also suggests that these patterns may be passed down through generations rather than acquired by individuals based on their own experiences.

Hariri said he hopes that other researchers looking for biomarkers of mental illness will begin to consider methylation above and beyond DNA sequence-based variation and across different tissues.

(Source: eurekalert.org)

Filed under methylation serotonin serotonin transporter amygdala DNA sequence neuroscience science

189 notes

(Image caption: Brain image showing activity in the amygdala, the area of the brain involved with emotion. The amygdala was more active during the graphic scenarios only when the harm being described was intentional. Credit: Marois Lab / Vanderbilt)
Fault trumps gruesome evidence when it comes to meting out punishment
Issues of crime and punishment, vengeance and justice date back to the dawn of human history, but it is only in the last few years that scientists have begun exploring the basic nature of the complex neural processes in the brain that underlie these fundamental behaviors.
Now a new brain imaging study – published online Aug. 3 by the journal Nature Neuroscience – has identified the brain mechanisms that underlie our judgment of how severely a person who has harmed another should be punished. Specifically, the study determined how the area of the brain that determines whether such an act was intentional or unintentional trumps the emotional urge to punish the person, however gruesome the harm may be.
“A fundamental aspect of the human experience is the desire to punish harmful acts, even when the victim is a perfect stranger. Equally important, however, is our ability to put the brakes on this impulse when we realize the harm was done unintentionally,” said Rene Marois, the Vanderbilt University professor of psychology who headed the research team. “This study helps us begin to elucidate the neural circuitry that permits this type of regulation.”
The study
In the experiment, the brains of 30 volunteers (20 male, 10 female, average age 23 years) were imaged using functional MRI (fMRI) while they read a series of brief scenarios that described how the actions of a protagonist named John brought harm to either Steve or Mary. The scenarios depicted four different levels of harm: death, maiming, physical assault and property damage. In half of them, the harm was clearly identified as intentional and in half it was clearly identified as unintentional.
Two versions of each scenario were created: one with a factual description of the harm and the other with a graphic description. For example, in a mountain climbing scenario where John cuts Steve’s rope, the factual version states, “Steve falls 100 feet to the ground below. Steve experiences significant bodily harm from the fall and he dies from his injuries shortly after impact.” And the graphic version reads, “Steve plummets to the rocks below. Nearly every bone in his body is broken upon impact. Steve’s screams are muffled by thick, foamy blood flowing from his mouth as he bleeds to death.”
After reading each scenario, the participants were asked to rate how much punishment John deserved on a scale from zero (no punishment) to nine (most severe punishment the subject endorsed).
Analysis of the responses
When the responses were analyzed, the researchers found that the manner in which the harmful consequences of an action are described significantly influences the level of punishment that people consider appropriate: When the harm was described in a graphic or lurid fashion then people set the punishment level higher than when it was described matter-of-factly. However, this higher punishment level only applied when the participants considered the resulting harm to be intentional. When they considered it to be unintentional, the way it was described didn’t have any effect.
“What we’ve shown is that manipulations of gruesome language leads to harsher punishment, but only in cases where the harm was intentional. Language had no effect when the harm was caused unintentionally,” summarized Michael Treadway, a post-doctoral fellow at Harvard Medical School and lead author of the study.
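The pattern Treadway summarizes is a classic 2×2 interaction: language (factual vs. graphic) crossed with intent (intentional vs. unintentional). A sketch with hypothetical mean ratings – invented purely to illustrate the shape of the result, not taken from the paper:

```python
# Hypothetical mean punishment ratings on the study's 0–9 scale, invented
# to illustrate the reported pattern: graphic language raises punishment
# only when the harm is intentional.
means = {
    ("intentional", "factual"): 5.0,
    ("intentional", "graphic"): 5.8,
    ("unintentional", "factual"): 1.5,
    ("unintentional", "graphic"): 1.5,
}

def language_effect(intent):
    """How much the graphic description adds over the factual one."""
    return means[(intent, "graphic")] - means[(intent, "factual")]

# Interaction contrast: the language effect with intent minus the language
# effect without intent. A positive value is the pattern the study reports.
interaction = language_effect("intentional") - language_effect("unintentional")
print(round(language_effect("intentional"), 1))
print(round(language_effect("unintentional"), 1))
print(round(interaction, 1))
```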
According to the researchers, the fact that the mere presence of graphic language could cause participants to ratchet up the severity of the punishments suggests that photographs, video and other graphic materials sampled from a crime scene are likely to have an even stronger impact on an individual’s desire to punish.
“Although the underlying scientific basis of this effect wasn’t known until now, the legal system recognized it a long time ago and made provisions to counteract it,” said Treadway. “Judges are permitted to exclude relevant evidence from a trial if they decide that its probative value is substantially outweighed by its prejudicial nature.”
Underlying neuroanatomy
The fMRI scans revealed the areas of the brain that are involved in this complex process. They found that the amygdala, an almond-shaped set of neurons that plays a key role in processing emotions, responded most strongly to the graphic language condition. Like the punishment ratings themselves, however, this effect in the amygdala was only present when harm was done intentionally. Moreover, in this situation the researchers found that the amygdala showed stronger communication with the dorsolateral prefrontal cortex (dlPFC), an area that is critical for punishment decision-making. When the harm was done unintentionally, however, a different regulatory network – one involved in decoding the mental states of other people – became more active and appeared to suppress amygdala responses to the graphic language, thereby preventing the amygdala from affecting decision-making areas in dlPFC.
“This is basically a reassuring finding,” said Marois. “It indicates that, when the harm is not intended, we don’t simply shunt aside the emotional impulse to punish. Instead, it appears that the brain down-regulates the impulse so we don’t feel it as strongly. That is preferable because the urge to punish is less likely to resurface at a future date.”


Filed under brain imaging amygdala prefrontal cortex punishment psychology neuroscience science

679 notes

Do we really only use 10% of our brain?

As the new film Lucy, starring Scarlett Johansson and Morgan Freeman, is set to be released in cinemas this week, I feel I should attempt to dispel the unfounded premise of the film – that we only use 10% of our brains. Let me state that there is no scientific evidence to support this statement; it is simply a myth.
The concept behind the film is that through the administration of a new cognitive-enhancing drug, our female lead character, Lucy, becomes able to harness powerful mental capabilities and enhanced physical abilities. These include telekinesis, mental time travel and the ability to absorb information instantaneously. On this premise, the human brain is essentially capable of these feats; we simply fail to use its full capacity. So if we could unlock the “unused” 90% of the brain, we too could be geniuses with superpowers?

Read more


Filed under 10% of brain brain function Lucy psychology neuroscience science

122 notes

Not too early for maths
Bad maths grades, poor participation in class, no interest in arithmetic. Preterm children often suffer from dyscalculia – at least according to some scientific studies. A misunderstanding, claims developmental psychologist Dr Julia Jäkel, who has been studying the performance of preterm children.
Thanks to modern medicine, the percentage of preterm survivors is constantly increasing. On the cognitive level, these children frequently have long-term problems such as poor arithmetic skills and difficulty concentrating. For a long time, research focused on high-risk children, born before 32 weeks gestational age or weighing less than 1,500 grams. Recent studies, however, show that this approach is too short-sighted.
Dr Julia Jäkel from the Department of Developmental Psychology has analysed cognitive abilities of children born between 23 and 41 weeks gestation. In doing so, she covered the entire spectrum, ranging from extremely preterm to healthy term born infants. For this purpose, she used data of the Bavarian Longitudinal Study, which has been following a birth cohort from the late 80s until today. “Having access to such a comprehensive long-term study is a dream come true for every developmental psychologist,” says the Bochum researcher. Over the course of the study, all children underwent a whole battery of tests that assessed their cognitive and educational abilities, and their parents were interviewed in depth.
The RUB researcher has so far mainly focused on data collected at preschool and early school age. For different test tasks, she assessed their cognitive workload, a measure of the complexity of a given task. The data showed that preterm children had greater difficulties with tasks that demanded higher working memory resources. Moreover, results revealed that not only high-risk children had significant difficulties. On average, the more preterm a child had been born, the poorer were his or her abilities to solve complex tasks.
But what exactly is the nature of these difficulties? It has frequently been suggested that preterm children suffer from dyscalculia – a phenomenon that Julia Jäkel examined more closely. “Mathematical deficiencies, maths learning disorder, dyscalculia, innumeracy – these terms’ definitions vary slightly,” she explains, but there are no standardised, internationally consistent diagnostic criteria. In order to assess specific maths deficiencies, children in Germany are assessed with a number of tests. If their results fall below a certain cut-off value in maths while their cognitive skills (IQ) are in the normal range, they are diagnosed with “maths learning disorder” or “dyscalculia”.
“The problem with preterm children, however, is that they often have general cognitive deficits,” Julia Jäkel points out. “According to current criteria, these children can’t be diagnosed.” Together with Dieter Wolke from the University of Warwick, UK, she compared different diagnostic criteria for dyscalculia in her analysis. The aim of the study was to identify specific maths deficiencies in preterm children that were independent of general cognitive impairments. With surprising results: “There is no specific maths deficit in preterm children if their general IQ is factored in,” says the researcher.
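The discrepancy rule at the heart of this problem can be made concrete in a few lines. The cut-offs below are hypothetical stand-ins for the German test norms, which the article does not specify:

```python
# Sketch of the discrepancy-based diagnostic rule described above, with
# illustrative thresholds: dyscalculia is only diagnosed when the maths
# score is impaired AND IQ is in the normal range.

MATHS_CUTOFF = 85   # hypothetical standardised-score cut-off for maths
IQ_NORMAL_MIN = 85  # hypothetical lower bound of the "normal" IQ range

def diagnose_dyscalculia(maths_score, iq):
    return maths_score < MATHS_CUTOFF and iq >= IQ_NORMAL_MIN

# A term-born child with a specific maths deficit meets the criteria...
print(diagnose_dyscalculia(maths_score=78, iq=102))
# ...but a preterm child with the same maths score and a general cognitive
# deficit falls through the criteria and receives no diagnosis.
print(diagnose_dyscalculia(maths_score=78, iq=80))
```

The second case is exactly the gap Jäkel describes: the maths difficulty is real, but the rule cannot register it when IQ is also low.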
This means that preterm children do not suffer from dyscalculia more often than term children. However, they often have maths difficulties and these may not be recognized. This is because the current criteria make it impossible to diagnose dyscalculia if a child also has general cognitive deficits. Thus, these children do not receive specific help in maths although they may be in urgent need. “We need reliable and consistent diagnostic criteria,” demands Julia Jäkel. “And we’ve got to find ways to actually deliver support in schools.”
Together with her British team, the psychologist compared the results of the Bavarian Longitudinal Study with “EPICure” data, a similar study that commenced in the UK in the 1990s, following a cohort of extremely preterm children. The researchers focused on mathematical and educational performance. British preterm children had similar cognitive and basic numerical skills as German preterm children. In terms of maths achievement, however, they showed significantly better results. “We explain this with the fact that, unlike in Germany, in the UK it has not been possible for children to delay school entry,” explains Julia Jäkel. “In addition, special schools are attended by only a small percentage of extremely disabled children. All other children are integrated into normal classes in regular schools and receive targeted support there.”
The developmental psychologist has already demonstrated that assistance at primary-school age can really make a difference. Parents who support their preterm children with sensitive scaffolding can compensate the negative cognitive effects of preterm birth. It is helpful, for example, if parents give their children appropriate feedback to homework tasks and suggest potential solutions, rather than solving the tasks for the child. However, Julia Jäkel believes that a lot of research is yet to be done as far as intervention is concerned: “A large percentage of parents is very dedicated and has resources to help their children,” she says. “But research has not yet produced anything that would ensure successful results in the long-term.” Together with colleagues from the university hospital in Essen, the RUB researcher plans to investigate the benefits of computer-aided working memory training for preterm children’s school success, which has already been successfully applied on an international level.
It would also be helpful if findings from related disciplines, such as developmental psychology, educational research, and neonatal medicine, were better integrated. This is because neonatal medical treatment, for example, can significantly affect later cognitive performance. Together with her interdisciplinary team, Julia Jäkel used a comprehensive model to analyse to what extent different neonatal medical indicators affect cognitive development at age 20 months, attention abilities at age six, and maths abilities at age eight years. In her analyses, she factored in child sex and socio-economic status.
Results showed that neonatal medical variables, e.g., the duration of mechanical ventilation, predicted cognitive abilities at age 20 months. Both factors together predicted attention regulation at age six years. And all those precursors, in turn, affected long-term general maths abilities.
Subsequently, Julia Jäkel analysed the data once again from a different perspective, in order to predict specific maths skills that were independent of the child’s IQ. In that model, only two variables had direct impact: the duration of mechanical ventilation and hospitalisation after birth. In the 1980s, when children participating in the Bavarian Longitudinal Study were born, German doctors often used invasive ventilation methods. Today, less invasive methods are available, but to what extent they may affect long-term cognitive performance has not yet been investigated.
“Both too high and too low oxygen concentrations are harmful to brain development,” explains Julia Jäkel. “The neonatologist in charge is faced with the great challenge of determining the right dose for each infant, depending on individually changing situations.” This is why it is so important to integrate psychological models with neonatal intensive care research. The joint objective is to offer preterm children the chance of a successful school career, high quality of life and social participation.

Not too early for maths

Bad maths grades, poor participation in class, no interest in arithmetic. Preterm children often suffer from dyscalculia – at least according to some scientific studies. A misunderstanding, claims developmental psychologist Dr Julia Jäkel, who has been studying the performance of preterm children.

Thanks to modern medicine, the percentage of preterm survivors is constantly increasing. On the cognitive level, these children frequently have long-term problems such as poor arithmetic skills and difficulty concentrating. For a long time, research focused on high-risk children, born before 32 weeks gestational age or with less than 1,500 gram. Current studies from the most recent years, however, show that this approach is too short-sighted.

Dr Julia Jäkel from the Department of Developmental Psychology has analysed cognitive abilities of children born between 23 and 41 weeks gestation. In doing so, she covered the entire spectrum, ranging from extremely preterm to healthy term born infants. For this purpose, she used data of the Bavarian Longitudinal Study, which has been following a birth cohort from the late 80s until today. “Having access to such a comprehensive long-term study is a dream come true for every developmental psychologist,” says the Bochum researcher. Over the course of the study, all children underwent a whole battery of tests that assessed their cognitive and educational abilities, and their parents were interviewed in depth.

The RUB researcher has so far focused mainly on data collected at preschool and early school age. For each test task, she assessed its cognitive workload, a measure of the task’s complexity. The data showed that preterm children had greater difficulties with tasks that demanded more working-memory resources. Moreover, the results revealed that significant difficulties were not confined to high-risk children: on average, the more preterm a child had been born, the poorer his or her ability to solve complex tasks.

But what exactly is the nature of these difficulties? It has frequently been suggested that preterm children suffer from dyscalculia, a phenomenon that Julia Jäkel examined more closely. “Mathematical deficiencies, maths learning disorder, dyscalculia, innumeracy – these terms’ definitions vary slightly,” she explains, but there are no standardised, internationally consistent diagnostic criteria. In Germany, children are given a number of standardised tests to assess specific maths deficits. If their results fall below a certain cut-off value in maths while their cognitive skills (IQ) are in the normal range, they are diagnosed with “maths learning disorder” or “dyscalculia”.

“The problem with preterm children, however, is that they often have general cognitive deficits,” Julia Jäkel points out. “According to current criteria, these children can’t be diagnosed.” Together with Dieter Wolke from the University of Warwick, UK, she compared different diagnostic criteria for dyscalculia in her analysis. The aim of the study was to identify specific maths deficits in preterm children that were independent of general cognitive impairments. The result was surprising: “There is no specific maths deficit in preterm children if their general IQ is factored in,” says the researcher.

This means that preterm children do not suffer from dyscalculia more often than term-born children. They do, however, often have maths difficulties that may go unrecognised, because the current criteria make it impossible to diagnose dyscalculia in a child who also has general cognitive deficits. As a result, these children receive no specific help in maths, although they may urgently need it. “We need reliable and consistent diagnostic criteria,” demands Julia Jäkel. “And we’ve got to find ways to actually deliver support in schools.”
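The discrepancy criterion described above can be written down as a simple rule. The following is a minimal sketch; the cut-off values (standard scores with a mean of 100) are hypothetical stand-ins, not the actual German test norms:

```python
def has_dyscalculia(maths_score, iq, maths_cutoff=85, iq_normal_min=85):
    """Discrepancy-based diagnosis: maths score below the cut-off
    while general cognitive ability (IQ) is in the normal range.
    The cut-off values here are illustrative assumptions only."""
    return maths_score < maths_cutoff and iq >= iq_normal_min

# A term-born child with normal IQ and poor maths is diagnosed ...
print(has_dyscalculia(maths_score=78, iq=102))  # True

# ... but a preterm child with the same maths score and a general
# cognitive deficit cannot be diagnosed under the same criterion,
# and so receives no targeted maths support.
print(has_dyscalculia(maths_score=78, iq=80))   # False
```

The second case is exactly the gap the researchers point to: the rule filters out the children whose low IQ accompanies their maths difficulties.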

Together with her British team, the psychologist compared the results of the Bavarian Longitudinal Study with data from “EPICure”, a similar study that began in the UK in the 1990s, following a cohort of extremely preterm children. The researchers focused on mathematical and educational performance. British preterm children had cognitive and basic numerical skills similar to those of their German counterparts. In terms of maths achievement, however, they showed significantly better results. “We explain this with the fact that, unlike in Germany, in the UK it has not been possible for children to delay school entry,” explains Julia Jäkel. “In addition, special schools are attended by only a small percentage of extremely disabled children. All other children are integrated into normal classes in regular schools and receive targeted support there.”

The developmental psychologist has already demonstrated that assistance at primary-school age can make a real difference. Parents who support their preterm children with sensitive scaffolding can compensate for the negative cognitive effects of preterm birth. It helps, for example, if parents give their children appropriate feedback on homework tasks and suggest potential approaches, rather than solving the tasks for the child. However, Julia Jäkel believes that much research on interventions remains to be done: “Many parents are very dedicated and have the resources to help their children,” she says. “But research has not yet produced anything that would ensure successful results in the long term.” Together with colleagues from the university hospital in Essen, the RUB researcher plans to investigate whether computer-aided working-memory training, which has already been applied successfully internationally, can benefit preterm children’s school success.

It would also be helpful if findings from related disciplines, such as developmental psychology, educational research, and neonatal medicine, were better integrated, not least because neonatal medical treatment can significantly affect later cognitive performance. Together with her interdisciplinary team, Julia Jäkel used a comprehensive model to analyse the extent to which different neonatal medical indicators affect cognitive development at 20 months, attention abilities at six years, and maths abilities at eight years. In her analyses, she factored in the children’s sex and socio-economic status.

The results showed that neonatal medical variables, such as the duration of mechanical ventilation, predicted cognitive abilities at 20 months. Together, these two factors predicted attention regulation at six years, and all of these precursors, in turn, affected long-term general maths abilities.
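The cascade just described can be sketched as a chain of simple regressions. The following is a minimal illustration on simulated data; the variable names and all effect sizes are invented and do not come from the Bavarian Longitudinal Study:

```python
import random

random.seed(0)
n = 500

# Simulated neonatal predictor: days of mechanical ventilation.
ventilation = [random.gammavariate(2.0, 3.0) for _ in range(n)]

# Each stage depends on the previous one plus noise, mirroring the
# reported cascade (all coefficients are invented for illustration).
cognition_20m = [-0.5 * v + random.gauss(0, 2) for v in ventilation]
attention_6y = [0.6 * c + random.gauss(0, 2) for c in cognition_20m]
maths_8y = [0.7 * a + random.gauss(0, 2) for a in attention_6y]

def ols_slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Fitting each link of the path recovers effects close to the
# simulated ones (about -0.5, 0.6 and 0.7, respectively).
print(ols_slope(ventilation, cognition_20m))
print(ols_slope(cognition_20m, attention_6y))
print(ols_slope(attention_6y, maths_8y))
```

A full path analysis would estimate all links simultaneously and test indirect effects; the chain of separate regressions above only conveys the structure of such a model.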

Subsequently, Julia Jäkel analysed the data once again from a different perspective, in order to predict specific maths skills that were independent of the child’s IQ. In that model, only two variables had a direct impact: the duration of mechanical ventilation and of hospitalisation after birth. In the 1980s, when the children participating in the Bavarian Longitudinal Study were born, German doctors often used invasive ventilation methods. Today, less invasive methods are available, but to what extent they may affect long-term cognitive performance has not yet been investigated.

“Both too high and too low oxygen concentrations are harmful to brain development,” explains Julia Jäkel. “The neonatologist in charge is faced with the great challenge of determining the right dose for each infant, depending on individually changing situations.” This is why it is so important to integrate psychological models with neonatal intensive care research. The joint objective is to offer preterm children the chance of a successful school career, high quality of life and social participation.

Filed under dyscalculia mathematics cognitive development brain development children psychology neuroscience science

125 notes

Clues to curbing obesity found in neuronal ‘sweet spot’
Preventing weight gain, obesity, and ultimately diabetes could be as simple as keeping a nuclear receptor from being activated in a small part of the brain, according to a new study by Yale School of Medicine researchers.
Published in the Aug. 1 issue of The Journal of Clinical Investigation (JCI), the study showed that when the researchers blocked the effects of the nuclear receptor PPARgamma in a small number of brain cells in mice, the animals ate less and became resistant to a high-fat diet.
“These animals ate fat and sugar, and did not gain weight, while their control littermates did,” said lead author Sabrina Diano, professor in the Department of Obstetrics, Gynecology & Reproductive Sciences at Yale School of Medicine. “We showed that the PPARgamma receptor in neurons that produce POMC could control responses to a high-fat diet without resulting in obesity.”
POMC neurons are found in the hypothalamus and regulate food intake. They are the neurons that, when activated, make you feel full and curb appetite. PPARgamma regulates the activation of these neurons.
Diano and her team studied transgenic mice that were genetically engineered to delete the PPARgamma receptor from POMC neurons. They wanted to see if they could prevent the obesity associated with a high-fat, high-sugar diet.
“When we blocked PPARgamma in these hypothalamic cells, we found an increased level of free radical formation in POMC neurons, and they were more active,” said Diano, who is also professor of comparative medicine and neurobiology at Yale and director of the Reproductive Neurosciences Group.
The findings also have key implications for diabetes. PPARgamma is the target of thiazolidinediones (TZDs), a class of drugs used to treat type 2 diabetes. These drugs lower blood-glucose levels; however, patients gain weight on them.
“Our study suggests that the increased weight gain in diabetic patients treated with TZD could be due to the effect of this drug in the brain. Targeting peripheral PPARgamma to treat type 2 diabetes should therefore be done by developing TZD compounds that can’t penetrate the brain,” said Diano. “We could keep the benefits of TZD without the side-effect of weight gain. Our next steps in this research are to test this theory in diabetes mouse models.”

Filed under obesity neurons PPARgamma receptor diabetes hypothalamus medicine science
