Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

Kim Peek, The Real Rain Man
Kim Peek, who lent inspiration to the fictional character Raymond Babbitt—played by Dustin Hoffman—in the movie Rain Man, was a remarkable savant. A savant is an individual who—with little or no apparent effort—completes intellectual tasks that would be impossible for ordinary people to master.
Kim Peek’s special abilities emerged early, around the age of a year and a half. He could read both pages of an open book at once, one page with each eye, and he read this way until his death in 2009. His reading comprehension was equally impressive: he retained roughly 98 percent of the information he read. Since he spent most of his days in the library with his dad, he quickly made it through thousands of books, encyclopedias and maps. He could read a thick book in an hour and remember just about everything in it. Because he could quickly absorb loads of information and recall it when necessary, his condition made him a living encyclopedia and a walking GPS. He could provide driving directions between almost any two cities in the world. He could also do calendar calculations (“which day was June 15, 1632?”) and remember old baseball scores and a vast amount of musical, historical and political facts. His memory abilities were astounding.
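The calendar feat described above can be reproduced mechanically. Here is a minimal sketch in Python; note that Python’s datetime uses the proleptic Gregorian calendar, so for dates before a country adopted the Gregorian reform (England, for instance, switched in 1752), the answer may differ from the historical civil calendar in use at the time.

```python
from datetime import date

def day_of_week(year, month, day):
    """Name the weekday for a date, using Python's proleptic
    Gregorian calendar (historical civil dates may differ for
    years before the Gregorian reform was adopted locally)."""
    return date(year, month, day).strftime("%A")

print(day_of_week(1632, 6, 15))  # → Tuesday (proleptic Gregorian)
```

Savants like Peek are thought to use learned regularities of the calendar rather than explicit arithmetic, but the underlying mapping from date to weekday is exactly this deterministic.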
Unlike many individuals with savant syndrome, Kim Peek did not have an autism spectrum disorder. Though he was strongly introverted, he did not have difficulties with social understanding and communication. The main cause of his remarkable abilities seems to have been the lack of connections between his brain’s two hemispheres: an MRI scan revealed an absence of the corpus callosum, the anterior commissure and the hippocampal commissure, the structures that transfer information between the hemispheres. In some sense, Kim was a natural-born split-brain patient.

Filed under ACC Kim Peek congenital disorders corpus callosum memory savants split-brain neuroscience science

Brain displays an intrinsic mechanism for fighting infection
White blood cells have long reigned as the heroes of the immune system. When an infection strikes, these cells, produced in bone marrow, race through the blood to fight off the pathogen. But emerging research shows that individual organs can also play a role in immune system defense, essentially acting as their own heroes. In a study examining a rare and deadly brain infection, scientists at The Rockefeller University have found that the brain cells of healthy people likely produce their own immune system molecules, demonstrating an “intrinsic immunity” that is crucial for stopping an infection.
Shen-Ying Zhang, a clinical scholar in the St. Giles Laboratory of Human Genetics of Infectious Diseases, has been studying children with Herpes simplex encephalitis, a life-threatening brain infection from the herpes virus, HSV-1, that can cause significant brain damage. The scientists already knew from previous work that children with this encephalitis have a genetic defect that impairs the function of an immune system receptor — toll-like receptor 3 (TLR3) — in the brain. For this study they wanted to see how the defect in TLR3 was hampering the brain’s ability to fight the herpes infection.
When TLR3 detects a pathogen it triggers an immune response causing the release of proteins called interferons to sound the alarm and “interfere” with the pathogen’s replication. It’s most commonly associated with white blood cells, found throughout the body, but here the researchers were examining the receptor’s presence on neurons and other brain cells.
“One interesting thing about these patients is that they didn’t have any of the other, more common herpes symptoms. They didn’t have an infection on their skin or their mouths, just in their brains. We therefore hypothesized that the TLR3 response must be specifically responsible for keeping the herpes virus from infecting the brain and not necessary in other parts of the body,” says Zhang.
The lab, headed by Jean-Laurent Casanova, collaborated with scientists at Harvard Medical School and Memorial Sloan-Kettering Cancer Center to create induced pluripotent stem cells. Made from the patients’ own tissue, the stem cells were developed into central nervous system cells that carried the patients’ genetic defects. Zhang exposed the cells to HSV-1 and to synthetic double-stranded RNA, which mimics a byproduct of the virus that spurs the toll-like receptors into action. By measuring levels of interferon, Zhang showed that the patients’ TLR3 response was indeed faulty; their cells weren’t making these important immune system proteins, leaving them unable to fight off the infection.
Zhang also exposed the patients’ blood cells to the virus and found that the TLR3 defect was not an issue there as it was in the brain — interferons were released by other means.
Because the toll-like receptors on neurons proved to be vital in preventing the encephalitis infection, the researchers concluded that brain cells use these receptors as an in-house mechanism to fight infection, rather than relying on white blood cells. When the receptors’ function was impaired, patients couldn’t recover.
“This is evidence of an intrinsic immunity, a newly-discovered function of the immune system,” says Zhang. “It’s likely that other organs also have their own specific tools for fighting infection.”
The researchers are putting together a pilot study to test an interferon-based treatment in patients with the encephalitis, believing it will help speed recovery and increase the survival rate when used alongside antiviral drugs. They’ll also explore whether the brain displays an intrinsic immunity to other types of viral infection.

Filed under brain brain infection white blood cells immune system encephalitis neuroscience science

Why overlearned sequences are special: distinct neural networks for ordinal sequences
Several observations suggest that overlearned ordinal categories (e.g., letters, numbers, weekdays, months) are processed differently than non-ordinal categories in the brain. In synesthesia, for example, anomalous perceptual experiences are most often triggered by members of ordinal categories (Rich et al., 2005; Eagleman, 2009). In semantic dementia (SD), the processing of ordinal stimuli appears to be preserved relative to non-ordinal ones (Cappelletti et al., 2001). Moreover, ordinal stimuli often map onto unconscious spatial representations, as observed in the SNARC effect (Dehaene et al., 1993; Fias, 1996). At present, little is known about the neural representation of ordinal categories. Using functional neuroimaging, we show that words in ordinal categories are processed in a fronto-temporo-parietal network biased toward the right hemisphere. This differs from words in non-ordinal categories (such as names of furniture, animals, cars, and fruit), which show an expected bias toward the left hemisphere. Further, we find that increased predictability of stimulus order correlates with smaller regions of BOLD activation, a phenomenon we term prediction suppression. Our results provide new insights into the processing of ordinal stimuli, and suggest a new anatomical framework for understanding the patterns seen in synesthesia, unconscious spatial representation, and SD.

Filed under brain brain activity ordinal sequences predictability semantic dementia neuroscience science

Decision to give a group effort in the brain
Monkeys would probably never agree that it is better to give than to receive, but they do apparently get some reward from giving to another monkey.
During a task in which rhesus macaques had control over whether they or another monkey would receive a squirt of fruit juice, three distinct areas of the brain were found to be involved in weighing benefits to oneself against benefits to the other, according to new research by Duke University researchers.
The team used sensitive electrodes to detect the activity of individual neurons as the animals weighed different scenarios, such as whether to reward themselves, the other monkey or nobody at all. Three areas of the brain were seen to weigh the problem differently depending on the social context of the reward. The research appears Dec. 24 in the journal Nature Neuroscience.
Using a computer screen to allocate juice rewards, the monkeys preferred to reward themselves first and foremost. But they also chose to reward the other monkey when it was either that or nothing for either of them. They also were more likely to give the reward to a monkey they knew over one they didn’t, preferred to give to lower status than higher status monkeys, and had almost no interest in giving the juice to an inanimate object.
Calculating the social aspects of the reward system seems to be a combination of action by two centers involved in calculating all sorts of rewards and a third center that adds the social dimension, according to lead researcher Michael Platt, director of the Duke Institute for Brain Sciences and the Center for Cognitive Neuroscience.
The orbital frontal cortex, right above the eyes, was activated when calculating rewards to the self. The anterior cingulate sulcus in the middle of the top of the brain seemed to calculate giving up a reward. But both centers appear “divorced from social context,” Platt said. A third area, the anterior cingulate gyrus (ACCg), seemed to “care a lot about what happened to the other monkey,” Platt said.
Based on results of various combinations of the reward-giving scenario the monkeys were put through, it would appear that neurons in the ACCg encode both the giving and receiving of rewards, and do so in a remarkably similar way.
The use of single-neuron electrodes to measure the activity of brain areas gives a much more precise picture than brain imaging, Platt said. Even the best imaging available now is “a six-second snapshot of tens of thousands of neurons,” which are typically operating in milliseconds.
What the team has seen happening is consistent with other studies of damaged ACCg regions in which animals lost their typical hesitation about retrieving food when facing social choices. This same region of the brain is active in people when they empathize with someone else.
"Many neurons in the anterior cingulate gyrus (ACCg) respond both when monkeys choose a drink for themselves and when they choose to give a drink to another monkey," Platt said. "One might view these as sort of mirror neurons for the reward system." The region is active as an animal merely watches another animal receiving a reward without having one themselves.
The research is another piece of the puzzle as neuroscientists search for the roots of charity and social behavior in our species and others. There have been two schools of thought about how the social reward system is set up, Platt said. One holds that there is generic circuitry for rewards that has been adapted to our social behavior because it helped humans and other social animals like monkeys thrive. Another school holds that social behavior is so important to humans and other highly social animals like monkeys that there may be some special circuits for it, Platt said.
This finding, in macaques that have only a very distant common ancestor with us and are “not a particularly prosocial animal,” suggests that “this specialized social circuitry evolved a long time ago presumably to support cooperative behavior,” Platt said.
(Photo: EPA)

Filed under brain orbital frontal cortex reward system primates social behavior neuroscience science

Neuroscientists find excessive protein synthesis linked to autistic-like behaviors

Autistic-like behaviors can be partially remedied by normalizing excessive levels of protein synthesis in the brain, a team of researchers has found in a study of laboratory mice. The findings, which appear in the latest issue of Nature, provide a pathway to the creation of pharmaceuticals aimed at treating autism spectrum disorders (ASD) that are associated with diminished social interaction skills, impaired communication ability, and repetitive behaviors.

"The creation of a drug to address ASD will be difficult, but these findings offer a potential route to get there," said Eric Klann, a professor at NYU’s Center for Neural Science and the study’s senior author. "We have not only confirmed a common link for several such disorders, but also have raised the exciting possibility that the behavioral afflictions of those individuals with ASD can be addressed."

The study’s other co-authors included researchers from the University of California, San Francisco (UCSF) and three French institutions: Aix-Marseille Université; Institut National de la Santé et de la Recherche Médicale (INSERM); and Le Centre National de la Recherche Scientifique (CNRS).

The researchers focused on the EIF4E gene, mutations in which are associated with autism. The autism-associated mutation was proposed to increase levels of eIF4E, the gene’s protein product, leading to exaggerated protein synthesis. Excessive eIF4E signaling and exaggerated protein synthesis may also play a role in a range of neurological disorders, including fragile X syndrome (FXS).

In their experiments, the researchers examined mice with increased levels of eIF4E. They found that these mice had exaggerated levels of protein synthesis in the brain and exhibited behaviors similar to those found in autistic individuals—repetitive behaviors, such as repeatedly burying marbles, diminished social interaction (the study monitored interactions with other mice), and behavioral inflexibility (the afflicted mice were unable to navigate mazes that had been slightly altered from ones they had previously solved). The researchers also found altered communication between neurons in brain regions linked to the abnormal behaviors.

To remedy these autistic-like behaviors, the researchers then tested a drug, 4EGI-1, which diminishes protein synthesis induced by increased levels of eIF4E. They hypothesized that the drug could return the afflicted mice’s protein production to normal levels and, with it, reverse the autistic-like behaviors.

The subsequent experiments confirmed their hypotheses. The mice were less likely to engage in repetitive behaviors, more likely to interact with other mice, and were successful in navigating mazes that differed from those they previously solved, thereby showing enhanced behavioral flexibility. Additional investigation revealed that these changes were likely due to a reduction in protein production—the levels of newly synthesized proteins in the brains of these mice were similar to those of normal mice.

"These findings highlight an invaluable mouse model for autism in which many drugs that target eIF4E can be tested," added co-author Davide Ruggero, an associate professor at UCSF’s School of Medicine and Department of Urology. "These include novel compounds that we are developing to target eIF4E hyperactivation in cancer that may also be potentially therapeutic for autistic patients."

(Source: eurekalert.org)

Filed under autism ASD fragile x syndrome protein synthesis neuroscience science

The Top 5 Neuroscience Breakthroughs of 2012

More than any year before, 2012 was the year neuroscience exploded into pop culture. From mind-controlled robot hands to cyborg animals to TV specials to triumphant books, brain breakthroughs were tearing up the airwaves and the internets. From all the thrilling neurological adventures we covered over the past year, we’ve collected five stories we want to make absolutely sure you didn’t miss.

A Roadmap of Brain Wiring

Neuroscientists like to compare the task of unraveling the brain’s connections to the frustration of untangling the cords beneath your computer desk – except that in the brain, there are hundreds of millions of cords, and at least one hundred trillion plugs. Even with our most advanced computers, some researchers were despairing of ever seeing a complete connectivity map of the human brain in our lifetimes. But thanks to a team led by Van Wedeen at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, 2012 gave us an unexpectedly clear glimpse of our brains’ large-scale wiring patterns. As it turns out, the overall pattern isn’t so much a tangle as a fabric – an intricate, multi-layered grid of cross-hatched neural highways. What’s more, it looks like our brains share this grid pattern with many other species. We’re still a long way from decoding how most of this wiring functions, but this is a big step in the right direction.
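The scale behind those “cords” and “plugs” is worth a quick back-of-envelope check. The figures below are rough consensus estimates (roughly 86 billion neurons, each with on the order of 1,000 to 10,000 synapses), not numbers from Wedeen’s study itself:

```python
# Back-of-envelope estimate of total synapse count in a human brain.
# Both inputs are assumed round numbers, not measurements.
neurons = 86e9                      # ~86 billion neurons (common estimate)
synapses_low, synapses_high = 1e3, 1e4  # synapses per neuron, rough range

low = neurons * synapses_low        # lower bound on total synapses
high = neurons * synapses_high      # upper bound on total synapses
print(f"Estimated synapses: {low:.1e} to {high:.1e}")
```

Even the low end of that range is close to the “one hundred trillion plugs” (10^14) mentioned above, which is why a complete human connectome remains such a daunting mapping problem.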

Laser-Controlled Desire

Scientists have been stimulating rats’ pleasure centers since the 1950s – but 2012 saw the widespread adoption of a new brain-stimulation method that makes all those wires and incisions look positively crude. Researchers in the blossoming field of optogenetics develop delicate devices that control the firing of targeted groups of neurons – using only light itself. By hooking rats up to a tiny fiber-optic cable and firing lasers directly into their brains, a team led by Garret D. Stuber at the University of North Carolina at Chapel Hill School of Medicine were able to isolate specific neurochemical shifts that cause rats to feel pleasure or anxiety – and switch between them at will. This method isn’t only more precise than electrical stimulation – it’s also much less damaging to the animals.

Programmable Brain Cells

Pluripotent stem cell research took off like a rocket in 2012. After discovering that skin cells can be genetically reprogrammed into stem cells, which can in turn be reprogrammed into just about any cell in the human body, a team led by Sheng Ding at UCSF managed to engineer a working network of newborn neurons from a harvest of old skin cells. In other words, the team didn’t just convert skin cells into stem cells, then into neurons – they actually kept the batch of neurons alive and functional long enough to self-organize into a primitive neural network. In the near future, it’s likely that we’ll be treating many kinds of brain injuries by growing brand-new neurons from other kinds of cells in a patient’s own body. This is already close on the horizon for liver and heart cells – but the thought of being able to technologically shape the re-growth of a damaged brain is even more exciting.

Memories on Disc

We’ve talked a lot about how easily our brains can modify and rewrite our long-term memories of facts and scenarios. In 2012, though, researchers went Full Mad Scientist with the implications of this knowledge, and blew some mouse minds in the process. One team, led by Mark Mayford of the Scripps Research Institute, took advantage of some recently invented technology that enables scientists to record and store a mouse’s memory of a familiar place on a microchip. Mayford’s team figured out how to turn specific mouse memories on and off with the flick of a switch – but they were just getting warmed up. The researchers then proceeded to record a memory in one mouse’s brain, transfer it into another mouse’s nervous system, and activate it in conjunction with one of the second mouse’s own memories. The result was a bizarre “hybrid memory” – familiarity with a place the mouse had never visited. Well, not in the flesh, anyway.

Videos of Thoughts

Our most exciting neuroscience discovery of 2012 is also one of the most controversial. A team of researchers from the Gallant lab at UC Berkeley discovered a way to reconstruct videos of entire scenes from neural activity in a person’s visual cortex. Those on the cautionary side emphasize that activity in the visual cortex is fairly easy to decode (relatively speaking, of course) and that we’re still a long, long way from decoding videos of imaginary voyages or emotional palettes. In fact, from one perspective, this isn’t much different from converting one file format into another. On the other hand, though, these videos offer the first hints of the technological reality our children may inhabit: A world where the boundaries between the objective external world and our individual subjective experiences are gradually blurred and broken down. When it comes to transforming our relationship with our own consciousness – and those of the people around us – it doesn’t get much more profound than that.

Filed under brain breakthroughs neuroscience 2012 neuroscience science

Human Intelligence Secrets Revealed by Chimp Brains
Despite sharing 98 percent of our DNA with chimpanzees, humans have much bigger brains and are, as a species, much more intelligent. Now a new study sheds light on why: Unlike chimps, humans undergo a massive explosion in white matter growth, or the connections between brain cells, in the first two years of life.
The new results, published in the Proceedings of the Royal Society B, partly explain why humans are so much brainier than our nearest living relatives. But they also reveal why the first two years of life play such a key role in human development.
"What’s really unique about us is that our brains experience rapid establishment of connectivity in the first two years of life," said Chet Sherwood, an evolutionary neuroscientist at George Washington University, who was not involved in the study. "That probably helps to explain why those first few years of human life are so critical to set us on the course to language acquisition, cultural knowledge and all those things that make us human."
Chimpanzees
While past studies have shown that human brains go through a rapid expansion in connectivity, it wasn’t clear whether this was unique among great apes (a group that includes chimps, gorillas, orangutans and humans). To confirm it as the signature of humanity’s superior intelligence, researchers needed to show that it differs from brain development in our closest living relatives.
However, a U.S. moratorium on acquiring new chimpanzees for medical research meant that people like Sherwood, who is trying to understand chimpanzee brain development, had to study decades-old baby chimpanzee brains that were lying around in veterinary pathologists’ labs, Sherwood told LiveScience.
But in Japan, those limitations took effect later, allowing the researchers to do live magnetic resonance imaging (MRI) brain scans of three baby chimps as they grew to 6 years of age. They then compared the data with existing brain-imaging scans for six macaques and 28 Japanese children.
The researchers found that chimpanzees and humans both had much more brain development in early life than macaques.
"The increase in total cerebral volume during early infancy and the juvenile stage in chimpanzees and humans was approximately three times greater than that in macaques," the researchers wrote in the journal article.
But human brains expanded much more dramatically than chimpanzee brains during the first few years of life, and most of that human-brain expansion was driven by explosive growth in the connections between brain cells, which shows up as an increase in white matter. Chimpanzee brain volume grew only about half as much as humans’ during that period.
The findings, while not unexpected, are unique because the researchers followed the same individual chimpanzees over time; past studies have instead pieced together brain development from scans on several apes of different ages, Sherwood said.
The explosion in white matter may also explain why experiences during the first few years of life can greatly affect children’s IQ, social life and long-term response to stress.
"That opens an opportunity for environment and social experience to influence the molding of connectivity," Sherwood said.


Filed under brain development evolution primates cerebral tissue white matter neuroscience science

144 notes

Mistaking OCD for ADHD Has Serious Consequences
On the surface, obsessive compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD) appear very similar, with impaired attention, memory, or behavioral control. But Prof. Reuven Dar of Tel Aviv University’s School of Psychological Sciences argues that these two neuropsychological disorders have very different roots — and there are enormous consequences if they are mistaken for each other.
Prof. Dar and fellow researcher Dr. Amitai Abramovitch, who completed his PhD under Prof. Dar’s supervision, have determined that despite appearances, OCD and ADHD are far more different than alike. While groups of both OCD and ADHD patients were found to have difficulty controlling their abnormal impulses in a laboratory setting, only the ADHD group had significant problems with these impulses in the real world.
According to Prof. Dar, this shows that while OCD and ADHD may appear similar on a behavioral level, the mechanism behind the two disorders differs greatly. People with ADHD are impulsive risk-takers, rarely reflecting on the consequences of their actions. In contrast, people with OCD are all too concerned with consequences, causing hesitancy, difficulty in decision-making, and the tendency to over-control and over-plan.
Their findings, published in the Journal of Neuropsychology, draw a clear distinction between OCD and ADHD and provide more accurate guidelines for correct diagnosis. Confusing the two threatens successful patient care, warns Prof. Dar, noting that treatment plans for the two disorders can differ dramatically: Ritalin, a psychostimulant commonly prescribed to ADHD patients, will only exacerbate the symptoms of a patient who actually has OCD.


Filed under ADHD OCD frontostriatal hypoactivity hyperactivity psychology neuroscience science

147 notes

Researchers report progress in quest to create objective method of detecting pain
A method of analyzing brain structure using advanced computer algorithms accurately predicted 76 percent of the time whether a patient had lower back pain in a new study by researchers from the Stanford University School of Medicine.
The study, published online Dec. 17 in Cerebral Cortex, reported that using these algorithms to read brain scans may be an early step toward providing an objective method for diagnosing chronic pain.
“People have been looking for an objective pain detector — a ‘pain scanner’ — for a long time,” said Sean Mackey, MD, PhD, chief of the Division of Pain Medicine and professor of anesthesiology, pain and perioperative medicine, and of neurosciences and neurology. “We’re still a long way from that, but this method may someday augment self-reporting as the primary way of determining whether a patient is in chronic pain.”
The need for a better way to objectively measure pain instead of relying solely on self-reporting has long been acknowledged. But the highly subjective nature of pain has made this an elusive goal. Advances in neuroimaging techniques have initiated a debate over whether this may be possible. Such a tool would be particularly useful in treating very young or very old patients or others who have difficulty communicating, Mackey said.
In a study published last year in PLoS ONE, Mackey and colleagues used computer algorithms to analyze magnetic resonance imaging scans of the brain to accurately measure thermal pain in research subjects 81 percent of the time. But the question remained whether this could be a successful method for measuring chronic pain.
The goal of the new study was to accurately identify patients with lower back pain vs. healthy individuals on the basis of structural changes to the brain, and also to investigate possible pathological differences across the brain.
Researchers conducted MRI scans of 47 subjects who had lower back pain and 47 healthy subjects. Both groups were screened for medication use and mood disorders. The average age was 37.
The idea was to “train” a linear support vector machine — a computer algorithm invented in 1995 — on one set of individuals, and then use that computer model to accurately read the brain scans and classify pain in a completely new set of individuals.
The method successfully predicted the patients with lower back pain 76 percent of the time.
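The train-then-test procedure described above can be sketched with synthetic data. Nothing below comes from the Stanford study: the "gray-matter" features, group sizes, and effect are all simulated, and the split sizes are arbitrary; the point is only to show a linear support vector machine trained on one set of subjects and scored on a completely new set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

# Hypothetical data: 94 subjects (47 pain patients, 47 controls), each
# described by 100 structural-MRI features. Patients are simulated with
# a small mean shift in a handful of "regions".
n_per_group, n_features = 47, 100
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients[:, :10] += 0.8               # subtle structural difference

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Train on one set of subjects, then classify completely new subjects.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10_000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out classification accuracy: {acc:.2f}")
```

Scoring on held-out subjects, rather than the subjects the model was trained on, is what makes a figure like the study's 76 percent meaningful.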
“Lower back pain is the most common chronic condition we deal with,” Mackey said. “In many cases, we don’t understand the cause. What we have learned is that the problem may not be in the back, but in the amplification coming from the back to the brain and nervous system. In this study, we did identify brain regions we think are playing a role in this phenomenon.”


Filed under pain chronic pain pain detection neuroimaging computer algorithms lower back pain neuroscience science

564 notes

Bullying by childhood peers leaves a trace that can change the expression of a gene linked to mood
A recent study by a researcher at the Centre for Studies on Human Stress (CSHS) at the Hôpital Louis-H. Lafontaine and professor at the Université de Montréal suggests that bullying by peers changes the structure surrounding a gene involved in regulating mood, making victims more vulnerable to mental health problems as they age. The study published in the journal Psychological Medicine seeks to better understand the mechanisms that explain how difficult experiences disrupt our response to stressful situations. “Many people think that our genes are immutable; however this study suggests that environment, even the social environment, can affect their functioning. This is particularly the case for victimization experiences in childhood, which change not only our stress response but also the functioning of genes involved in mood regulation,” says Isabelle Ouellet-Morin, lead author of the study.
A previous study by Ouellet-Morin, conducted at the Institute of Psychiatry in London (UK), showed that bullied children secreted less cortisol—the stress hormone—but had more problems with social interaction and aggressive behaviour. The present study indicates that the reduction of cortisol, which occurs around the age of 12, is preceded two years earlier by a change in the structure surrounding a gene (SERT) that regulates serotonin, a neurotransmitter involved in mood regulation and depression.
To achieve these results, 28 pairs of identical twins with a mean age of 10 years were analyzed separately according to their experiences of bullying by peers: one twin had been bullied at school while the other had not. “Since they were identical twins living in the same conditions, changes in the chemical structure surrounding the gene cannot be explained by genetics or family environment. Our results suggest that victimization experiences are the source of these changes,” says Ouellet-Morin. According to the author, it would now be worthwhile to evaluate the possibility of reversing these psychological effects, in particular, through interventions at school and support for victims.
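The logic of the discordant-twin design can be sketched numerically. The data below are entirely made up (the study did not publish per-pair values here): one hypothetical methylation measure near the SERT gene per child, for 28 identical twin pairs in which one co-twin was bullied and the other was not. Because twins share genes and family environment, comparing within pairs isolates the bullying-associated change.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical methylation values for 28 discordant identical twin
# pairs. Each pair shares a baseline (genetics + home environment);
# the bullied co-twin gets a simulated extra shift.
n_pairs = 28
pair_baseline = rng.normal(50, 5, size=n_pairs)
non_bullied = pair_baseline + rng.normal(0, 2, size=n_pairs)
bullied = pair_baseline + 3.0 + rng.normal(0, 2, size=n_pairs)

# Paired test on within-pair differences: the shared baseline cancels,
# so any systematic difference is attributable to the experience.
t_stat, p_value = stats.ttest_rel(bullied, non_bullied)
diff = np.mean(bullied - non_bullied)
print(f"mean within-pair difference: {diff:.2f}, p = {p_value:.4g}")
```

Note how the large between-pair variation (the baseline) never enters the comparison; that cancellation is the whole point of studying identical twins raised together.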
(Image: mentalhealthsupport.co.uk)


Filed under bullying childhood gene expression mental health mood regulation stress response psychology neuroscience science
