Neuroscience

Articles and news from the latest research reports.

Are Thoughts of Death Conducive to Humor?

A New Study Shows an Increase in Humorous Creativity when Individuals are Primed with Thoughts of Death.

Humor is an intrinsic part of human experience. It plays a role in nearly every aspect of human life, from day-to-day conversation to television shows. Yet little research has been conducted to date on the psychological function of humor. Awareness of the impermanence of life is an equally pervasive feature of human psychology. According to Terror Management Theory, knowledge of one’s own mortality creates potentially disruptive existential anxiety, which the individual keeps in check with two coping mechanisms, or anxiety buffers: rigid adherence to dominant cultural values and the bolstering of self-esteem.

A new article by Christopher R. Long of Ouachita Baptist University and Dara Greenwood of Vassar College, titled “Joking in the Face of Death: A Terror Management Approach to Humor Production” and appearing in the journal HUMOR, documents research on whether activating thoughts of death influences one’s ability to generate humor creatively. Because humor serves a variety of fundamental purposes, including psychological defense against anxiety, the authors hypothesized that activating thoughts of death could facilitate the production of humor.

For their study, Long and Greenwood divided 117 students into four experimental groups, each confronted with the topic of pain or death while completing various tasks. Two of the groups were subliminally exposed to a word flashed for 33 milliseconds on a computer screen while they completed tasks – the first to the word “pain,” the second to the word “death.” The remaining two groups were prompted in a writing task to express their emotions concerning either their own death or a painful visit to the dentist. Afterward, all four groups were asked to supply a caption for a cartoon from The New Yorker.

These cartoon captions were then rated by an independent jury that knew nothing about the experiment. The captions written by participants subliminally primed with the word “death” were judged clearly funnier by the jury. By contrast, the opposite result emerged for the students who had consciously written about death: their captions were rated as less humorous.

Based on this experiment, the researchers conclude that humor helps the individual to tolerate latent anxiety that may otherwise be destabilizing. In this connection, they point to previous studies indicating that humor is an integral component of resilience.

In light of the finding that the activation of conscious thoughts concerning death impaired the creative generation of humor, Long and Greenwood highlight the need for additional research, not only to explore the effectiveness of humor as a coping mechanism under various circumstances, but also to identify its emotional, cognitive, and/or social benefits under conditions of adversity.

Filed under humor humorous creativity creativity terror management mortality psychology neuroscience science

Semantics on the basis of words’ connectivity

It is now possible to identify the meaning of words with multiple meanings, without using their semantic context

Two Brazilian physicists have devised a method to automatically elucidate the meaning of words with several senses, based solely on their patterns of connectivity with nearby words in a given sentence – not on semantics. Thiago Silva and Diego Amancio of the University of São Paulo, Brazil, reveal in a paper about to be published in EPJ B how they modelled classic texts as complex networks in order to derive word meanings. Models of this type play a key role in several natural language processing tasks, such as machine translation, information retrieval, content analysis and text processing.

In this study, the authors chose a set of ten so-called polysemous words—words with multiple meanings—such as bear, jam, just, rock or present. They then examined these words’ patterns of connectivity with nearby words in literary classics such as Jane Austen’s Pride and Prejudice. Specifically, they built a model consisting of a set of nodes representing words, with two nodes connected by an edge whenever the corresponding words are adjacent in the text.

The authors then compared the results of their disambiguation exercise with the traditional semantics-based approach, observing significant accuracy in identifying the appropriate meanings with both techniques. The approach described in this study, based on a so-called deterministic tourist walk characterisation, can therefore be considered a complementary methodology for distinguishing between word senses.

In future work, the authors plan to devise new measures that connect not only adjacent words but also words within a given interval, in order to enhance the model’s ability to capture semantic factors. This direction is supported by another recent study by the same authors showing that traditional complex network measures mainly reflect syntax.
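The authors’ actual method characterises such networks with a deterministic tourist walk; purely as a minimal sketch of the underlying structure they describe, the snippet below builds the word-adjacency network itself (the toy corpus and naive whitespace tokenisation are invented for illustration):

```python
from collections import defaultdict

def build_adjacency_network(tokens):
    """Build an undirected word network: nodes are distinct words,
    and two words share an edge whenever they occur adjacently."""
    edges = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        if a != b:
            edges[a].add(b)
            edges[b].add(a)
    return edges

# Toy corpus in which the polysemous word "rock" appears in two
# senses; its network neighborhood reflects both usages.
text = "the band played rock music while we climbed the rock face"
network = build_adjacency_network(text.split())

# The connectivity pattern around "rock" is the raw structural
# signal that a semantics-free disambiguation method exploits.
print(sorted(network["rock"]))
```

Note that this captures only the adjacency structure; distinguishing the two senses of “rock” would require analysing walks or other topological measures over this network, as in the paper.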

(Source: springer.com)

Filed under language semantics complex networks learning techniques tourist walk neuroscience science

How the brain creates the ‘buzz’ that helps ideas spread

How do ideas spread? What messages will go viral on social media, and can this be predicted?

UCLA psychologists have taken a significant step toward answering these questions, identifying for the first time the brain regions associated with the successful spread of ideas, often called “buzz.”

The research has a broad range of implications, the study authors say, and could lead to more effective public health campaigns, more persuasive advertisements and better ways for teachers to communicate with students.

"Our study suggests that people are regularly attuned to how the things they’re seeing will be useful and interesting, not just to themselves but to other people," said the study’s senior author, Matthew Lieberman, a UCLA professor of psychology and of psychiatry and biobehavioral sciences and author of the forthcoming book "Social: Why Our Brains Are Wired to Connect." "We always seem to be on the lookout for who else will find this helpful, amusing or interesting, and our brain data are showing evidence of that. At the first encounter with information, people are already using the brain network involved in thinking about how this can be interesting to other people. We’re wired to want to share information with other people. I think that is a profound statement about the social nature of our minds."

The study findings are published in the online edition of the journal Psychological Science, with print publication to follow later this summer.

"Before this study, we didn’t know what brain regions were associated with ideas that become contagious, and we didn’t know what regions were associated with being an effective communicator of ideas," said lead author Emily Falk, who conducted the research as a UCLA doctoral student in Lieberman’s lab and is currently a faculty member at the University of Pennsylvania’s Annenberg School for Communication. "Now we have mapped the brain regions associated with ideas that are likely to be contagious and are associated with being a good ‘idea salesperson.’ In the future, we would like to be able to use these brain maps to forecast what ideas are likely to be successful and who is likely to be effective at spreading them."

In the first part of the study, 19 UCLA students (average age 21) underwent functional magnetic resonance imaging (fMRI) brain scans at UCLA’s Ahmanson–Lovelace Brain Mapping Center as they saw and heard information about 24 potential television pilot ideas. Among the fictitious pilots — which were presented by a separate group of students — were a show about former beauty-queen mothers who want their daughters to follow in their footsteps; a Spanish soap opera about a young woman and her relationships; a reality show in which contestants travel to countries with harsh environments; a program about teenage vampires and werewolves; and a show about best friends and rivals in a crime family.

The students exposed to these TV pilot ideas were asked to envision themselves as television studio interns who would decide whether or not they would recommend each idea to their “producers.” These students made videotaped assessments of each pilot.

Another group of 79 UCLA undergraduates (average age 21) was asked to act as the “producers.” These students watched the interns’ video assessments of the pilots and then made their own ratings of the pilot ideas based on those assessments.

Lieberman and Falk wanted to learn which brain regions were activated when the interns were first exposed to information they would later pass on to others.

"We’re constantly being exposed to information on Facebook, Twitter and so on," said Lieberman. "Some of it we pass on, and a lot of it we don’t. Is there something that happens in the moment we first see it — maybe before we even realize we might pass it on — that is different for those things that we will pass on successfully versus those that we won’t?"

It turns out, there is. The psychologists found that the interns who were especially good at persuading the producers showed significantly more activation in a brain region known as the temporoparietal junction, or TPJ, at the time they were first exposed to the pilot ideas they would later recommend. They had more activation in this region than the interns who were less persuasive and more activation than they themselves had when exposed to pilot ideas they didn’t like. The psychologists call this the “salesperson effect.”

"It was the only region in the brain that showed this effect," Lieberman said. One might have thought brain regions associated with memory would show more activation, but that was not the case, he said.

"We wanted to explore what differentiates ideas that bomb from ideas that go viral," Falk said. "We found that increased activity in the TPJ was associated with an increased ability to convince others to get on board with their favorite ideas. Nobody had looked before at which brain regions are associated with the successful spread of ideas. You might expect people to be most enthusiastic and opinionated about ideas that they themselves are excited about, but our research suggests that’s not the whole story. Thinking about what appeals to others may be even more important."

The TPJ, located on the outer surface of the brain, is part of what is known as the brain’s “mentalizing network,” which is involved in thinking about what other people think and feel. The network also includes the dorsomedial prefrontal cortex, located in the middle of the brain.

"When we read fiction or watch a movie, we’re entering the minds of the characters — that’s mentalizing," Lieberman said. "As soon as you hear a good joke, you think, ‘Who can I tell this to and who can’t I tell?’ Making this judgment will activate these two brain regions. If we’re playing poker and I’m trying to figure out if you’re bluffing, that’s going to invoke this network. And when I see someone on Capitol Hill testifying and I’m thinking whether they are lying or telling the truth, that’s going to invoke these two brain regions.

"Good ideas turn on the mentalizing system," he said. "They make us want to tell other people."

The interns who showed more activity in their mentalizing system when they saw the pilots they intended to recommend were then more successful in convincing the producers to also recommend those pilots, the psychologists found.

"As I’m looking at an idea, I might be thinking about what other people are likely to value, and that might make me a better idea salesperson later," Falk said.

By further studying the neural activity in these brain regions to see what information and ideas activate these regions more, psychologists potentially could predict which advertisements are most likely to spread and go viral and which will be most effective, Lieberman and Falk said.

Such knowledge could also benefit public health campaigns aimed at everything from reducing risky behaviors among teenagers to combating cancer, smoking and obesity.

"The explosion of new communication technologies, combined with novel analytic tools, promises to dramatically expand our understanding of how ideas spread," Falk said. "We’re laying basic science foundations to addressimportant public health questions that are difficult to answer otherwise — about what makes campaigns successful and how we can improve their impact."

As we may like particular radio DJs who play music we enjoy, the Internet has led us to act as “information DJs” who share things that we think will be of interest to people in our networks, Lieberman said.

"What is new about our study is the finding that the mentalizing network is involved when I read something and decide who else might be interested in it," he said. "This is similar to what an advertiser has to do. It’s not enough to have a product that people should like."

Filed under brain mapping dorsomedial prefrontal cortex temporoparietal junction psychology neuroscience science

Researchers Create 15-Million-Year Model Of Great Ape History
By studying genetic variation in a large panel of humans, chimpanzees, gorillas and orangutans, researchers from the Universitat Pompeu Fabra in Barcelona, Spain, and the University of Washington in Seattle have created a model of great ape history over the past 15 million years.
This is the most comprehensive catalog of great ape genetic diversity. The catalog elucidates the evolution and population histories of great apes from Africa and Indonesia. The research team hopes the catalog will also help current and future conservation efforts that strive to preserve natural genetic diversity in populations.
An international group of more than 75 scientists and wildlife conservationists worked on the genetic analysis of 79 wild and captive-born great apes. The group of great apes represents all six great ape species: chimpanzee, bonobo, Sumatran orangutan, Bornean orangutan, eastern gorilla and western lowland gorilla; as well as seven subspecies. The study, published in Nature, also included nine human genomes.
“The research provided us the deepest survey to date of great ape genetic diversity with evolutionary insights into the divergence and emergence of great-ape species,” noted Evan Eichler, a UW professor of genome sciences and a Howard Hughes Medical Institute Investigator.
Due to the difficulty in obtaining genetic specimens from wild apes, genetic variation among great apes had been largely uncharted prior to this study. The research team credits the many conservationists in various countries, many of them in dangerous or isolated locations, with the success of the project.
Peter H. Sudmant, a UW graduate student in genome sciences, said, “Gathering this data is critical to understanding differences between great ape species, and separating aspects of the genetic code that distinguish humans from other primates.”
Analysis of great ape genetic diversity is likely to reveal the factors that shaped primate evolution, including natural selection, population growth and collapse, geographic isolation, migration, and climate and geological change.
Understanding more about great ape genetic diversity, according to Sudmant, also contributes to knowledge about disease susceptibility among various primate species. This knowledge is important both to conservation efforts and to human health. For example, the Ebola virus is responsible for thousands of chimpanzee and gorilla deaths in Africa, and HIV in humans originated from simian immunodeficiency virus (SIV), which is found in non-human primates.
“Because the way we think, communicate and act is what makes us distinctively human,” Sudmant, who works in a lab that studies both primate evolutionary biology and neuropsychiatric diseases such as autism, schizophrenia, developmental delay, and cognitive and behavioral disorders, said, “we are specifically looking for the genetic differences between humans and other great apes that might confer these traits.”
The differences between species may direct scientists to portions of the human genome associated with cognition, speech or behavior. This could provide clues to which mutations might underlie neurological disease.
The research team published a companion paper in Genome Research in which they report the first genetic evidence of a disorder in chimpanzees resembling Smith-Magenis syndrome, a disabling physical, mental and behavioral condition in humans. The veterinary records of Suzie-A, the chimpanzee exhibiting the disorder, match the human symptoms of Smith-Magenis almost exactly: Suzie-A was overweight, prone to rage, had a curved spine and died of kidney failure.
The discovery of Suzie-A’s syndrome came about while the scientists were exploring and comparing the accumulation of copy number variants (differences between individuals, populations or species in the number of times specific segments of DNA appear) during great ape evolution. The genomes of humans and great apes have been restructured by the duplication and deletion of DNA segments, which also underlie many genetic diseases.
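In real studies, copy numbers are inferred from sequencing read depth across large genomic segments; purely as a toy illustration of what a copy number variant is (all sequence strings below are invented):

```python
def copy_number(genome, segment):
    """Count occurrences of a DNA segment within a genome sequence
    (non-overlapping matches, as str.count provides)."""
    return genome.count(segment)

# Hypothetical sequences: individual B carries one extra copy of the
# segment relative to individual A, i.e. a copy number variant.
segment = "GATTACA"
individual_a = "TTGATTACACCGATTACAGG"
individual_b = "TTGATTACACCGATTACAGGGATTACA"

print(copy_number(individual_a, segment))  # 2 copies
print(copy_number(individual_b, segment))  # 3 copies
```

The same duplication-and-deletion logic, applied to segments thousands of bases long, is what restructured the great ape genomes described above.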
The new catalog of genetic diversity will help address the challenging plight of great ape species on the brink of extinction, in addition to offering a view of the origins of humans and their disorders. It will also provide an important tool to allow biologists to identify the origin of great apes poached for their body parts or hunted for bush meat. The study also explains why current zoo breeding programs that have tried to increase the genetic diversity of their captive great ape populations have resulted in populations that are genetically dissimilar to their wild counterparts.
“By avoiding inbreeding to produce a diverse population, zoos and conservation groups may be entirely eroding genetic signals specific to certain populations in specific geographic locations in the wild,” Sudmant said.
Donald, one of the captive-bred apes studied by the team, had a genetic makeup drawn from two distinct chimpanzee subspecies whose wild ranges lie about 1,250 miles apart.
The variety of changes that occurred along each of the ape lineages, as they separated from each other through migration, geological change and climate events, are delineated in the study findings. Natural disturbances such as the formation of rivers and the partition of islands from the mainland have all served to isolate groups of apes. These isolated populations are exposed to a unique set of environmental pressures that result in population fluctuations and adaptations, depending on the circumstances.
The ancestors of some present-day apes coexisted with early human-like species. The researchers found, however, that the evolutionary history of ancestral great ape populations was far more complex than that of humans. Human history appears “almost boring,” according to Sudmant and Eichler, when compared with that of our closest relatives, the chimpanzees. The last few million years of chimp evolution, for example, are full of population explosions followed by implosions. These rapid fluctuations demonstrate remarkable plasticity, and scientists still do not understand what drove changes in chimpanzee population size long before our own population explosion.
Sudmant’s interest in studying and preserving the great apes stems from the similarities of the great apes to humans.
“If you look at a chimpanzee or a gorilla, those guys will look right back at you,” he said. “They act just like us. We need to find ways to protect these precious species from extinction.”

Researchers Create 15-Million-Year Model Of Great Ape History

Using the study of genetic variation in a large panel of humans, chimpanzees, gorillas and orangutans, researchers from the Universitat Pompeu Fabra in Barcelona, Spain, and Washington University in Seattle have created a model of great ape history over the past 15 million years.

This is the most comprehensive catalog of great ape genetic diversity. The catalog elucidates the evolution and population histories of great apes from Africa and Indonesia. The research team hopes the catalog will also help current and future conservation efforts that strive to preserve natural genetic diversity in populations.

An international group of more than 75 scientists and wildlife conservationists worked on the genetic analysis of 79 wild and captive-born great apes. The group of great apes represents all six great ape species: chimpanzee, bonobo, Sumatran orangutan, Bornean orangutan, eastern gorilla and western lowland gorilla; as well as seven subspecies. The study, published in Nature, also included nine human genomes.

“The research provided us the deepest survey to date of great ape genetic diversity with evolutionary insights into the divergence and emergence of great-ape species,” noted Evan Eichler, a UW professor of genome sciences and a Howard Hughes Medical Institute Investigator.

Due to the difficulty in obtaining genetic specimens from wild apes, genetic variation among great apes had been largely uncharted prior to this study. The research team credits the many conservationists in various countries, many of them in dangerous or isolated locations, with the success of the project.

Peter H. Sudmant, a UW graduate student in genome sciences, said, “Gathering this data is critical to understanding differences between great ape species, and separating aspects of the genetic code that distinguish humans from other primates.”

Analyzing great ape genetic diversity is likely to reveal the factors that shaped primate evolution, including natural selection, population growth and collapse, geographic isolation and migration, and climate and geological change.

Understanding more about great ape genetic diversity, according to Sudmant, also contributes to knowledge about disease susceptibility among various primate species. This knowledge is important both to conservation efforts and to human health. For example, the Ebola virus is responsible for thousands of chimp and gorilla deaths in Africa, and HIV in humans originated from simian immunodeficiency virus (SIV), which is found in non-human primates.

“Because the way we think, communicate and act is what makes us distinctively human,” said Sudmant, “we are specifically looking for the genetic differences between humans and other great apes that might confer these traits.” He works in a lab that studies both primate evolutionary biology and neuropsychiatric conditions such as autism, schizophrenia, developmental delay, and cognitive and behavioral disorders.

The differences between species may direct scientists to portions of the human genome associated with cognition, speech or behavior. This could provide clues to which mutations might underlie neurological disease.

The research team published a companion paper in Genome Research, in which they found the first genetic evidence of a disorder in chimpanzees that resembles Smith-Magenis syndrome, a disabling physical, mental and behavioral condition in humans. The veterinary records of Suzie-A, the chimpanzee exhibiting the disorder, match the human symptoms of Smith-Magenis almost exactly: Suzie-A was overweight and rage-prone, had a curved spine, and died of kidney failure.

The discovery of Suzie-A’s syndrome came about while the scientists were exploring and comparing the accumulation of copy number variants during great ape evolution. Copy number variants are differences between individuals, populations or species in the number of times specific segments of DNA appear. The genomes of humans and great apes have been restructured by the duplication and deletion of DNA segments, processes that also underlie many genetic diseases.
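
To make the idea concrete, here is a toy illustration of a copy number variant: the same DNA segment present a different number of times in two hypothetical sequences. This is a didactic sketch only; real CNV detection works on sequencing read depth, not exact string matching, and the segment and genomes below are invented.

```python
# Toy illustration of a copy number variant (CNV): the same DNA segment
# appears a different number of times in two hypothetical genomes.
# Real studies infer copy number from sequencing read depth; this sketch
# just counts exact, non-overlapping occurrences of a segment.

def copy_number(genome: str, segment: str) -> int:
    """Count non-overlapping occurrences of a segment in a genome string."""
    return genome.count(segment)

segment = "GATTACA"
genome_a = "CC" + segment + "TT" + segment + "AA"            # two copies
genome_b = "CC" + segment + "TT" + segment + "AA" + segment  # three copies

print(copy_number(genome_a, segment))  # 2
print(copy_number(genome_b, segment))  # 3
```

An individual carrying `genome_b` would have a copy number of three at this locus where another has two; accumulated across many loci, such differences are what the team compared between species.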

The new catalog of genetic diversity will help address the challenging plight of great ape species on the brink of extinction, in addition to offering a view of the origins of humans and their disorders. It will also give biologists an important tool for identifying the origin of great apes poached for their body parts or hunted for bush meat. The study also explains how current zoo breeding programs that have tried to increase the genetic diversity of captive great apes have produced populations that are genetically dissimilar to their wild counterparts.

“By avoiding inbreeding to produce a diverse population, zoos and conservation groups may be entirely eroding genetic signals specific to certain populations in specific geographic locations in the wild,” Sudmant said.

Donald, one of the captive-bred apes studied by the team, carried the genetic makeup of two distinct chimpanzee subspecies whose wild ranges lie around 1,250 miles apart.

The study findings delineate the changes that occurred along each ape lineage as the lineages separated from one another through migration, geological change and climate events. Natural disturbances such as the formation of rivers and the partition of islands from the mainland have all served to isolate groups of apes, and these isolated populations were exposed to unique sets of environmental pressures that drove population fluctuations and adaptations, depending on the circumstances.

The ancestors of some present-day apes were alive at the same time as early human-like species. The researchers found, however, that the evolutionary history of the ancestral great ape populations was far more complex than that of humans. Human history appears “almost boring,” according to Sudmant and Eichler, when compared with that of our closest relatives, the chimpanzees. For example, the last few million years of chimp evolution are full of population explosions followed by implosions, rapid fluctuations that demonstrate remarkable plasticity. Scientists still don’t understand the reasons for these swings in chimpanzee population size, which occurred long before our own population explosion.

Sudmant’s interest in studying and preserving the great apes stems from the similarities of the great apes to humans.

“If you look at a chimpanzee or a gorilla, those guys will look right back at you,” he said. “They act just like us. We need to find ways to protect these precious species from extinction.”

Filed under primates great apes evolution genetic variation genetics genomics science

428 notes

First man to hear people before they speak

"I told my daughter her living room TV was out of sync. Then I noticed the kitchen telly was also dubbed badly. Suddenly I noticed that her voice was out of sync too. It wasn’t the TV, it was me."

Ever watched an old movie, only for the sound to go out of sync with the action? Now imagine every voice you hear sounds similarly off-kilter – even your own. That’s the world PH lives in. Soon after surgery for a heart problem, he began to notice that something wasn’t quite right.

"I was staying with my daughter and they like to have the television on in their house. I turned to my daughter and said ‘you ought to get a decent telly, one where the sound and programme are synchronised’. I gave a little chuckle. But they said ‘there’s nothing wrong with the TV’."

Puzzled, he went to the kitchen to make a cup of tea. “They’ve got another telly up on the wall and it was the same. I went into the lounge and I said to her ‘hey you’ve got two TVs that need sorting!’.”

That was when he started to notice that his daughter’s speech was out of time with her lip movements too. “It wasn’t the TV, it was me. It was happening in real life.”

PH is the first confirmed case of someone who hears people speak before registering the movement of their lips. His situation is giving unique insights into how our brains unify what we hear and see.

It’s unclear why PH’s problem started when it did – but it may have had something to do with having acute pericarditis, inflammation of the sac around the heart, or the surgery he had to treat it.

Brain scans after the timing problems appeared showed two lesions in areas thought to play a role in hearing, timing and movement. “Where these came from is anyone’s guess,” says PH. “They may have been there all my life or as a result of being in intensive care.”

Disconcerting delay

Several weeks later, PH realised that it wasn’t just other people who were out of sync: when he spoke, he registered his words before he felt his jaw make the movement. “It felt like a significant delay, it sort of snuck up on me. It was very disconcerting. At the time I didn’t know whether the delay was going to get bigger, but it seems to have stuck at about a quarter of a second.”

Light and sound travel at different speeds, so when someone speaks, visual and auditory inputs arrive at our eyes and ears at different times. The signals are then processed at different rates in the brain. Despite this, we normally perceive the events as happening simultaneously – but how the brain achieves this is unclear.
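
The physical lag is easy to put numbers on. A back-of-the-envelope calculation (a sketch using textbook constants, not part of the study) shows how much later sound arrives than light from a talker at a given distance:

```python
# Back-of-the-envelope: how much later sound arrives than light
# from a talker at a given distance. Constants are approximate.

SPEED_OF_LIGHT = 3.0e8   # metres per second (vacuum; air is close enough here)
SPEED_OF_SOUND = 343.0   # metres per second, in air at about 20 °C

def audio_lag_ms(distance_m: float) -> float:
    """Sound arrival time minus light arrival time, in milliseconds."""
    return (distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT) * 1000

for d in (2, 10, 70):
    print(f"{d:>3} m: sound lags light by {audio_lag_ms(d):.1f} ms")
```

At conversational distances the lag is only a few milliseconds, far smaller than PH’s quarter-second offset; a 200 ms lag would correspond to a talker roughly 70 metres away. Whatever the brain does to bind sight and sound, it must tolerate this range of physical delays.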

To investigate PH’s situation, Elliot Freeman at City University London and colleagues performed a temporal order judgement test. PH was shown clips of people talking and was asked whether the voice came before or after the lip movements. Sure enough, he said it came before, and to perceive them as synchronous the team had to play the voice about 200 milliseconds later than the lip movements.

The team then carried out a second, more objective test based on the McGurk illusion. This involves listening to one syllable while watching someone mouth another; the combination makes you perceive a third syllable.

Since PH hears people speaking before he sees their lips move, the team expected the illusion to work when they delayed the voice. So they were surprised to get the opposite result: presenting the voice 200 ms earlier than the lip movements triggered the illusion, suggesting that his brain was processing the sight before the sound in this particular task.

And it wasn’t only PH who gave these results. When 37 others were tested on both tasks, many showed a similar pattern, though none of the mismatches were noticeable in everyday life.

Many clocks

Freeman says this implies that the same event in the outside world is perceived by different parts of your brain as happening at different times. This suggests that, rather than one unified “now”, there are many clocks in the brain – two of which showed up in the tasks – and that all the clocks measure their individual “nows” relative to their average.
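
Freeman’s proposal can be caricatured in a few lines of code. This is a hypothetical toy model, not anything from the paper: the clock names and latency values below are invented purely to show how two tasks tapping different clocks can report opposite asynchronies around a shared average.

```python
# Toy model of the "many clocks" idea: several neural processes timestamp
# the same external event with different latencies (milliseconds).
# What each task reports is the offset between the two clocks it taps,
# measured relative to the brain-wide average. All numbers are invented.

clocks = {
    "auditory_judgement": 50,   # latencies tapped by the explicit order task
    "visual_judgement": 250,    # slowed, e.g. by a lesion
    "auditory_mcgurk": 240,     # latencies tapped by the McGurk task
    "visual_mcgurk": 40,
}

average = sum(clocks.values()) // len(clocks)  # 145 ms; divides evenly here
relative = {name: latency - average for name, latency in clocks.items()}

# Explicit order task: sound leads sight. McGurk task: the reverse.
order_task = relative["auditory_judgement"] - relative["visual_judgement"]
mcgurk_task = relative["auditory_mcgurk"] - relative["visual_mcgurk"]
print(order_task)   # -200: the voice seems 200 ms early
print(mcgurk_task)  # 200: the illusion needs the voice 200 ms earlier
```

Under these assumed latencies, the same external event yields a 200 ms asynchrony in opposite directions depending on which pair of clocks a task engages, mirroring the paradoxical results from PH’s two tests.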

In PH’s case, one or more of these clocks has been significantly slowed – shifting his average – possibly as a result of the lesions. Freeman thinks PH’s timing discrepancies may be too large and have happened too suddenly for him to ignore or adapt to, resulting in him being aware of the asynchrony in everyday life. He may perceive just one of his clocks because it is the only one he has conscious access to, says Freeman.

PH says that in general he has learned to live with the sensory mismatch but admits he has trouble in noisy places or at large meetings. Since he hears himself speak before he feels his mouth move, does he ever feel like he’s not in control of his own voice? “No, I’m definitely sure it’s me that’s speaking,” he says, “it’s just a strange sensation.”

Help may be at hand: Freeman is looking for a way to slow down PH’s hearing so it matches what he is seeing. PH says he would be happy to trial a treatment, but he’s actually not that anxious to fix the problem. “It’s not life-threatening,” he says. “You learn to live with these things as you get older. I don’t expect my body to work perfectly.”

Filed under brain hearing inflammation lip movements McGurk illusion neuroscience science

137 notes

With Parents’ Help, Preschoolers Can Learn to Pay Attention

Pay attention! Whether it’s listening to a teacher giving instructions or completing a word problem, the ability to tune out distractions and focus on a task is key to academic success. Now, a new study suggests that a brief training program in attention for 3- to 5-year-olds and their families could help boost brain activity and narrow the academic achievement gap between low- and high-income students.

Children from families of low socioeconomic status generally score lower than more affluent kids on standardized tests of intelligence, language, spatial reasoning, and math, says Priti Shah, a cognitive neuroscientist at the University of Wisconsin who was not involved in the study. “That’s just a plain fact.” A more controversial question that scientists and politicians have batted around for decades, says Shah, is “What is the source of that difference?” Part of it may be genetic, but environmental factors, ranging from prenatal nutrition to exposure to toxic substances like lead, may also account for the early childhood differences in cognitive ability that appear by age 3 or 4. So far, however, “there aren’t that many randomized, controlled trials that show that the environment has an impact on a child’s abilities,” Shah says.

The new study does just that. It focuses on the ability to home in on a task and ignore distractions, which “leverages every single thing we do,” says cognitive neuroscientist Helen Neville at the University of Oregon, Eugene. For more than 30 years, Neville and her colleagues have been studying the neural bases of this ability, called selective attention.

A classic example of selective attention is the “cocktail party” problem, where we must ignore other voices while listening to one person’s story. When an adult does that, “you get a little blip” in their brain activity, she says—a microvolt of electricity lasting a 10th of a second that can be picked up with EEG electrodes on the scalp. Children of higher socioeconomic status show a brain response similar to that of adults, whereas children from lower-income families generally show a much reduced response or none at all, Neville says.
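
That “little blip” only emerges because EEG researchers average over many trials: a roughly one-microvolt evoked response is buried in far larger background noise on any single trial, but the noise cancels as trials accumulate. A minimal simulation (illustrative magnitudes, plain standard-library Python, not the study’s pipeline) shows the principle:

```python
# Minimal simulation of trial averaging in EEG: a ~1 microvolt evoked
# response is invisible on single trials against much larger noise,
# but survives averaging because zero-mean noise cancels out.
# All magnitudes are illustrative, not taken from the study.
import random

random.seed(0)
BLIP = 1.0       # evoked response at the time-locked moment, microvolts
NOISE = 20.0     # standard deviation of background activity, microvolts
N_TRIALS = 10000

def trial() -> float:
    """Voltage at the time-locked moment: blip plus Gaussian noise."""
    return BLIP + random.gauss(0.0, NOISE)

average = sum(trial() for _ in range(N_TRIALS)) / N_TRIALS
print(round(average, 2))  # hovers near BLIP, while single trials swing by tens of microvolts
```

With these numbers the standard error of the mean is about 20 / sqrt(10000) = 0.2 microvolts, so the averaged blip stands clearly above the residual noise, which is why a reduced or absent response in a group of children is detectable at all.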

Programs designed to improve cognitive skills such as selective attention are often costly and time-intensive, and they don’t address how a child’s caretakers and home environment can reinforce those skills, Neville says. To determine whether a short, relatively inexpensive family-based training program could generate improvements, Neville and colleagues recruited 141 3- to 5-year-olds in Oregon enrolled in Head Start, a preschool program for children whose families live at or below the poverty line, and randomly divided them into three groups.

For 8 weeks, children in the first group spent about an hour every week playing games and doing activities that require focused attention. Some tasks were simple, like coloring inside the lines, while others were more complex. In one game, for example, children were asked to deliver a small dish of water to a frog, walking only along a narrow ribbon, says Eric Pakulak, a study co-author. Other children might play in the periphery with balloons to ramp up the challenge, he says. In addition, “We also talk about what it means to be paying attention, and how to notice that you’re distracted.”

While the students played, parents or caregivers took 2-hour-long weekly classes on parenting that included general strategies for reducing family stress, such as creating consistent home routines, as well as activities specifically directed at boosting attention similar to those used in class that they could play with their children—one activity, for example, was to match words such as “happy” or “sad” to pictures of different facial expressions. In the second group, students performed the attention-boosting activities as well, but parents received only three 90-minute sessions of instruction and did not have an opportunity to learn the curriculum in depth; in the third group, neither kids nor their parents did anything special.

After 8 weeks, the team applied a battery of standard assessments, such as IQ and spatial reasoning tests and behavioral reports from teachers and parents; they also measured changes in brain activity while students listened to two recorded stories simultaneously. Instructed to attend to only one of two competing stories (“The Blue Kangaroo” versus “Harry the Dog,” for example), the children whose parents had received the full attention training showed a 50 percent increase in brain activity in response to the correct story compared with children in the other two groups, the authors report online today in the Proceedings of the National Academy of Sciences. Their responses matched those seen in adults and children of higher socioeconomic status. In addition, children in that group showed a roughly 7-point IQ increase on average, and teachers and parents reported significant improvements in academic performance and behavior. No such differences were evident in the two control groups, Neville says, suggesting that parental involvement was key.

Many existing programs try to help young children of low socioeconomic status develop the skills needed to thrive in school, but “almost all happen without any scientifically designed pre- vs. post-behavioral or neural measures,” says Rajeev Raizada, a cognitive neuroscientist at the University of Rochester in New York. This study is one of the first to combine such tests with an intervention, he says. Such interventions “are of great interest scientifically, because they are about as close as you can get to experimental research on the effects of child poverty on the brain,” says Martha Farah, a cognitive neuroscientist at the University of Pennsylvania.

Raizada cautions that the parental training program was broad, making it hard to know which aspects were really crucial. “Another crucial question is how long-lasting will the kids’ gains be?” he adds. “A common feature of intervention programs is that they tend to produce some immediate gains, but those gains often tend to fade out over subsequent months.”

Before implementing programs based on the new study, Farah says, “we need to invest in replication, fine-tuning, and all the hard work of bringing a program to scale.” Still, given striking improvements seen in just 8 weekly sessions, “I think that we need to regard these results as wonderful news,” she says.

Filed under preschoolers attention brain activity socioeconomic status psychology neuroscience science

271 notes

To Preserve Memory Into Old Age, Keep Your Brain Active!
A new study from Rush University Medical Center in Chicago suggests reading and writing may preserve memory into old age. By keeping your brain active, says study author Robert S. Wilson, PhD, you can slow the rate at which your memory declines in later years.

This is not the first time researchers have arrived at such a conclusion, of course. Previous studies have also found that keeping the brain active by reading, writing, completing crossword puzzles and more can essentially exercise the brain and keep it limber far into old age. One study also concluded that keeping television consumption to a minimum may boost brain power over the years. Wilson’s study was recently published in the journal Neurology.

“Our study suggests that exercising your brain by taking part in activities such as these across a person’s lifetime, from childhood through old age, is important for brain health in old age,” said Wilson in a statement.

For his study, Wilson gathered nearly 300 people around the age of 80. He then gave them tests designed to measure both memory and cognition each year until they passed away, at an average age of 89. The same participants also answered questions about their past, such as whether they read books, did any writing, or engaged in any other mentally stimulating activities. The volunteers answered these questions for every part of their lives, from childhood through adolescence, middle age and beyond.

When the participants passed away, their brains were examined at autopsy as Wilson’s team looked for physical evidence of dementia, such as brain lesions, tangles or plaques. After examining the brains and compiling the data from the questionnaires, Wilson’s team found that those who had kept their minds active throughout their lives showed a slower rate of memory decline than those who did not often participate in mentally challenging activities. After accounting for the amount of plaques and tangles in the brain, mental activity explained about a 15 percent difference in the rate of memory decline.

The study also found the rate of memory decline was reduced by 32 percent in people who kept their brains active late in life. Those who were not mentally active fared much worse; their memories declined 48 percent faster than those of their actively reading and writing peers.

“Based on this, we shouldn’t underestimate the effects of everyday activities, such as reading and writing, on our children, ourselves and our parents or grandparents,” said Wilson.

And this news is hardly surprising. Doctors, teachers and parents have been admonishing children to turn off the television and pick up a book for years, and there is no shortage of studies to back up their claims. A 2009 study, for example, found that people who keep their brains active see a 30 to 50 percent decrease in the risk of developing memory loss. That study, conducted by doctors at the Mayo Clinic in Rochester, Minnesota, observed people between the ages of 70 and 89, with and without diagnosed memory loss.

Those who read magazines or engaged in other social activities were 40 percent less likely to develop memory loss than homebodies who did not read. Furthermore, those who spent less than seven hours a day watching television were 50 percent less likely to develop memory loss than those who planted themselves in front of the tube for long stretches of time.

To Preserve Memory Into Old Age, Keep Your Brain Active!

A new study from Rush University Medical Center in Chicago suggests reading and writing may preserve memory into old age. By keeping your brain active, says study author Robert S. Wilson, PhD, you can slow the rate at which your memory declines in later years.

This is not the first time researchers have arrived at such a conclusion, of course. Previous studies have also found keeping the brain active by reading, writing, completing crossword puzzles and more can essentially exercise the brain and keep it limber far into old age. One study also concluded that keeping television consumption to a minimum may boost brain power over the years. Wilson’s study was recently published in the journal Neurology.

“Our study suggests that exercising your brain by taking part in activities such as these across a person’s lifetime, from childhood through old age, is important for brain health in old age,” said Wilson in a statement.

For his study, Wilson gathered nearly 300 people around the age of 80. He then gave them tests which were designed to measure both their memory and cognition each year until they passed away at an average age of 89. The same participants also answered questions about their past, such as whether they read books, did any writing, or engaged in any other mentally stimulating activities. The volunteers answered these questions for every part of their life, from childhood to adolescence, middle age and beyond.

When the participants passed away, their brains were examined at autopsy as Wilson’s team looked for physical evidence of dementia, such as lesions, tangles or plaques. After examining the brains of these volunteers and compiling the data from the questionnaires, Wilson’s team found that those who had kept their minds active throughout their lives had a slower rate of memory decline than those who did not often participate in mentally challenging activities. Even after accounting for the amount of plaques and tangles in the brains, mental activity accounted for roughly a 15 percent difference in the rate of memory decline.

The study also found the rate of memory decline was reduced by 32 percent in people who kept their brains active late in life. Those who were not mentally active had it much worse; their memories declined 48 percent faster than their actively reading and writing peers.
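To put those relative rates side by side, here is a small arithmetic sketch; the units are hypothetical, since the study reports only relative percentages, not absolute decline rates:

```python
# Illustrative arithmetic only: the baseline is a hypothetical unit,
# since the study reports relative percentages, not absolute rates.
baseline = 1.0  # average memory-score loss per year (hypothetical)

# Late-life mental activity: rate of decline reduced by 32 percent.
active = baseline * (1 - 0.32)

# Infrequent mental activity: decline 48 percent faster than active peers.
inactive = active * (1 + 0.48)

print(f"active:   {active:.4f} units/year")    # 0.6800
print(f"inactive: {inactive:.4f} units/year")  # 1.0064
```

Read this way, someone who stayed mentally inactive lost memory at roughly one and a half times the rate of an active peer.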

“Based on this, we shouldn’t underestimate the effects of everyday activities, such as reading and writing, on our children, ourselves and our parents or grandparents,” said Wilson.

And this news is hardly surprising. Doctors, teachers and parents have been admonishing children to turn off the television and pick up a book for years. There is no shortage of studies to back up their claims. A 2009 study, for example, found that people who kept their brains active saw a 30 to 50 percent decrease in the risk of developing memory loss. This study, conducted by doctors at the Mayo Clinic in Rochester, Minnesota, observed people between the ages of 70 and 89 with and without diagnosed memory loss.

Those who were likely to read magazines or engage in other social activities were 40 percent less likely to develop memory loss than homebodies who did not read. Furthermore, those who spent less than seven hours a day watching television were 50 percent less likely to develop memory loss than those who planted themselves in front of the tube for long stretches of time.

Filed under memory memory loss dementia brain psychology neuroscience science

165 notes

Unique Epigenomic Code Identified During Human Brain Development 
Changes in the epigenome, including chemical modifications of DNA, can act as an extra layer of information in the genome, and are thought to play a role in learning and memory, as well as in age-related cognitive decline. The results of a new study by scientists at the Salk Institute for Biological Studies show that the landscape of DNA methylation, a particular type of epigenomic modification, is highly dynamic in brain cells during the transition from birth to adulthood, shedding light on how information in the genomes of brain cells is controlled from fetal development to adulthood. The brain is far more complex than any other organ in the body, and this discovery opens the door to a deeper understanding of how its intricate patterns of connectivity are formed.
“These results extend our knowledge of the unique role of DNA methylation in brain development and function,” says senior author Joseph R. Ecker, professor and director of Salk’s Genomic Analysis Laboratory and holder of the Salk International Council Chair in Genetics. “They offer a new framework for testing the role of the epigenome in healthy function and in pathological disruptions of neural circuits.”
A healthy brain is the product of a long process of development. The front-most part of our brain, called the frontal cortex, plays a key role in our ability to think, decide and act. The brain accomplishes all of this through the interaction of special cells such as neurons and glia. We know that these cells have distinct functions, but what gives these cells their individual identities? The answer lies in how each cell expresses the information contained in its DNA. Epigenomic modifications, such as DNA methylation, can control which genes are turned on or off without changing letters of the DNA alphabet (A-T-C-G), and thus help distinguish different cell types.
In this new study, published July 4 in Science, the scientists found that the patterns of DNA methylation undergo widespread reconfiguration in the frontal cortex of mouse and human brains during a time of development when synapses, or connections between nerve cells, are growing rapidly. The researchers identified the exact sites of DNA methylation throughout the genome in brains from infants through adults. They found that one form of DNA methylation is present in neurons and glia from birth. Strikingly, a second form of “non-CG” DNA methylation that is almost exclusive to neurons accumulates as the brain matures, becoming the dominant form of methylation in the genome of human neurons. These results help us to understand how the intricate DNA landscape of brain cells develops during the key stages of childhood.
The genetic code in DNA is made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). DNA methylation typically occurs at so-called CpG sites, where C (cytosine) sits next to G (guanine) in the DNA alphabet. About 80 to 90 percent of CpG sites are methylated in human DNA. Salk researchers previously discovered that in human embryonic stem cells and induced pluripotent stem cells, a type of artificially derived stem cell, DNA methylation can also occur when G does not follow C, hence “non-CG methylation.” Originally, they thought that this type of methylation disappeared when stem cells differentiated into specific tissue types such as lung or fat cells. The current study finds this is not the case in the brain, where non-CG methylation appears after cells differentiate, usually during childhood and adolescence when the brain is maturing.
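As a concrete illustration of the CG versus non-CG distinction, the short sketch below scans a DNA string and counts cytosines in each context (the sequence is a made-up example, not data from the study):

```python
# A minimal sketch: classify each cytosine in a DNA sequence as sitting
# in a CG ("CpG") context or a non-CG context -- the two kinds of
# positions where the methylation forms described above can occur.
def classify_cytosines(seq: str) -> dict:
    """Return counts of cytosines in CG vs. non-CG context."""
    seq = seq.upper()
    # Cytosines immediately followed by a guanine (CpG sites)
    cg = sum(1 for i in range(len(seq) - 1)
             if seq[i] == "C" and seq[i + 1] == "G")
    # Cytosines NOT followed by a guanine (including a trailing C)
    non_cg = sum(1 for i, base in enumerate(seq)
                 if base == "C" and not (i + 1 < len(seq) and seq[i + 1] == "G"))
    return {"CG": cg, "non-CG": non_cg}

print(classify_cytosines("ACGTCCGATC"))  # {'CG': 2, 'non-CG': 2}
```

In the study's terms, CG methylation can occur at the first kind of site in any cell type, while the non-CG methylation that accumulates in maturing neurons occurs at the second kind.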
By sequencing the genomes of mouse and human brain tissue as well as neurons and glia (from the frontal cortex of the brain) during early postnatal, juvenile, adolescent and adult stages, the Salk team found that non-CG methylation accumulates in neurons through early childhood and adolescence, and becomes the dominant form of DNA methylation in mature human neurons. “This shows that the period during which the neural circuits of the brain mature is accompanied by a parallel process of large-scale reconfiguration of the neural epigenome,” says Ecker, who is a Howard Hughes Medical Institute and Gordon and Betty Moore Foundation investigator.
The study provides the first comprehensive maps of how DNA methylation patterns change in the mouse and human brain during development, forming a critical foundation to now explore whether changes in methylation patterns may be linked to human diseases, including psychiatric disorders. Recent studies have demonstrated a possible role for DNA methylation in schizophrenia, depression, suicide and bipolar disorder. “Our work will let us begin to ask more detailed questions about how changes in the epigenome sculpt the complex identities of brain cells through life,” says co-first author Eran Mukamel, from Salk’s Computational Neurobiology Laboratory.
“The human brain has been called the most complex system that we know of in the universe,” says Ryan Lister, co-corresponding author on the new paper, previously a postdoctoral fellow in Ecker’s laboratory at Salk and now a group leader at The University of Western Australia. “So perhaps we shouldn’t be so surprised that this complexity extends to the level of the brain epigenome. These unique features of DNA methylation that emerge during critical phases of brain development suggest the presence of previously unrecognized regulatory processes that may be critically involved in normal brain function and brain disorders.”
At present, there is consensus among neuroscientists that many mental disorders have a neurodevelopmental origin and arise from an interaction between genetic predisposition and environmental influences (for example, early-life stress or drug abuse), the outcome of which is altered activity of brain networks. The building and shaping of these brain networks requires a long maturation process in which central nervous system cell types (neurons and glia) need to fine-tune the way they express their genetic code.
“DNA methylation fulfills this role,” says study co-author Terrence J. Sejnowski, a Howard Hughes Medical Institute Investigator, holder of the Francis Crick Chair and head of Salk’s Computational Neurobiology Laboratory. “We found that patterns of methylation are dynamic during brain development, in particular for non-CG methylation during early childhood and adolescence, which changes the way that we think about normal brain function and dysfunction.”
By disrupting the transcriptional expression of neurons, adds co-corresponding author M. Margarita Behrens, a staff scientist in the Computational Neurobiology Laboratory, “the alterations of these methylation patterns will change the way in which networks are formed, which could, in turn, lead to the appearance of mental disorders later in life.”

Filed under brain cells dna methylation brain development cognitive function frontal cortex epigenetics neuroscience science

190 notes

Why do we gesticulate?
If you rely on hand gestures to get your point across, you can thank fish for that! Scientists have found that the evolution of the control of speech and hand movements can be traced back to the same place in the brain, which could explain why we use hand gestures when we are speaking.
Professor Andrew Bass (Cornell University), who will be presenting his work at the meeting of the Society for Experimental Biology on the 3rd July, said: “We have traced the evolutionary origins of the behavioural coupling between speech and hand movement back to a developmental compartment in the brain of fishes.”
“Pectoral appendages (fins and forelimbs) are mainly used for locomotion. However, pectoral appendages also function in social communication for the purposes of making sounds that we simply refer to as non-vocal sonic signals, and for gestural signalling.”
Studies of early development in fishes show that neural networks in the brain controlling the more complex vocal and pectoral mechanisms of social signalling among birds and mammals have their ancestral origins in a single compartment of the hindbrain in fishes. This begins to explain the ancestral origins of the neural basis for the close coupling between vocal and pectoral/gestural signalling that is observed among many vertebrate groups, including humans.
Professor Bass said: “Coupling of vocal and pectoral-gestural circuitry starts to get at the evolutionary origins of the coupling between vocalization (speech) and gestural signalling (hand movements). This is all part of the perhaps even larger story of language evolution.”

Filed under hand movements hand gestures hindbrain evolution neuroscience science

51 notes

Shape-shifting Disease Proteins May Explain Variable Appearance of Neurodegenerative Diseases

Neurodegenerative diseases are not all alike. Two individuals suffering from the same disease may experience very different age of onset, symptoms, severity, and constellation of impairments, as well as different rates of disease progression. Researchers in the Perelman School of Medicine at the University of Pennsylvania have shown one disease protein can morph into different strains and promote misfolding of other disease proteins commonly found in Alzheimer’s, Parkinson’s and other related neurodegenerative diseases.

Virginia M.Y. Lee, PhD, MBA, professor of Pathology and Laboratory Medicine and director of the Center for Neurodegenerative Disease Research, with co-director, John Q. Trojanowski MD, PhD, postdoctoral fellow Jing L. Guo, PhD, and colleagues, discovered that alpha-synuclein, a protein that forms sticky clumps in the neurons of Parkinson’s disease patients, can exist in at least two different structural shapes, or “strains,” when it clumps into fibrils, despite having precisely the same chemical composition.

These two strains differ in their ability to promote fibril formation of normal alpha-synuclein, as well as the protein tau, which forms neurofibrillary tangles in individuals with Alzheimer’s disease.

Importantly, these alpha-synuclein strains are not static; they somehow evolve, such that fibrils that initially cannot promote tau tangles acquire that ability after multiple rounds of “seeded” fibril formation in test tubes.

The findings appear in the July 3rd issue of Cell.

Morphed Misfolding Proteins Found In Overlapping Neurodegenerative Diseases
Tau and alpha-synuclein protein clumps are hallmarks of separate diseases – Alzheimer’s and Parkinson’s, respectively. Yet these two proteins are often found entangled in diseased brains of patients who may manifest symptoms of both disorders.

One possible explanation for this convergence of Alzheimer’s and Parkinson’s disease pathology in the same patient is a global disruption in protein folding. But, Guo and Lee showed that one strain of alpha-synuclein fibrils which cannot promote tau fibrillization actually evolved into another strain that could efficiently cause tau to fibrillize in cultured neurons, although both strains are identical at the amino acid sequence level. Guo and Lee called the starting conformation “Strain A,” and the evolved conformation, “Strain B.”

To figure out how A and B differ, Guo showed that the two strains folded into different shapes, as indicated by their differential reactivity to antibodies and sensitivity to protein-degrading enzymes. The two strains also differed in their ability to promote tau fibrillization and pathology in mouse brains, mimicking the results from cultured cells. When analyzing post-mortem brains of Parkinson’s patients, the team found at least two distinct forms of pathological alpha-synuclein.

Lee and her team speculate that in humans, alpha-synuclein aggregates may shift their shapes as they pass from cell to cell (much like a cube of silly putty being re-shaped to form a sphere), possibly developing the ability to entangle other proteins such as tau along the way. That process, in turn, could theoretically yield distinct types of alpha-synuclein pathologies that are observed in different brain regions of Parkinson’s disease patients.

While further research is needed to confirm and extend these findings, they have potentially significant implications for patients afflicted with Parkinson’s and other neurodegenerative diseases. For example, Lee explains, they could account for some of the heterogeneity observed in Parkinson’s disease. Different strains of pathological alpha-synuclein may promote formation of distinct types of alpha-synuclein aggregates that may or may not induce tau pathology in different brain regions and in different patients. That, in turn, could explain why some Parkinson’s patients, for example, experience only motor impairments while others ultimately develop cognitive impairments.

The findings also have potential therapeutic implications, Lee says. By recognizing that pathological alpha-synuclein can exist in different forms that are linked with different impairments, researchers can now selectively target one or the other, or both, for instance with strain-selective antibodies.

“What we’ve found opens up new areas for developing therapies, and particularly immunotherapies, for Parkinson’s and other neurodegenerative diseases,” Lee says.

(Source: uphs.upenn.edu)

Filed under alzheimer's disease parkinson's disease protein folding neurodegenerative diseases neuroscience science
