Neuroscience

Articles and news from the latest research reports.

From contemporary syntax to human language’s deep origins



On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: It can vocalize long, complicated songs, using 14 different note types, that signal territory and send messages to potential mates and family.
Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.
From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.
But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?
“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”
Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.
“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”
Introducing the ‘integration hypothesis’
The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.
The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.
The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.
Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.
But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involves sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.
However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.
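As a toy illustration (not the paper’s own formalism), the “antimissile” construction can be generated by a single rule applied via strict adjacency: each step treats the entire existing phrase as a modifier, prefixes it, and attaches a new head noun. The hyphenation below is for readability only.

```python
def expand(phrase: str) -> str:
    # One expansion step: treat the whole existing phrase as a modifier,
    # prefix it with "anti-", and attach a new head noun "missile".
    return f"anti-{phrase} missile"

phrase = "missile"
for _ in range(2):
    phrase = expand(phrase)

print(phrase)  # anti-anti-missile missile missile
```

Because every step operates only on the unit it is strictly adjacent to, the growth is iterative rather than requiring true center-embedding, which is roughly the sense in which the authors describe the component as finite-state.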
The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”
An inherent capacity
To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.
As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.
“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”
Andrea Moro, a linguist at the Institute for Advanced Study IUSS, in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.
“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”
Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”
Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.
“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.


Filed under language birdsong evolution linguistics psychology neuroscience science


Gene mutation discovery could explain brain disorders in children
Researchers have discovered that mutations in one of the brain’s key genes could be responsible for impaired mental function in children born with an intellectual disability.
The research, published today in the journal Human Molecular Genetics, shows that the gene TUBB5 is essential for a healthy, functioning brain.
It’s estimated that intellectual disability affects up to four per cent of people worldwide and two per cent of all Australians. One of the ways in which intellectual disability occurs is through genetic mutations, which cause problems with normal fetal brain development.
During fetal brain development, TUBB5 is essential for the proper placement and wiring of new neurons. When the gene is mutated, the brain, which sends and receives messages to the rest of the body, is impaired.
Lead researcher Dr Julian Heng, from the Australian Regenerative Medicine Institute (ARMI) at Monash University, said genetic mutations to TUBB5 could be responsible for a range of intellectual disabilities, and could also affect the development of basic motor skills such as walking.
“TUBB5 works like a type of scaffolding inside neurons, enabling them to shape their connections to other neurons, so it’s essential for healthy brain development. If the scaffolding is faulty, in this case if TUBB5 mutates, it can have serious consequences,” Dr Heng said.
These new findings build on the team’s collaborative work with researchers in Austria, which led to the discovery of TUBB5 mutations in human brain disorders in 2012. By looking at just three unrelated patients with microcephaly, a rare brain disease in children, the team found striking similarities – each had a mutation to TUBB5. The team also provided the first evidence that the TUBB5 mutations were responsible for each patient’s disorder.
Dr Heng said the research could have important implications, not only for intellectual disabilities but also for a wide range of developmental disorders.
“Learning more about the TUBB5 gene and its mutations could reveal how it shapes the connections of neurons in normal and diseased brain states.
“We’re just at the beginning of this work but if we can understand why and how mutations occur to TUBB5, we may even be able to repair these mutations. In the future, we believe this work will enable us to develop new therapies to transform people’s lives,” Dr Heng said.
The work may potentially lead to new information about the causes and possible treatments for other brain developmental syndromes, including autism, a condition that affects as many as 1 in 160 children.
Dr Heng said that because TUBB5 belongs to a family of genes that produce the scaffolding in neurons, there is scope for further study into its impact.
“By learning what these scaffolding proteins do to help neurons make brain circuits, we might be able to pinpoint the underlying causes of a wide range of brain disorders in children, and develop more targeted treatments,” Dr Heng said.
Scientists believe that in the future this knowledge, combined with regenerative medicine techniques, could also aid the replacement of neurons in times of brain injury or disease.
The next phase of the research will be to develop a working model to better understand how TUBB5 can be targeted for gene therapy.


Filed under children TUBB5 brain disorders neurons genetics neuroscience science


Researchers Use Human Stem Cells to Create Light-Sensitive Retina in a Dish

Using a type of human stem cell, Johns Hopkins researchers say they have created a three-dimensional complement of human retinal tissue in the laboratory, which notably includes functioning photoreceptor cells capable of responding to light, the first step in the process of converting it into visual images.


(Image caption: Rod photoreceptors (in green) within a “mini retina” derived from human iPS cells in the lab. Image courtesy of Johns Hopkins Medicine)

“We have basically created a miniature human retina in a dish that not only has the architectural organization of the retina but also has the ability to sense light,” says study leader M. Valeria Canto-Soler, Ph.D., an assistant professor of ophthalmology at the Johns Hopkins University School of Medicine. She says the work, reported online June 10 in the journal Nature Communications, “advances opportunities for vision-saving research and may ultimately lead to technologies that restore vision in people with retinal diseases.”

Like many processes in the body, vision depends on many different types of cells working in concert, in this case to turn light into something that can be recognized by the brain as an image. Canto-Soler cautions that photoreceptors are only part of the story in the complex eye-brain process of vision, and her lab hasn’t yet recreated all of the functions of the human eye and its links to the visual cortex of the brain. “Is our lab retina capable of producing a visual signal that the brain can interpret into an image? Probably not, but this is a good start,” she says.

The achievement emerged from experiments with human induced pluripotent stem (iPS) cells and could, eventually, enable genetically engineered retinal cell transplants that halt or even reverse a patient’s march toward blindness, the researchers say.

The iPS cells are adult cells that have been genetically reprogrammed to their most primitive state. Under the right circumstances, they can develop into most or all of the 200 cell types in the human body. In this case, the Johns Hopkins team turned them into retinal progenitor cells destined to form light-sensitive retinal tissue that lines the back of the eye.

Using a simple, straightforward technique they developed to foster the growth of the retinal progenitors, Canto-Soler and her team saw retinal cells and then tissue grow in their petri dishes, says Xiufeng Zhong, Ph.D., a postdoctoral researcher in Canto-Soler’s lab. The growth, she says, corresponded in timing and duration to retinal development in a human fetus in the womb. Moreover, the photoreceptors were mature enough to develop outer segments, a structure essential for photoreceptors to function.

Retinal tissue is complex, comprising seven major cell types, including six kinds of neurons, which are all organized into specific cell layers that absorb and process light, “see,” and transmit those visual signals to the brain for interpretation. The lab-grown retinas recreate the three-dimensional architecture of the human retina. “We knew that a 3-D cellular structure was necessary if we wanted to reproduce functional characteristics of the retina,” says Canto-Soler, “but when we began this work, we didn’t think stem cells would be able to build up a retina almost on their own. In our system, somehow the cells knew what to do.”

When the retinal tissue was at a stage equivalent to 28 weeks of development in the womb, with fairly mature photoreceptors, the researchers tested these mini-retinas to see if the photoreceptors could in fact sense and transform light into visual signals.

They did so by placing an electrode into a single photoreceptor cell and then giving a pulse of light to the cell, which reacted in a biochemical pattern similar to the behavior of photoreceptors in people exposed to light.

Specifically, she says, the lab-grown photoreceptors responded to light the way retinal rods do. Human retinas contain two major photoreceptor cell types called rods and cones. The vast majority of photoreceptors in humans are rods, which enable vision in low light. The retinas grown by the Johns Hopkins team were also dominated by rods.

Canto-Soler says that the newly developed system gives them the ability to generate hundreds of mini-retinas at a time directly from a person affected by a particular retinal disease such as retinitis pigmentosa. This provides a unique biological system to study the cause of retinal diseases directly in human tissue, instead of relying on animal models.

The system, she says, also opens an array of possibilities for personalized medicine such as testing drugs to treat these diseases in a patient-specific way. In the long term, the potential is also there to replace diseased or dead retinal tissue with lab-grown material to restore vision.

(Source: hopkinsmedicine.org)

Filed under stem cells iPSCs photoreceptors retinal tissue vision medicine science


Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition
Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.
Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”
Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.
Balas and Tonsager, then an undergraduate researcher in psychology, recruited 45 study participants, who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then computer manipulated into negative images. In two experiments, transformations to real and artificial faces were used to determine whether contrast negation affected the ability to judge a face as real or artificial, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
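For readers unfamiliar with contrast negation, the manipulation itself is simple: each pixel value is replaced by its complement within the image’s intensity range, so dark regions become light and vice versa. A minimal sketch for an 8-bit grayscale image, assuming NumPy (the study does not specify what software performed this step):

```python
import numpy as np

def negate(image: np.ndarray) -> np.ndarray:
    # Contrast negation for an 8-bit image: pixel value v becomes 255 - v.
    # Subtracting a uint8 array from 255 keeps the result in uint8 range.
    return 255 - image

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(negate(img))  # [[255 191 127   0]]
```

Negation preserves the spatial structure of a face while disrupting its familiar pigmentation and shading, which is why it is a standard probe of how much face judgments depend on surface appearance rather than shape.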
“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”
“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”
Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21, in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition

Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.

Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”

Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.

Balas and Tonsager, who was then an undergraduate researcher in psychology, recruited 45 study participants who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then digitally manipulated into negative images. In two experiments, transformations of real and artificial faces were used to determine whether contrast negation affected the ability to tell a real face from an artificial one, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
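Contrast negation, the manipulation used in these experiments, simply inverts an image’s luminance values. A minimal sketch of the idea in Python (illustrative only, not the study’s actual FaceGen or image-processing pipeline):

```python
import numpy as np

def negate_contrast(image):
    """Invert an 8-bit grayscale image: light regions become dark and vice versa.

    Negation preserves spatial structure (edges, shapes) while reversing
    luminance polarity, which is why it is a common manipulation for
    probing which surface cues, like skin tone, face perception relies on.
    """
    image = np.asarray(image, dtype=np.uint8)
    return 255 - image

# Example on a tiny 2x2 "image" of pixel intensities:
patch = np.array([[0, 64], [128, 255]], dtype=np.uint8)
negated = negate_contrast(patch)   # [[255, 191], [127, 0]]
```

Because the operation is its own inverse, negating twice recovers the original image; what it disrupts is surface appearance, not shape.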

“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”

“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”

Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21 in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science

215 notes

(Image caption: At left, the brains of adults who had ADHD as children but no longer have it show synchronous activity between the posterior cingulate cortex (the larger red region) and the medial prefrontal cortex (smaller red region). At right, the brains of adults who continue to experience ADHD do not show this synchronous activity. Illustration: Jose-Luis Olivares/MIT, based on images courtesy of the researchers)
Inside the adult ADHD brain
About 11 percent of school-age children in the United States have been diagnosed with attention deficit hyperactivity disorder (ADHD). While many of these children eventually “outgrow” the disorder, some carry their difficulties into adulthood: About 10 million American adults are currently diagnosed with ADHD.
In the first study to compare patterns of brain activity in adults who recovered from childhood ADHD and those who did not, MIT neuroscientists have discovered key differences in a brain communication network that is active when the brain is at wakeful rest and not focused on a particular task. The findings offer evidence of a biological basis for adult ADHD and should help to validate the criteria used to diagnose the disorder, according to the researchers.
Diagnoses of adult ADHD have risen dramatically in the past several years, with symptoms similar to those of childhood ADHD: a general inability to focus, reflected in difficulty completing tasks, listening to instructions, or remembering details.
“The psychiatric guidelines for whether a person’s ADHD is persistent or remitted are based on lots of clinical studies and impressions. This new study suggests that there is a real biological boundary between those two sets of patients,” says MIT’s John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and an author of the study, which appears in the June 10 issue of the journal Brain.
Shifting brain patterns
This study focused on 35 adults who were diagnosed with ADHD as children; 13 of them still have the disorder, while the rest have recovered. “This sample really gave us a unique opportunity to ask questions about whether or not the brain basis of ADHD is similar in the remitted-ADHD and persistent-ADHD cohorts,” says Aaron Mattfeld, a postdoc at MIT’s McGovern Institute for Brain Research and the paper’s lead author.
The researchers used a technique called resting-state functional magnetic resonance imaging (fMRI) to study what the brain is doing when a person is not engaged in any particular activity. These patterns reveal which parts of the brain communicate with each other during this type of wakeful rest.
“It’s a different way of using functional brain imaging to investigate brain networks,” says Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute and the senior author of the paper. “Here we have subjects just lying in the scanner. This method reveals the intrinsic functional architecture of the human brain without invoking any specific task.”
In people without ADHD, when the mind is unfocused, there is a distinctive synchrony of activity in brain regions known as the default mode network. Previous studies have shown that in children and adults with ADHD, two major hubs of this network — the posterior cingulate cortex and the medial prefrontal cortex — no longer synchronize.
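The “synchrony” measured in resting-state fMRI is typically quantified as the correlation between the activity time courses of two regions. A toy sketch with made-up numbers (the study’s actual analysis pipeline is far more involved):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical BOLD signals from the two default-mode hubs:
pcc  = [0.1, 0.5, -0.2, 0.7, 0.0, -0.4]   # posterior cingulate cortex
mpfc = [0.2, 0.4, -0.1, 0.6, 0.1, -0.5]   # medial prefrontal cortex
r = pearson_r(pcc, mpfc)
# An r near +1 reflects the synchrony seen in typical brains; in
# persistent ADHD this coupling is reported to be reduced.
```

Under this measure, the finding is that remitted-ADHD adults show region pairs whose r values look like those of people who never had ADHD.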
In the new study, the MIT team showed for the first time that in adults who had been diagnosed with ADHD as children but no longer have it, this normal synchrony pattern is restored. “Their brains now look like those of people who never had ADHD,” Mattfeld says.
“This finding is quite intriguing,” says Francisco Xavier Castellanos, a professor of child and adolescent psychiatry at New York University who was not involved in the research. “If it can be confirmed, this pattern could become a target for potential modification to help patients learn to compensate for the disorder without changing their genetic makeup.”
Lingering problems
However, on another measure of brain synchrony, the researchers found that the two groups of ADHD patients were much more similar to each other.
In people without ADHD, when the default mode network is active, another network, called the task positive network, is suppressed. When the brain is performing tasks that require focus, the task positive network takes over and suppresses the default mode network. If this reciprocal relationship degrades, the ability to focus declines.
Both groups of adult ADHD patients, including those who had recovered, showed patterns of simultaneous activation of both networks. This is thought to be a sign of impairment in executive function — the management of cognitive tasks — that is separate from ADHD, but occurs in about half of ADHD patients. All of the ADHD patients in this study performed poorly on tests of executive function. “Once you have executive function problems, they seem to hang in there,” says Gabrieli, who is a member of the McGovern Institute.
The researchers now plan to investigate how ADHD medications influence the brain’s default mode network, in hopes that this might allow them to predict which drugs will work best for individual patients. Currently, about 60 percent of patients respond well to the first drug they receive.
“It’s unknown what’s different about the other 40 percent or so who don’t respond very much,” Gabrieli says. “We’re pretty excited about the possibility that some brain measurement would tell us which child or adult is most likely to benefit from a treatment.”


Filed under ADHD neuroimaging prefrontal cortex default mode network neuroscience science

185 notes

"All systems go" for a paralyzed person to kick off the World Cup
The Walk Again Project is an international collaboration of more than one hundred scientists, led by Prof. Miguel Nicolelis of Duke University and the International Institute for Neurosciences of Natal, Brazil. Prof. Gordon Cheng, head of the Institute for Cognitive Systems at the Technische Universität München (TUM), is a leading partner.
Eight Brazilian patients, men and women between 20 and 40 years of age who are paralyzed from the waist down, have been training for months to use the exoskeleton. The system works by recording electrical activity in the patient’s brain, recognizing his or her intention – such as to take a step or kick a ball – and translating that to action. It also gives the patient tactile feedback using sensitive artificial skin created by Cheng’s institute.
The feeling of touching the ground
Inspiration for this so-called CellulARSkin technology – as well as for the Walk Again Project itself – came from a 2008 collaboration. As Cheng sums up that complex and widely reported experiment, “Miguel set up a monkey walking on a treadmill in North Carolina, and then I made my humanoid robot walk with the signal in Kyoto.” It was a short step for the researchers to envision a paralyzed person walking with the help of a robotic exoskeleton that could be guided by mental activity alone.
"Our brains are very adaptive in the way that we can extend our embodiment to use tools," Cheng says, "as in driving a car or eating with chopsticks. After the Kyoto experiment, we felt certain that the brain could also liberate a paralyzed person to walk using an external body." It was clear that technical advances would be required to allow a relatively compact, lightweight exoskeleton to be assembled, and that visual feedback would not be enough. A sense of touch would be essential for the patient’s emotional comfort as well as control over the exoskeleton. Thus the challenge was to give a paralyzed person, together with the ability to walk, the feeling of touching the ground.
A versatile solution
Upon joining TUM in 2010, Cheng made it a research priority for his institute to improve on the state of the art in tactile sensing for robotic systems. The result, CellulARSkin, provides a framework for a robust and self-organizing surface sensor network. It can be implemented using standard off-the-shelf hardware and thus will benefit from future improvements in miniaturization, performance, and cost.
The basic unit is a flat, six-sided package of electronic components including a low-power-consumption microprocessor as well as sensors that detect pre-touch proximity, pressure, vibration, temperature, and even movement in three-dimensional space. Any number of these individual “cells” can be networked together in a honeycomb pattern, protected in the current prototype by a rubbery skin of molded elastomer.
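As an illustration of the cell-network idea (all names and fields here are hypothetical, not CellulARSkin’s actual firmware or API), each unit can be modeled as a record of sensor readings plus links to at most six honeycomb neighbors:

```python
from dataclasses import dataclass, field

@dataclass
class SkinCell:
    """Toy model of one hexagonal sensor cell."""
    cell_id: int
    proximity: float = 0.0            # pre-touch proximity
    pressure: float = 0.0
    vibration: float = 0.0
    temperature: float = 20.0
    accel: tuple = (0.0, 0.0, 0.0)    # movement in three-dimensional space
    neighbors: list = field(default_factory=list)  # up to 6 in a honeycomb

def connect(a: SkinCell, b: SkinCell):
    """Link two adjacent cells; a hexagonal cell has at most six neighbors."""
    assert len(a.neighbors) < 6 and len(b.neighbors) < 6
    a.neighbors.append(b.cell_id)
    b.neighbors.append(a.cell_id)

# A tiny three-cell patch wired in a row:
cells = [SkinCell(i) for i in range(3)]
connect(cells[0], cells[1])
connect(cells[1], cells[2])
```

The self-organizing behavior described above would come from cells exchanging messages over such neighbor links to discover the network topology on their own.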
"It’s not just the sensor that’s important," Cheng says. "The intelligence of the sensor is even more important." Cooperation among the networked cells, and between the network and a central system, allows CellulARSkin to configure itself for each specific application and to recover automatically from certain kinds of damage. These capabilities offer advantages in enabling smarter, safer interaction of machines with people, and in rapid setup of industrial robots – as is being pursued in the EU-sponsored project "Factory in a Day."
In the Walk Again Project, CellulARSkin is being used in two ways. Integrated with the exoskeleton, for example on the bottoms of the feet, the artificial skin sends signals to tiny motors that vibrate against the patient’s arms. Through training with this kind of indirect sensory feedback, a patient can learn to incorporate the robotic legs and feet into his or her own body schema. CellulARSkin is also being wrapped around parts of the patient’s own body to help the medical team monitor for any signs of distress or discomfort.
A milestone, but “just the beginning”
"I think some people see the World Cup opening as the end," Cheng says, "but it’s really just the beginning. This may be a major milestone, but we have a lot more work to do." He views the event as a public demonstration of what science can do for people. "Also, I see it as a great tribute to all the patients’ hard work and their bravery!"


Filed under BMI exoskeleton robotics Walk Again Project CellulARSkin neuroscience science

160 notes

That Sounds Familiar, But Why?

When something feels familiar, a slew of memories, including seemingly unrelated ones, can come flooding back, according to mathematical theories called global similarity models.

After conducting an fMRI study on memory and categorization, researchers including a Texas Tech University psychologist have shown for the first time that these mathematical models seem to correctly explain processing in the medial temporal lobes, a region of the brain associated with long-term memory that is disrupted by memory disorders like Alzheimer’s disease.

The findings were published in The Journal of Neuroscience.

Tyler Davis, assistant director of Texas Tech’s Neuroimaging Institute and an assistant professor of psychology, specializes in neurobiological approaches to learning and memory. He was part of a team that delved into global similarity models.

“Since at least the 1980s, scientists researching memory have believed that when a person finds someone’s face or a new experience familiar, that person is not simply retrieving a memory of only this previous experience, but memories of many other related and unrelated experiences as well,” Davis said. “Formal mathematical theories of memory called global similarity models suggest that when we judge familiarity, we match an experience, such as a face or a trip to a restaurant, to all of the memories that we have stored in our brains. Our recent work using fMRI suggests these models are correct.”

People may believe when they see someone’s familiar face or take a trip to a familiar restaurant, they only activate the most similar or recent memories for comparison. However, Davis said this is not the case. According to global similarity models, the feeling of familiarity for the taste of brisket at a particular restaurant draws on a spectrum of memories that a person has stored in his or her brain.

Eating the brisket can activate memories not only of a previous trip to that restaurant, but also of the décor, eating brisket at a similar restaurant, what that person’s home-cooked brisket tastes like and even seemingly tangential memories such as a recent trip to another city.

“In terms of global similarity theories and our new findings, the important thing is when you are judging familiarity, your brain doesn’t just retrieve the most relevant memories but many other memories as well,” Davis said. “This seems counter-intuitive to how memory feels. We often feel like we are just retrieving that previous trip to that one particular restaurant when we are asked whether we’d been there before, but there is a lot of behavioral evidence that we activate many other memories as well when we judge familiarity.”

This does not mean that every memory we have stored contributes to familiarity in the same way. The more similar a previous memory is to the current experience, the more it will contribute to judgments of familiarity.

In terms of the brisket example, Davis said, previous trips to the restaurant are going to impact the familiarity more than dissimilar memories, such as the recent trip out of town. However, similarity from these other less-related experiences can have a measurable effect in judgments of familiarity.
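This summed, distance-weighted contribution is the core of a global similarity model. A generic sketch, using an exponential-decay similarity rule in the spirit of classic exemplar models (the paper’s exact formulation may differ):

```python
import math

def global_similarity(probe, memories, c=1.0):
    """Summed-similarity familiarity signal.

    Every stored memory contributes, weighted exponentially by its
    distance from the probe, so near matches dominate but distant,
    'tangential' memories still add a measurable amount.
    """
    def distance(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return sum(math.exp(-c * distance(probe, m)) for m in memories)

# Toy feature vectors: the probe resembles one memory closely, one loosely.
probe = (1.0, 0.0)
memories = [(1.0, 0.1),   # a very similar episode (e.g., same restaurant)
            (3.0, 2.0)]   # a dissimilar one (e.g., a trip to another city)
fam = global_similarity(probe, memories)
```

The key property is that familiarity is a sum over all of memory, not a lookup of the single best match, which is exactly the behavior the fMRI activation patterns were tested against.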

In the recent research, Davis and his colleagues used fMRI to examine how memory similarity, measured through activation patterns in the medial temporal lobes, related to behavioral measures of familiarity.

“We found that people’s memory for the items in our experiments was related to their activation patterns in the medial temporal lobes in a manner that was anticipated by mathematical global similarity models,” Davis said. “The more similar the activation pattern for an item was to all of the other activation patterns, the more strongly people remembered it. This is consistent with global similarity models, which suggest that the items that are most similar to all other items stored in memory will be most familiar.”

The findings suggest that global similarity models may have a neurobiological basis, he said. This is evidence that similarity, in terms of neural processing, may impact memory. People may find things familiar not just because they are identical to things we’ve previously experienced, but because they are similar to a number of things we’ve previously experienced.

(Source: today.ttu.edu)

Filed under neuroimaging global similarity models memory neuroscience science

80 notes

Game Technology Teaches Mice and Men to Hear Better in Noisy Environments

The ability to hear soft speech in a noisy environment is difficult for many and nearly impossible for the 48 million people in the United States living with hearing loss. Researchers from Massachusetts Eye and Ear, Harvard Medical School, and Harvard University programmed a new type of game that trained both mice and humans to enhance their ability to discriminate soft sounds in noisy backgrounds. Their findings will be published in PNAS Online Early Edition the week of June 9-13, 2014.

In the experiment, adult humans and mice with normal hearing were trained on a rudimentary ‘audiogame’ inspired by sensory foraging behavior that required them to discriminate changes in the loudness of a tone presented in a moderate level of background noise. Their findings suggest new therapeutic options for clinical populations that receive little benefit from conventional sensory rehabilitation strategies.

“Like the children’s game ‘hot and cold’, our game provided instantaneous auditory feedback that allowed our human and mouse subjects to hone in on the location of a hidden target,” said senior author Daniel Polley, Ph.D., director of the Mass. Eye and Ear’s Amelia Peabody Neural Plasticity Unit of the Eaton-Peabody Laboratories and assistant professor of otology and laryngology at Harvard Medical School. “Over the course of training, both species learned adaptive search strategies that allowed them to more efficiently convert noisy, dynamic audio cues into actionable information for finding the target. To our surprise, human subjects who mastered this simple game over the course of 30 minutes of daily training for one month exhibited a generalized improvement in their ability to understand speech in noisy background conditions. Comparable improvements in the processing of speech in high levels of background noise were not observed for control subjects who heard the sounds of the game but did not actually play the game.”
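The “hot and cold” mechanic Polley describes can be sketched as a mapping from the player’s distance to a hidden target onto the intensity of an auditory cue (a toy illustration, not the study’s actual game code):

```python
def feedback_level(player_pos, target_pos, max_dist=100.0):
    """Map the distance to a hidden target onto a 0..1 feedback intensity:
    'hotter' (closer) means a stronger auditory cue, 'colder' a weaker one."""
    dist = abs(player_pos - target_pos)
    return max(0.0, 1.0 - min(dist, max_dist) / max_dist)

# The cue strengthens as the search closes in on a target hidden at 40:
levels = [feedback_level(pos, 40) for pos in (0, 20, 35, 40)]
# levels rise monotonically, reaching 1.0 when the target is found
```

Learning to exploit a continuous, noisy cue like this, rather than a simple right/wrong signal, is what the authors suggest drove the adaptive search strategies in both species.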

The researchers recorded the electrical activity of neurons in auditory regions of the mouse cerebral cortex to gain some insight into how training might have boosted the ability of the brain to separate signal from noise. They found that training substantially altered the way the brain encoded sound.

In trained mice, many neurons became highly sensitive to faint sounds that signaled the location of the target in the game. Moreover, neurons displayed increased resistance to noise suppression; they retained an ability to encode faint sounds even under conditions of elevated background noise.

“Again, changes of this ilk were not observed in control mice that watched (and listened) to their counterparts play the game. Active participation in the training was required; passive listening was not enough,” Dr. Polley said.

These findings illustrate the utility of brain training exercises that are inspired by careful neuroscience research. “When combined with conventional assistive devices such as hearing aids or cochlear implants, ‘audiogames’ of the type we describe here may be able to provide the hearing impaired with an improved ability to reconnect to the auditory world. Of particular interest is the finding that brain training improved speech processing in noisy backgrounds – a listening environment where conventional hearing aids offer limited benefit,” concluded Dr. Jonathon Whitton, lead author on the paper. Dr. Whitton is a principal investigator at the Amelia Peabody Neural Plasticity Unit and affiliated with the Program in Speech Hearing Bioscience and Technology, Harvard–Massachusetts Institute of Technology Division of Health Sciences and Technology.

(Source: masseyeandear.org)

Filed under hearing hearing loss auditory cortex foraging noise suppression neuroscience science

288 notes

Study shows anaesthesia may harm memory
General anaesthesia before the age of one may impair memory later in childhood, and the effects may possibly be lifelong, a study said Monday.
This was the conclusion of scientists who compared the recollection skills of two groups of children — some who had undergone anaesthesia in infancy and others who had not.
The children, aged six to 11 and divided into two groups of 28 each, were tested over a period of 10 months for their ability to recollect specific drawings and the details in them.
The children who had been anaesthetised as babies had about 28 per cent less recollection on average than their peers, and scored 20 per cent lower in tests that assessed how much detail they could remember about the drawings.
"The children did not differ in tests measuring intelligence or behaviour, but those who had received anaesthesia had significantly lower recollection scores," said a media summary provided by the journal Neuropsychopharmacology, which published the results.
Read more


Filed under anaesthesia memory children psychology neuroscience science

152 notes

To recover consciousness, brain activity passes through newly detected states

Anesthesia makes otherwise painful procedures possible by derailing a conscious brain, rendering it incapable of sensing or responding to a surgeon’s knife. But little research exists on what happens when the drugs wear off.

(Image caption: Unconscious states. New findings suggest the anesthetized brain must pass through certain ‘way stations’ on the path back to consciousness. Above, the prevalence of particular clusters of brain activity states as recorded in rats that had been administered an anesthetic. The longest appear in red and the shortest in yellow and green.)

“I always found it remarkable that someone can recover from anesthesia, not only that you blink your eyes and can walk around, but you return to being yourself. So if you learned how to do something on Sunday and on Monday, you have surgery, and you wake up and you still know how to do it,” says Alexander Proekt, a visiting fellow in Don Pfaff’s Laboratory of Neurobiology and Behavior at Rockefeller University and an anesthesiologist at Weill Cornell Medical College. “It seemed like there ought to be some kind of guide or path for the system to follow.”

The obvious explanation is that as the anesthetic washes out of the body, electrical activity in the brain gradually returns to its conscious patterns. However, new research by Proekt and colleagues suggests the trip back is not so simple.

“Using statistical analysis, our research shows that the recovery from deep anesthesia is not a smooth, linear process. Instead, there are dynamic ‘way stations’ or states of activity the brain must temporarily occupy on the way to full recovery,” Pfaff says. “These results have implications for understanding how someone’s ability to recover consciousness can be disrupted by, for example, brain injury.”

Proekt, along with former postdoc Andrew Hudson, now an assistant professor in anesthesiology at the University of California, Los Angeles, and Diany Paola Calderon, a research associate in the lab, put rats “under” using the common medical and veterinary anesthetic isoflurane. As the rats recovered, the team monitored the electrical potential outside neurons, known as local field potentials (LFPs), in particular parts of the brain known, from previous electrophysiological and pharmacological studies, to be associated with wakefulness and anesthesia. These recordings gave them a sensitive handle on the activity of whole groups of neurons in particular parts of the thalamus and cortex.

In the awake brain, of both humans and rats, neurons generate electrical voltage that oscillates. Many of these oscillations together form a signal that appears as a squiggly line on a recording of brain activity, such as an LFP. When someone is asleep, under anesthesia, or in a coma, these oscillations occur more slowly, or at a low frequency. When he or she is awake, they speed up. The researchers examined the recordings from the rats’ brains to figure out how the electrical activity in these regions changed as they moved from anesthetized to awake.
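To make the slow-versus-fast distinction concrete, here is a minimal sketch of how one might classify a recording by its dominant oscillation frequency. The sampling rate, the 2 Hz and 40 Hz test signals, and the 8 Hz cutoff are all invented for illustration; they are not taken from the study.

```python
# Illustrative sketch: labelling a signal as slow-wave-like or awake-like
# by its peak spectral frequency, as one might do for an LFP trace.
import numpy as np

def peak_frequency(signal, fs):
    """Return the frequency (Hz) with the largest spectral amplitude."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 1000                      # samples per second (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
anesthetized_like = np.sin(2 * np.pi * 2 * t)   # slow 2 Hz oscillation
awake_like = np.sin(2 * np.pi * 40 * t)         # fast 40 Hz oscillation

for name, sig in [("slow", anesthetized_like), ("fast", awake_like)]:
    f = peak_frequency(sig, fs)
    print(name, f, "slow-wave" if f < 8 else "awake-like")
```

Real LFPs are of course broadband and noisy, so in practice the spectrum would be estimated over sliding windows rather than from a single clean sinusoid.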

“Recordings from each animal wound up having particular features that spontaneously appeared, suggesting their brain activity was abruptly transitioning through particular states,” Hudson says. “We analyzed the probability of a brain jumping from one state to another, and we found that certain states act as hubs through which the brain must pass to continue on its way to consciousness.” While the electrical activity in all the rats’ brains passed through these hubs, the precise path back to consciousness was not the same each time, the team reports today in the Proceedings of the National Academy of Sciences.
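The hub analysis described above can be sketched in a few lines: given a sequence of discretized activity states, estimate the probability of jumping from one state to another, then flag states that connect to many others. The state labels, the toy sequence, and the "many distinct neighbours" criterion are all hypothetical stand-ins for the paper's actual methods.

```python
# Hypothetical sketch: transition probabilities and crude hub detection
# for a sequence of discretized brain-activity states.
from collections import Counter

def transition_matrix(states):
    """Estimate P(next_state | current_state) from an observed sequence."""
    counts = Counter(zip(states, states[1:]))
    totals = Counter(states[:-1])
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}

def hub_states(states, min_partners=3):
    """Flag states with many distinct in- and out-neighbours."""
    out_n, in_n = {}, {}
    for a, b in zip(states, states[1:]):
        if a != b:
            out_n.setdefault(a, set()).add(b)
            in_n.setdefault(b, set()).add(a)
    return [s for s in set(states)
            if len(out_n.get(s, ())) >= min_partners
            and len(in_n.get(s, ())) >= min_partners]

# Toy sequence: state "C" acts as a crossroads among A, B, D and E.
seq = list("ACBCACDCECACBCDCEC")
P = transition_matrix(seq)
print(hub_states(seq))  # → ['C']
```

In the toy sequence, every path between the outer states runs through "C", so it is the one state flagged as a hub — a cartoon of the "way stations" the rats' brains passed through on the way back to consciousness.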

“These results suggest there is indeed an intrinsic way in which the unconscious brain finds its way back to consciousness. The anesthetic is just a tool for severely reducing brain activity in a way that we can control,” Hudson says.

In other scenarios, including coma caused by brain injury or neurological disease, the disruption to brain activity cannot be controlled, making these states much more difficult to study. However, the team’s results may help explain what is going on in these cases. “Maybe a pathway has shut down, or a brain structure that was key for full consciousness is no longer working. We don’t know yet, but our results suggest the possibility that under certain circumstances, someone may be theoretically capable of returning to consciousness but, due to the inability to transition through the hubs we have identified, his or her brain is unable to navigate the way back,” Calderon says.

(Source: newswire.rockefeller.edu)

Filed under consciousness brain activity anaesthesia neurons neuroscience science
