Neuroscience

Articles and news from the latest research reports.

Posts tagged science

162 notes


Has evolution given humans unique brain structures?

Humans have at least two functional networks in their cerebral cortex not found in rhesus monkeys. This means that new brain networks were likely added in the course of evolution from primate ancestor to human. These findings, based on an analysis of functional brain scans, were published in a study by neurophysiologist Wim Vanduffel (KU Leuven and Harvard Medical School) in collaboration with a team of Italian and American researchers.

Our lineage split from that of rhesus monkeys about 25 million years ago. Since then, brain areas have been added, have disappeared or have changed in function. This raises the question, ‘Has evolution given humans unique brain structures?’ Scientists have entertained the idea before, but conclusive evidence was lacking. By combining different research methods, we now have a first piece of evidence that humans may have unique cortical brain networks.

Professor Vanduffel explains: “We did functional brain scans in humans and rhesus monkeys at rest and while watching a movie to compare both the place and the function of cortical brain networks. Even at rest, the brain is very active. Different brain areas that are active simultaneously during rest form so-called ‘resting state’ networks. For the most part, these resting state networks in humans and monkeys are surprisingly similar, but we found two networks unique to humans and one unique network in the monkey.”

“When watching a movie, the cortex processes an enormous amount of visual and auditory information. The human-specific resting state networks react to this stimulation in a totally different way than any part of the monkey brain. This means that they also have a different function than any of the resting state networks found in the monkey. In other words, brain structures that are unique in humans are anatomically absent in the monkey, and no other structures in the monkey brain have an analogous function. Our unique brain areas are primarily located high at the back and at the front of the cortex and are probably related to specific human cognitive abilities, such as human-specific intelligence.”

The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to a particular task, thus allowing activity to be tracked.
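Resting-state networks of the kind described are identified computationally: regions whose BOLD signals rise and fall together over time are grouped into one network. A minimal sketch of that idea using simulated time series (the region names and signals are invented for illustration, not data from the study):

```python
import numpy as np

def resting_state_pairs(signals, threshold=0.8):
    """Return region pairs whose BOLD time series are strongly correlated.

    signals: dict mapping region name -> 1-D array of signal samples.
    Regions that fluctuate together (high Pearson correlation) are
    treated as belonging to the same functional network.
    """
    names = list(signals)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = np.corrcoef(signals[names[i]], signals[names[j]])[0, 1]
            if r >= threshold:
                pairs.append((names[i], names[j]))
    return pairs

# Toy data: two regions share a slow oscillation; a third is independent.
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(0)
base = np.sin(t)
signals = {
    "region_A": base + 0.1 * rng.standard_normal(t.size),
    "region_B": base + 0.1 * rng.standard_normal(t.size),
    "region_C": rng.standard_normal(t.size),
}
print(resting_state_pairs(signals))  # A and B co-fluctuate
```

Real analyses do this across thousands of voxels and then cluster the correlation matrix, but the underlying idea is the same pairwise co-fluctuation test.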

Filed under brain brain structure brain networks brain activity cerebral cortex primates evolution neuroscience science

360 notes

How human language could have evolved from birdsong

Linguistics and biology researchers propose a new theory on the deep roots of human speech.


“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”

Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.

“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.

The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.

Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.

“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”

These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.

“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”

A new chapter in the songbook

The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.

To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
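The two-layer distinction can be made concrete with a toy sketch: a fixed lexical core that different expression-layer operations rearrange into surface sentences. The functions and templates below are our own illustration, not the authors' formalism:

```python
# Lexical layer: the unchanging core content of the sentence.
lexical = {"subject": "Todd", "verb": "saw", "object": "a condor"}

# Expression layer: different arrangements of the same core.
def declarative(core):
    return f"{core['subject']} {core['verb']} {core['object']}."

def when_question(core):
    # "saw" -> bare form "see" under do-support; hard-coded for this toy.
    return f"When did {core['subject']} see {core['object']}?"

print(declarative(lexical))    # Todd saw a condor.
print(when_question(lexical))  # When did Todd see a condor?
```

The point of the sketch is that both outputs share one lexical dictionary; only the expression-layer function changes.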

Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.
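Song of this kind — fixed motifs, with occasional loops back to earlier material — is often modeled as a finite-state transition graph. A small sketch with an invented motif graph (the motif names and transitions are hypothetical, not a real finch's song):

```python
import random

# Hypothetical motif graph: each motif lists the motifs that may follow.
# The edge back from B to A gives the looping variation described for
# the Bengalese finch; None marks the end of the song.
song_graph = {
    "intro": ["A"],
    "A": ["B"],
    "B": ["C", "A"],   # may loop back to A
    "C": [None],
}

def sing(graph, start="intro", max_len=10, rng=None):
    """Generate one song by walking the motif graph."""
    rng = rng or random.Random(0)
    motif, song = start, []
    while motif is not None and len(song) < max_len:
        song.append(motif)
        motif = rng.choice(graph[motif])
    return song

print(sing(song_graph))
```

Each run produces a different motif sequence, but every sequence is drawn from the same small set of transitions — variation without a lexicon.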

By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of food to their peers; other primates can make a range of sounds, comprising warnings about predators and other messages.

Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite number of word strings. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.

“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.

As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”

Birds and bees

Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”

Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.

The researchers acknowledge that further empirical studies on the subject would be desirable.

“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”

Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.

MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.

“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”

(Source: web.mit.edu)

Filed under brain evolution linguistics communication language birdsong neuroscience science

68 notes


Children with auditory processing disorder may now have more treatment options

Several Kansas State University faculty members are helping children with auditory processing disorder receive better treatment.

Debra Burnett, assistant professor of family studies and human services and a licensed speech-language pathologist, started the Enhancing Auditory Responses to Speech Stimuli, or EARSS, program. The Kansas State University Speech and Hearing Center offers the program, which uses evidence-based practices to treat auditory processing disorder.

Other Kansas State University faculty members involved in the program include Melanie Hilgers, clinic director and instructor in family studies and human services, and Robert Garcia, audiologist and program director for communication sciences and disorders. Several graduate students also are involved.

Auditory processing disorder affects how the brain processes language. Children and adults with auditory processing disorder have normal hearing sensitivity and will pass a hearing test, but their brains do not appropriately process what they hear.

"A lot of therapy targets these skills," Burnett said. "It’s almost like re-laying the road in the brain that deals with auditory information. For whatever reason, it didn’t develop properly, so the therapy is about reworking these skills."

Burnett and collaborators started the program after attending a conference for the Kansas State Speech-Language-Hearing Association. The conference included a workshop on ways to incorporate speech-language pathologists into therapy for auditory processing disorder.

"In the past, it has kind of been in the domain of the audiologist to do all of the testing and all of the therapy," Burnett said. "Speech-language pathologists have been involved in some augmentative therapy, but not in the core therapy. That is all starting to change."

Last summer Burnett and her colleagues decided to start a Kansas State University therapy program that involves speech-language pathologists. Seven children were involved in the program during the summer, two during the fall semester, and one child has continued the program during the spring semester. All of the children have been diagnosed with auditory processing disorder; they range in age from 8 to 14 and are from north-central Kansas.

Before children begin the program, Burnett performs a pretest to determine their needs and the best way to approach therapy with them. A graduate student clinician, supervised by a licensed speech-language pathologist, meets with the children one hour per week to participate in activities that improve their auditory processing skills. Some of the activities include:

  • Phonemic training to address the brain’s ability to process speech sounds.
  • Words in Noise training to address the brain’s ability to process speech with background noise.
  • Phonemic synthesis training to address the brain’s ability to process speech sounds across words.

At the end of the program, Burnett performs a posttest to identify changes. The researchers have seen positive results so far: all of the children who took the posttest showed improvements in the treated areas, while the untreated areas showed no change.

"Based on these results, our program is showing early signs of being effective," Burnett said.

Filed under auditory processing disorder EARSS program hearing language processing neuroscience science

121 notes


Fear, anger or pain. Why do babies cry?

Spanish researchers have studied adults’ accuracy in the recognition of the emotion causing babies to cry. Eye movement and the dynamic of the cry play a key role in recognition.

It is not easy to know why a newborn cries, especially amongst first-time parents. Although the main reasons are hunger, pain, anger and fear, adults cannot easily recognise which emotion is the cause of the tears.

"Crying is a baby’s principal means of communicating its negative emotions and in the majority of cases the only way they have to express them," as explained to SINC by Mariano Chóliz, researcher at the University of Valencia.

Chóliz took part in a study, together with experts from the University of Murcia and the National University of Distance Education (UNED), that describes differences in the crying patterns of a sample of 20 babies aged 3 to 18 months for the three characteristic emotions: fear, anger and pain.

In addition, the team assessed how accurately adults recognise the emotion causing a baby to cry, analysing observers’ affective reactions to the crying.

According to the results published recently in the ‘Spanish Journal of Psychology’, the main differences manifest in eye activity and the dynamics of the cry.

"When babies cry because of anger or fear, they keep their eyes open but keep them closed when crying in pain," states the researcher.

As for the dynamics of the cry, both the gestures and the intensity of the cry increase gradually if the baby is angry. In contrast, the cry is at maximum intensity from the outset in the case of pain and fear.

Adults do not reliably identify which emotion is causing the cry, especially in the cases of anger and fear.

Nonetheless, “although the observers cannot recognise the cause properly, when babies cry because they are in pain, this causes a more intense affective reaction than when they cry because of anger or fear,” explains Chóliz.

For the experts, the fact that pain is the most easily recognisable emotion can have an adaptive explanation, since crying is a warning of a potentially serious threat to health or survival and thus requires the carer to respond urgently.

Anger, fear and pain

When a baby cries, facial muscle activity is characterised by lots of tension in the forehead, eyebrows or lips, opening of the mouth and raised cheeks. The researchers observed different patterns between the three negative emotions.

As Chóliz notes, when angry, most babies keep their eyes half-closed, gazing in no apparent direction or staring fixedly and intently. Their mouth is open or half-open, and the intensity of their cry increases progressively.

In the case of fear, the eyes remain open almost all the time. Furthermore, at times the infants have a penetrating look and move their head backwards. Their cry seems to be explosive after a gradual increase in tension.

Lastly, pain manifests as eyes that remain closed almost constantly; when they do open, it is only for a few moments, with a distant gaze. In addition, there is a high level of tension around the eyes and the brow remains furrowed. The cry begins suddenly and at maximum intensity, immediately after the stimulus.
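Taken together, the cues the researchers describe — eye openness and how the cry's intensity develops — amount to a simple decision rule. A minimal sketch, with boolean features of our own naming rather than the study's actual measures:

```python
def classify_cry(eyes_open, intensity_gradual):
    """Toy rule of thumb based on the cues the study describes.

    Pain: eyes mostly closed, cry starts suddenly at maximum intensity.
    Anger: eyes (half-)open, intensity builds gradually.
    Fear: eyes open, explosive cry after rising tension.
    """
    if not eyes_open:
        return "pain"
    if intensity_gradual:
        return "anger"
    return "fear"

print(classify_cry(eyes_open=False, intensity_gradual=False))  # pain
print(classify_cry(eyes_open=True, intensity_gradual=True))    # anger
print(classify_cry(eyes_open=True, intensity_gradual=False))   # fear
```

Notably, the rule for pain depends on a single feature (closed eyes), which fits the study's finding that pain is the easiest emotion for adults to recognise.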

Filed under infants emotions emotional response cry communication eye activity psychology neuroscience science

52 notes


Finding “Mr. Right,” How Insects Sniff Out the Perfect Mate

Unlike humans, most insects rely on their sense of smell when looking for a mate. Scientists have found that sex pheromones play an important role in finding a suitable partner of the same species; yet, little is known about the evolution and genetic basis of these alluring smells.

A team of researchers from Arizona State University and Germany found that one wasp species has evolved a specific scent, or pheromone, which keeps it from mating with other species. In addition, they discovered that the genetic basis of the new scent is simple, which allows the males to change an existing scent into a new one. Over time, the females recognize and use this new scent to distinguish their own species from others.

Scientists from ASU, the University of Regensburg, the Zoological Research Museum Alexander Koenig Bonn, and the Technical University Darmstadt in Germany, present their findings in an article published Feb. 13 online in the journal Nature.

Filed under mating evolution wasps pheromones smell genetics neuroscience science

141 notes


Newt sequencing may set back efforts to regrow human limbs

The ability of some animals to regenerate tissue is generally considered to be an ancient quality of all multicellular animals. A genetic analysis of newts, however, now suggests that it evolved much more recently.

Tiny and delicate it may be, but the red spotted newt (Notophthalmus viridescens) has tissue-engineering skills that far surpass the most advanced biotechnology labs. The newt can regenerate lost tissue, including heart muscle, components of its central nervous system and even the lens of its eye.

Doctors hope that this skill relies on a basic genetic program that is common — albeit often in latent form — to all animals, including mammals, so that they can harness it in regenerative medicine. Mice, for instance, are able to generate new heart cells after myocardial injury.

The newt study, by Thomas Braun at the Max Planck Institute for Heart and Lung Research in Bad Nauheim, Germany, and his colleagues, suggests that it might not be so simple.

Attempts to analyse the genetics of newts in the same way as for humans, mice and flies have so far been hampered by the enormous size of the newt genome, which is ten times larger than our own. Braun and his colleagues therefore looked at the RNA produced when genes are expressed — known as the transcriptome — and used three analytical techniques to compile their data.

The team compiled the first catalogue of all the RNA transcripts expressed in N. viridescens, looking at both primary and regenerated tissue in the heart, limbs and eyes of both embryos and larvae.

The researchers found more than 120,000 RNA transcripts, of which they estimate 15,000 code for proteins. Of those, 826 were unique to the newt. What is more, several of those sequences were expressed at different levels in regenerated tissue than in primary tissue. Their results are published in Genome Biology.
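To put the reported counts in proportion, here is the back-of-the-envelope arithmetic the figures above imply (the numbers come from the study as quoted; the percentages are simply derived from them):

```python
# Proportions implied by the transcript counts reported in the study.
total_transcripts = 120_000   # RNA transcripts found
protein_coding = 15_000       # estimated to code for proteins
newt_specific = 826           # unique to the newt

print(round(protein_coding / total_transcripts * 100, 1))  # 12.5 (% protein-coding)
print(round(newt_specific / protein_coding * 100, 1))      # 5.5 (% of those unique to the newt)
```

In other words, only a small slice of the protein-coding transcripts — about one in twenty — turned out to be newt-specific, which is what makes their differential expression in regenerated tissue notable.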

Filed under newt regenerative medicine regeneration tissue genomics genetics science

259 notes

Microchip Restores Vision
A wirelessly controlled microchip has restored limited vision to patients in a small experimental trial, report researchers in the Proceedings of the Royal Society B.

The German medical technology company Retina Implant developed the artificial retina, which was implanted in one eye of each participant as part of a company-funded trial. The patients had all been blinded by retinitis pigmentosa or other inherited diseases that cause the eye’s light-detecting rod and cone cells, called photoreceptors, to degenerate and die over time. In theory, the device could also benefit patients with degenerative eye diseases such as macular degeneration, says Katarina Stingl, a clinical scientist and ophthalmologist at the University of Tübingen, who led the study.

With the implant, eight of the nine patients in the trial could perceive light. Five were able to detect moving patterns on a screen as well as everyday objects such as cutlery, doorknobs, and telephones. Three were able to read letters. Seeing their own hands and the faces of their loved ones made the biggest impression on the patients, says Stingl. “The very personal things, such as if a mouth is smiling, or the shape of a nose, are the most exciting for them,” she says.
The implanted device consists of a three-millimeter-square chip with 1,500 pixels. Each pixel contains a photodiode, which picks up incoming light, and an electrode and an amplification circuit, which boosts the weak electrical activity given off by the diode. A thin cable that runs through the eye socket connects the implant to a small coil implanted under the skin behind the ear, which means most of the system is invisible. The coil under the skin is powered by an external battery pack that can be held behind the ear with magnets.
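The per-pixel signal chain described above — photodiode, amplifier, electrode — can be sketched in a few lines. This is an illustrative toy model, not Retina Implant’s actual circuitry: the gain, current ceiling, and the 50 × 30 grid shape are assumptions chosen only so the pixel count matches the reported 1,500.

```python
# Toy sketch of one subretinal-chip pixel: photodiode picks up light,
# an amplifier boosts the weak signal, and the electrode output is
# capped at a safe stimulation current. All parameters are illustrative.
def pixel_response(light_intensity, gain=50.0, max_current=2.0):
    photodiode_signal = light_intensity * 0.01   # weak raw photodiode signal
    amplified = photodiode_signal * gain         # amplification circuit
    return min(amplified, max_current)           # clipped electrode output

# 1,500 pixels on the chip; assume a 50 x 30 layout for illustration.
grid = [[pixel_response(0.5) for _ in range(50)] for _ in range(30)]
n_pixels = sum(len(row) for row in grid)
print(n_pixels)  # 1500 electrode outputs per frame
```

The key design point survives even in the toy version: because each pixel senses light where it sits on the retina, no external camera or eye-tracking is needed — the eye’s own movements scan the scene.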

The results follow an announcement earlier this week from California-based Second Sight that its Argus II system was approved for use in the United States. The two technologies take different approaches to restoring vision in patients with retinal degeneration. In Second Sight’s system, a camera mounted on eyeglasses picks up images that are converted into electrical signals by a small wearable computer. That data is then sent to a 60-electrode chip to stimulate neurons in the retina. The Retina Implant device instead attempts to directly replace the lost photoreceptors, allowing the remaining retinal circuitry to do the data processing.

Filed under vision retinal degeneration subretinal electronic implant electronic implants retinal diseases neuroscience science

228 notes

Bioengineers print ears that look and act like the real thing
Cornell bioengineers and physicians have created an artificial ear that looks and acts like a natural ear, giving new hope to thousands of children born with a congenital deformity called microtia.
In a study published online Feb. 20 in PLOS One, Cornell biomedical engineers and Weill Cornell Medical College physicians described how 3-D printing and injectable gels made of living cells can fashion ears that are practically identical to a human ear. Over a three-month period, these flexible ears grew cartilage to replace the collagen that was used to mold them.
"This is such a win-win for both medicine and basic science, demonstrating what we can achieve when we work together," said co-lead author Lawrence Bonassar, associate professor of biomedical engineering.
The novel ear may be the solution reconstructive surgeons have long wished for to help children born with ear deformity, said co-lead author Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell.
"A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer," Spector said.
Replacement ears are usually constructed with materials that have a Styrofoam-like consistency, or sometimes, surgeons build ears from a patient’s harvested rib. The rib option is challenging and painful for children, and the ears rarely look completely natural or perform well, Spector said.
To make the ears, Bonassar and colleagues started with a digitized 3-D image of a human subject’s ear and converted the image into a digitized “solid” ear using a 3-D printer to assemble a mold.
They injected the mold with collagen derived from rat tails, and then added 250 million cartilage cells from the ears of cows. The resulting Cornell-developed, high-density gel has a consistency similar to Jell-O when removed from the mold. The collagen served as a scaffold upon which cartilage could grow.
The process is also fast, Bonassar added: “It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted.”
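Adding up the durations Bonassar lists gives a rough end-to-end timeline. The only assumption here is pinning his “several days” of culture at three days; the rest comes straight from the quote:

```python
# Rough timeline implied by the fabrication steps Bonassar describes.
# "Several days" of culture is an assumed placeholder (3 days here).
HOURS_PER_DAY = 24

steps_hours = {
    "design mold": 0.5 * HOURS_PER_DAY,   # "half a day"
    "print mold":  1.0 * HOURS_PER_DAY,   # "a day or so"
    "inject gel":  0.5,                   # "30 minutes"
    "remove ear":  0.25,                  # "15 minutes"
    "culture":     3.0 * HOURS_PER_DAY,   # "several days" (assumed 3)
}
total_days = sum(steps_hours.values()) / HOURS_PER_DAY
print(round(total_days, 1))  # 4.5 days from design to implant-ready
```

Under that assumption, an ear goes from a digital scan to implant-ready tissue in well under a week — a striking contrast with rib-graft reconstruction.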
The incidence of microtia, which is when the external ear is not fully developed, varies from almost 1 to more than 4 per 10,000 births each year. Many children born with microtia have an intact inner ear, but experience hearing loss due to the missing external structure.
Bonassar and Spector have been collaborating on bioengineered human replacement parts since 2007. Bonassar has also worked with Weill Cornell neurological surgeon Dr. Roger Härtl on bioengineered disc replacements using some of the same techniques demonstrated in the PLOS One study.
The researchers specifically work on replacement human structures that are primarily made of cartilage — joints, trachea, spine, nose — because cartilage does not need to be vascularized with a blood supply in order to survive.
They are now looking at ways to expand populations of human ear cartilage cells in the laboratory so that these cells can be used in the mold, instead of cow cartilage.
"Using human cells, specifically those from the same patient, would reduce any possibility of rejection," Spector said.
He added that the best time to implant a bioengineered ear on a child would be when they are about 5 or 6 years old. At that age, ears are 80 percent of their adult size.
If all future safety and efficacy tests work out, it might be possible to try the first human implant of a Cornell bioengineered ear in as little as three years, Spector said.

Filed under microtia artificial ear ear replacement implants cartilage medicine neuroscience science

57 notes

Smoking damages mouse brains
Cigarette smoke damages the lungs, but it also wreaks havoc in the brain, a study in mice suggests. Signs of Alzheimer’s disease increased in the brains of animals that breathed cigarette smoke for four months, scientists report February 19 in Nature Communications.
The relationship between smoking and Alzheimer’s in people is murky. Some evidence from the 1990s suggested that smoking actually protected people against Alzheimer’s, presumably by stimulating nicotine-detecting brain cells. More recent studies have found that smoking ups the odds of the disease.
To see what cigarettes do to the brain, scientists led by Claudio Soto of the University of Texas Medical School at Houston turned to mice. In animals bred to show signs of Alzheimer’s, cigarette smoke (one cigarette’s worth in air the mouse breathed for an hour, five days a week) worsened aspects of the disease. Compared with mice that weren’t exposed, mice exposed to smoke had several signs of Alzheimer’s: they had more amyloid beta plaques, a higher load of abnormal tau protein and more severe inflammation in their brains. The scientists don’t know yet how cigarette smoke causes these changes, or whether a similar process happens in people.
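The exposure protocol above implies a cumulative dose that is easy to tally. Taking a month as roughly 4.33 weeks (an assumption; the paper’s exact schedule may differ):

```python
# Cumulative smoke exposure implied by the protocol: one hour per
# session, five sessions a week, for four months (~4.33 weeks/month).
hours_per_session = 1
sessions_per_week = 5
weeks = 4 * 4.33  # four months, assumed ~4.33 weeks per month

total_hours = hours_per_session * sessions_per_week * weeks
print(round(total_hours))  # 87 hours of smoke exposure overall
```

So the Alzheimer’s-like changes emerged after on the order of 85–90 total hours of smoke exposure spread over four months.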

Filed under alzheimer's disease cigarette smoke brain brain cells amyloid beta animal studies neuroscience science

80 notes

Scientists make older adults less forgetful in memory tests
Scientists at Baycrest Health Sciences’ Rotman Research Institute (RRI) and the University of Toronto’s Psychology Department have found compelling evidence that older adults can eliminate forgetfulness and perform as well as younger adults on memory tests.
Scientists used a distraction learning strategy to help older adults overcome age-related forgetting and boost their performance to that of younger adults. Distraction learning sounds like an oxymoron, but a growing body of science is showing that older brains are adept at processing irrelevant and relevant information in the environment, without conscious effort, to aid memory performance.
“Older brains may be doing something very adaptive with distraction to compensate for weakening memory,” said Renée Biss, lead investigator and PhD student. “In our study we asked whether distraction can be used to foster memory-boosting rehearsal for older adults. The answer is yes!”
“To eliminate age-related forgetfulness across three consecutive memory experiments and help older adults perform like younger adults is dramatic and to our knowledge a totally unique finding,” said Lynn Hasher, senior scientist on the study and a leading authority in attention and inhibitory functioning in younger and older adults. “Poor regulation of attention by older adults may actually have some benefits for memory.”
The findings, published online in Psychological Science ahead of print, have intriguing implications. Learning strategies could be designed specifically for mature, older students, and senior housing could be equipped with relevant visual cues throughout the living environment — cues that serve as rehearsal opportunities for things like an upcoming appointment or medications to take, even when residents do not consciously attend to them.

Filed under cognitive decline memory learning psychology neuroscience science
