Neuroscience

Articles and news from the latest research reports.

Study IDs new cause of brain bleeding immediately after stroke

By discovering a new mechanism that allows blood to enter the brain immediately after a stroke, researchers at UC Irvine and the Salk Institute have opened the door to new therapies that may limit or prevent stroke-induced brain damage.

A complex and devastating neurological condition, stroke is the fourth-leading cause of death and a leading cause of disability in the U.S. The blood-brain barrier is severely damaged in a stroke and lets blood-borne material into the brain, causing the permanent deficits in movement and cognition seen in stroke patients.

Dritan Agalliu, assistant professor of developmental & cell biology at UC Irvine, and Axel Nimmerjahn of the Salk Institute for Biological Studies developed a novel transgenic mouse strain in which they use a fluorescent tag to see the tight, barrier-forming junctions between the cells that make up blood vessels in the central nervous system. This allows them to perceive dynamic changes in the barrier during and after strokes in living animals.

While observing that barrier function is rapidly impaired after a stroke (within six hours), they unexpectedly found that this early barrier failure is not due to the breakdown of tight junctions between blood vessel cells, as had previously been suspected. In fact, junction deterioration did not occur until two days after the event.

Instead, the scientists reported a dramatic increase in the carrier protein serum albumin flowing directly into brain tissue. The protein travels through the cells that compose blood vessels – endothelial cells – via a specialized transport system that normally operates only in non-brain vessels or in immature vessels within the central nervous system. The researchers’ work indicates that this transport system underlies the initial failure of the barrier, permitting entry of blood material into the brain within the first six hours after a stroke.
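As a worked illustration of that two-phase timeline, here is a small Python sketch. Only the six-hour and two-day figures come from the study; the logistic curve shapes and every rate constant are illustrative assumptions, not measured values.

```python
import math

def transcytosis_leak(t_hours):
    """Early route: albumin carried through endothelial cells, modeled
    as a logistic ramp that is nearly complete within six hours."""
    return 1.0 / (1.0 + math.exp(-(t_hours - 3.0)))

def junction_leak(t_hours):
    """Late route: tight-junction breakdown, modeled as negligible until
    roughly two days (48 hours) after the stroke."""
    return 1.0 / (1.0 + math.exp(-(t_hours - 48.0) / 4.0))

for t in (0, 6, 24, 48, 72):
    print(f"t = {t:2d} h: transcytosis {transcytosis_leak(t):.2f}, "
          f"junctions {junction_leak(t):.2f}")
```

In this toy model the transcytosis route is already near its maximum at six hours while the junction route is still essentially closed, matching the order of events the imaging revealed.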

“These findings suggest new therapeutic directions aimed at regulating flow through endothelial cells in the barrier after a stroke occurs,” Agalliu said, “and any such therapies have the potential to reduce or prevent stroke-induced damage in the brain.”

His team is currently using genetic techniques to block degradation of the tight junctions between endothelial cells in mice and examining the effect on stroke progression. Early post-stroke control of this specialized transport system identified by the Agalliu and Nimmerjahn labs may spur the discovery of imaging methods or biomarkers in humans to detect strokes as early as possible and thereby minimize damage.

Filed under: stroke, blood-brain barrier, brain damage, endothelial cells, brain tissue, neuroscience, science

New Study Suggests a Better Way to Deal with Bad Memories

What’s one of your worst memories? How did it make you feel? According to psychologists, remembering the emotions felt during a negative personal experience, such as how sad you were or how embarrassed you felt, can lead to emotional distress, especially when you can’t stop thinking about it. 

(Image: iStockphoto)

When these negative memories creep up, thinking about the context of the memories, rather than how you felt, is a relatively easy and effective way to alleviate the negative effects of these memories, a new study suggests.

Researchers at the Beckman Institute at the University of Illinois, led by psychology professor Florin Dolcos of the Cognitive Neuroscience Group, studied the behavioral and neural mechanisms of focusing away from emotion during recollection of personal emotional memories, and found that thinking about the contextual elements of the memories significantly reduced their emotional impact.

“Sometimes we dwell on how sad, embarrassed, or hurt we felt during an event, and that makes us feel worse and worse. This is what happens in clinical depression—ruminating on the negative aspects of a memory,” Dolcos said. “But we found that instead of thinking about your emotions during a negative memory, looking away from the worst emotions and thinking about the context, like a friend who was there, what the weather was like, or anything else non-emotional that was part of the memory, will rather effortlessly take your mind away from the unwanted emotions associated with that memory. Once you immerse yourself in other details, your mind will wander to something else entirely, and you won’t be focused on the negative emotions as much.”

This simple strategy, the study suggests, is a promising alternative to other emotion-regulation strategies, like suppression or reappraisal. 

“Suppression is bottling up your emotions, trying to put them away in a box. This is a strategy that can be effective in the short term, but in the long run, it increases anxiety and depression,” explains Sanda Dolcos, co-author on the study and postdoctoral research associate at the Beckman Institute and in the Department of Psychology. 

“Another otherwise effective emotion regulation strategy, reappraisal, or looking at the situation differently to see the glass half full, can be cognitively demanding. The strategy of focusing on non-emotional contextual details of a memory, on the other hand, is as simple as shifting the focus in the mental movie of your memories and then letting your mind wander.”

Not only does this strategy allow for effective short-term emotion regulation, but it has the possibility of lessening the severity of a negative memory with prolonged use.

In the study, participants were asked to share their most emotional negative and positive memories, such as the birth of a child, winning an award, or failing an exam, explained Sanda Dolcos. Several weeks later participants were given cues that would trigger their memories while their brains were being scanned using magnetic resonance imaging (MRI). Before each memory cue, the participants were asked to remember each event by focusing on either the emotion surrounding the event or the context. For example, if the cue triggered a memory of a close friend’s funeral, thinking about the emotional context could consist of remembering your grief during the event. If you were asked to remember contextual elements, you might instead remember what outfit you wore or what you ate that day.

“Neurologically, we wanted to know what happened in the brain when people were using this simple emotion-regulation strategy to deal with negative memories or enhance the impact of positive memories,” explained Ekaterina Denkova, first author of the report. “One thing we found is that when participants were focused on the context of the event, brain regions involved in basic emotion processing were working together with emotion control regions in order to, in the end, reduce the emotional impact of these memories.” 

Using this strategy promotes healthy functioning not only by reducing the negative impact of remembering unwanted memories, but also by increasing the positive impact of cherished memories, Florin Dolcos said. 

In the future, the researchers hope to determine if this strategy is effective in lessening the severity of negative memories over the long term. They also hope to work with clinically depressed or anxious participants to see if this strategy is effective in alleviating these psychiatric conditions. 

These results were published in Social Cognitive and Affective Neuroscience.

(Source: beckman.illinois.edu)

Filed under: suppression, prefrontal cortex, memories, autobiographical memory, emotion regulation, emotion, psychology, neuroscience, science

Rapid whole-brain imaging with single cell resolution

A major challenge of systems biology is understanding how phenomena at the cellular scale correlate with activity at the organism level. A concerted effort has been made especially in the brain, as scientists aim to clarify how neural activity is translated into consciousness and other complex brain functions. One of the technologies needed is whole-brain imaging at single-cell resolution. Such imaging normally involves preparing a highly transparent sample that minimizes light scattering and then imaging fluorescently tagged neurons slice by slice to produce a 3D representation. However, limitations in current methods have prevented comprehensive study of this relationship. A new high-throughput method, CUBIC (Clear, Unobstructed Brain Imaging Cocktails and Computational Analysis), published in Cell, is a great leap forward: it offers unprecedentedly rapid whole-brain imaging at single-cell resolution and a simple aminoalcohol-based protocol for clearing the brain sample and rendering it transparent.

In combination with light sheet fluorescence microscopy, CUBIC was tested for rapid imaging of a number of mammalian systems, such as mouse and primate, showing its scalability for brains of different size. Additionally, it was used to acquire new spatial-temporal details of gene expression patterns in the hypothalamic circadian rhythm center. Moreover, by combining images taken from opposite directions, CUBIC enables whole brain imaging and direct comparison of brains in different environmental conditions.
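The opposite-direction trick in the last sentence can be sketched in a few lines of Python with NumPy. The depth-weighted linear blend below is a hypothetical stand-in for illustration; the actual CUBIC registration and fusion pipeline is more sophisticated.

```python
import numpy as np

def fuse_opposite_stacks(top_stack, bottom_stack):
    """Blend two (z, y, x) image stacks taken from opposite directions:
    trust the top view near its own surface (z = 0) and the bottom view,
    flipped into the same orientation, near the far end."""
    bottom_aligned = bottom_stack[::-1]          # flip z to match top view
    n_slices = top_stack.shape[0]
    weights = np.linspace(1.0, 0.0, n_slices)[:, None, None]
    return weights * top_stack + (1.0 - weights) * bottom_aligned

rng = np.random.default_rng(0)
top = rng.random((5, 4, 4))       # stand-in for a light-sheet stack
bottom = rng.random((5, 4, 4))    # same volume imaged from the other side
fused = fuse_opposite_stacks(top, bottom)
```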

CUBIC overcomes a number of obstacles that limited previous methods. First, its clearing protocol involves serially immersing fixed tissue in just two reagents for a relatively short time. Second, CUBIC is compatible with many fluorescent probes because of its low quenching, which allows probes with longer wavelengths to be used, reduces concern about light scattering during whole-brain imaging, and at the same time enables multi-color imaging. Finally, it is highly reproducible and scalable. While other methods have achieved some of these qualities, CUBIC is the first to realize them all.

CUBIC provides information on previously unattainable 3D gene expression profiles and neural networks at the systems level. Because of its rapid and high-throughput imaging, CUBIC offers extraordinary opportunity to analyze localized effects of genomic editing. It also is expected to identify neural connections at the whole brain level. In fact, last author Hiroki Ueda is optimistic about further application to even larger mammalian systems. “In the near future, we would like to apply CUBIC technology to whole-body imaging at single cell resolution.”

Filed under: CUBIC, neural activity, brain imaging, gene expression, genetics, neuroscience, science

Researchers Discover the Seat of Sex and Violence in the Brain

As reported in a paper published online today in the journal Nature, Caltech biologist David J. Anderson and his colleagues have genetically identified neurons that control aggressive behavior in the mouse hypothalamus, a structure that lies deep in the brain (orange circle in the image). Researchers have long known that innate social behaviors like mating and aggression are closely related, but the specific neurons in the brain that control these behaviors had not been identified until now.

The interdisciplinary team of graduate students and postdocs, led by Caltech senior research fellow Hyosang Lee, found that if these neurons are strongly activated by pulses of light, using a method called optogenetics, a male mouse will attack another male or even a female. However, weaker activation of the same neurons will trigger sniffing and mounting: mating behaviors. In fact, the researchers could switch the behavior of a single animal from mounting to attack by gradually increasing the strength of neuronal stimulation during a social encounter (inhibiting the neurons, in contrast, stops these behaviors dead in their tracks).

These results suggest that the level of activity within the population of neurons may control the decision between mating and fighting.  
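The activity-level idea can be caricatured as a simple threshold rule. Below is a minimal Python sketch; the 0.5 boundary and the 0-to-1 scale are arbitrary illustrative placeholders (the study reports graded switching, not these particular numbers).

```python
def predicted_behavior(stimulation):
    """Map optogenetic stimulation strength (0 to 1) to a behavior label.
    The 0.5 boundary is a hypothetical placeholder, not a measured value."""
    if stimulation <= 0.0:
        return "none"        # inhibiting the neurons stops both behaviors
    if stimulation < 0.5:
        return "mounting"    # weak activation: mating behaviors
    return "attack"          # strong activation: aggression

# Gradually ramping up stimulation switches the same animal's behavior:
ramp = [predicted_behavior(s) for s in (0.0, 0.2, 0.4, 0.6, 0.8)]
```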

The neurons initially were identified because they express a protein receptor for the hormone estrogen, reinforcing the view that estrogen plays an important role in the control of male aggression, contrary to popular opinion. Because the human brain contains a hypothalamus that is structurally similar to that in the mouse, these results may be relevant to human behavior as well.

Filed under: neurons, hypothalamus, aggression, mating, estrogen, optogenetics, neuroscience, science

For resetting circadian rhythms, neural cooperation is key

Fruit flies are pretty predictable when it comes to scheduling their days, with peaks of activity at dawn and dusk and rest times in between. Now, researchers reporting in the Cell Press journal Cell Reports on April 17th have found that the clusters of brain cells responsible for each of those activity peaks—known as the morning and evening oscillators, respectively—don’t work alone. For flies’ internal clocks to follow the sun, cooperation is key.

"Without proper synchronization, circadian clocks are useless or can even be deleterious to organisms," said Patrick Emery from the University of Massachusetts Medical School. "In addition, most organisms have to detect changes in day length to adapt their rhythms to seasons.

"Our work clearly shows that light is detected by individual neurons that then communicate with each other to properly define the phase of circadian behavior," he added. "This emphasizes the importance of neural interaction in the generation of properly phased circadian rhythms."

In the brains of Drosophila fruit flies, there are approximately 150 circadian neurons, explained Emery and coauthor Yong Zhang, including a small group of morning oscillators that promote activity early in the day and another group of evening oscillators that promote activity later. Morning oscillators also set the pace of molecular rhythms in other parts of the brain, and hence the phase of circadian behavior. Scientists had thought they did this by relying heavily on their own sensitivity to light—what Emery calls “cell-autonomous photoreception.” Indeed, these cells do express fruit flies’ dedicated photoreceptor Cryptochrome (CRY). But recent evidence suggested that something was missing from that simple view.

In the new study, the researchers manipulated CRY’s ability to function through another clock component, known as JET (short for Jetlag), in different circadian neurons and watched what happened. The studies show that light detection by the morning oscillators isn’t enough to keep flies going about their business in a timely way. They need those evening oscillators too.
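One way to picture why a lone light-sensing cluster is not enough is a toy phase model in Python: each light pulse shifts both clusters equally, so it cannot close a phase offset on its own, and only mutual coupling pulls the morning (M) and evening (E) oscillators into a common phase. All constants here are illustrative assumptions, not parameters from the study.

```python
import math

def step(phase_m, phase_e, light_shift, coupling=0.3):
    """One update: a light-induced phase shift hits both clusters, then
    mutual communication pulls their phases toward each other."""
    phase_m += light_shift
    phase_e += light_shift
    pull = coupling * math.sin(phase_e - phase_m)
    return phase_m + pull, phase_e - pull

# Start the two clusters out of phase; repeated light pulses alone would
# preserve the offset, but coupling closes it.
m, e = 0.0, 1.0
for _ in range(20):
    m, e = step(m, e, light_shift=0.1)
```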

JET’s role is bigger than expected as well. In addition to enabling cell-autonomous light sensing, the protein also allows distinct circadian neurons to talk to each other in rapid fashion after light exposure, although the researchers don’t yet know how.

The new model also suggests that flies and mammals have more similarities than had been appreciated when it comes to synchronizing their activities to the sun, the researchers say. In mammals, specific neurons of the brain’s circadian pacemaker (known as the suprachiasmatic nucleus, or SCN) receive light input from the retina. Those cells then communicate with the other pacemaker neurons, resetting the circadian network as a whole.

Filed under: circadian rhythms, fruit flies, jetlag, photoreceptors, neurons, neuroscience, science

In a cloning first, scientists create stem cells from adults

Scientists have moved a step closer to the goal of creating stem cells perfectly matched to a patient’s DNA in order to treat diseases, they announced on Thursday, creating patient-specific cell lines out of the skin cells of two adult men.

The advance, described online in the journal Cell Stem Cell, is the first time researchers have achieved “therapeutic cloning” of adults. Technically called somatic-cell nuclear transfer, therapeutic cloning means producing embryonic cells genetically identical to a donor, usually for the purpose of using those cells to treat disease.

Filed under: stem cells, somatic cell nuclear transfer, iPSCs, regenerative medicine, medicine, health, science

Is Parkinson’s an Autoimmune Disease?

The cause of neuronal death in Parkinson’s disease is still unknown, but a new study proposes that neurons may be mistaken for foreign invaders and killed by the person’s own immune system, similar to the way autoimmune diseases like type I diabetes, celiac disease, and multiple sclerosis attack the body’s cells. The study was published April 16, 2014, in Nature Communications.

(Image caption: Four images of a neuron from a human brain show that neurons produce a protein (in red) that can direct an immune attack against the neuron (green). Credit: Carolina Cebrian.)

“This is a new, and likely controversial, idea in Parkinson’s disease; but if true, it could lead to new ways to prevent neuronal death in Parkinson’s that resemble treatments for autoimmune diseases,” said the study’s senior author, David Sulzer, PhD, professor of neurobiology in the departments of psychiatry, neurology, and pharmacology at Columbia University College of Physicians & Surgeons.

The new hypothesis about Parkinson’s emerges from other findings in the study that overturn a deep-seated assumption about neurons and the immune system.

For decades, neurobiologists have thought that neurons are protected from attacks from the immune system, in part, because they do not display antigens on their cell surfaces. Most cells, if infected by virus or bacteria, will display bits of the microbe (antigens) on their outer surface. When the immune system recognizes the foreign antigens, T cells attack and kill the cells. Because scientists thought that neurons did not display antigens, they also thought that the neurons were exempt from T-cell attacks.

“That idea made sense because, except in rare circumstances, our brains cannot make new neurons to replenish ones killed by the immune system,” Dr. Sulzer says. “But, unexpectedly, we found that some types of neurons can display antigens.”

Cells display antigens with special proteins called MHCs. Using postmortem brain tissue donated to the Columbia Brain Bank by healthy donors, Dr. Sulzer and his postdoc Carolina Cebrián, PhD, first noticed—to their surprise—that MHC-1 proteins were present in two types of neurons. These two types of neurons—one of which is dopamine neurons in a brain region called the substantia nigra—degenerate during Parkinson’s disease.

To see if living neurons use MHC-1 to display antigens (and not for some other purpose), Drs. Sulzer and Cebrián conducted in vitro experiments with mouse neurons and human neurons created from embryonic stem cells. The studies showed that under certain circumstances—including conditions known to occur in Parkinson’s—the neurons use MHC-1 to display antigens. Among the different types of neurons tested, the two types affected in Parkinson’s were far more responsive than other neurons to signals that triggered antigen display.

The researchers then confirmed that T cells recognized and attacked neurons displaying specific antigens.
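The recognition logic in that experiment can be written down as a small schematic in Python. The cell records and the antigen name below are hypothetical placeholders for illustration only:

```python
def t_cell_attacks(cell, recognized_antigens):
    """A T cell attacks only if the cell displays antigens via MHC-1
    and at least one displayed antigen is one the T cell recognizes."""
    if not cell["displays_mhc1"]:
        return False               # nothing displayed, nothing attacked
    return any(a in recognized_antigens for a in cell["antigens"])

ordinary_neuron = {"displays_mhc1": False, "antigens": ["ag_x"]}
stressed_neuron = {"displays_mhc1": True, "antigens": ["ag_x"]}

safe = t_cell_attacks(ordinary_neuron, {"ag_x"})      # classic assumption
attacked = t_cell_attacks(stressed_neuron, {"ag_x"})  # the new finding
```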

The results raise the possibility that Parkinson’s is partly an autoimmune disease, Dr. Sulzer says, but more research is needed to confirm the idea.

“Right now, we’ve showed that certain neurons display antigens and that T cells can recognize these antigens and kill neurons,” Dr. Sulzer says, “but we still need to determine whether this is actually happening in people. We need to show that there are certain T cells in Parkinson’s patients that can attack their neurons.”

If the immune system does kill neurons in Parkinson’s disease, Dr. Sulzer cautions that it is not the only thing going awry in the disease. “This idea may explain the final step,” he says. “We don’t know if preventing the death of neurons at this point will leave people with sick cells and no change in their symptoms, or not.”

(Source: newsroom.cumc.columbia.edu)

Filed under: parkinson's disease, autoimmune diseases, immune system, neurons, antigens, neuroscience, science

Cognitive scientists use ‘I spy’ to show spoken language helps direct children’s eyes
In a new study, Indiana University cognitive scientists Catarina Vales and Linda Smith demonstrate that children spot objects more quickly when prompted by words than if they are only prompted by images.
Language, the study suggests, is transformative: more so than images, spoken language taps into children’s cognitive system, enhancing their ability to learn and to navigate cluttered environments. As such, the study, published last week in the journal Developmental Science, opens up new avenues for research into the way language might shape the course of developmental disabilities such as ADHD, as well as difficulties with school and other attention-related problems.
In the experiment, children played a series of “I spy” games, widely used to study attention and memory in adults. Asked to look for one image in a crowded scene on a computer screen, the children were shown a picture of the object they needed to find — a bed, for example, hidden in a group of couches.
"If the name of the target object was also said, the children were much faster at finding it and less distracted by the other objects in the scene," said Vales, a graduate student in the Department of Psychological and Brain Sciences.
"What we’ve shown is that in 3-year-old children, words activate memories that then rapidly deploy attention and lead children to find the relevant object in a cluttered array," said Smith, Chancellor’s Professor in the Department of Psychological and Brain Sciences. "Words call up an idea that is more robust than an image and to which we more rapidly respond. Words have a way of calling up what you know that filters the environment for you.”
The study, she said, “is the first clear demonstration of the impact of words on the way children navigate the visual world and is a first step toward understanding the way language influences visual attention, raising new testable hypotheses about the process.”
Vales said the use of language can change how people inspect the world around them.
"We also know that language will change the way people perform in a lot of different laboratory tasks," she said. "And if you have a child with ADHD who has a hard time focusing, one of the things parents are told to do is to use words to walk the child through what she needs to do. So there is this notion that words change cognition. The question is ‘how?’"
Vales said their research results “begin to tell us precisely how words help: the kinds of cognitive processes words tap into to change how children behave. For instance, the difference between search times with and without naming the target object indicates a key role for a kind of brief visual memory known as working memory, which helps us remember what we just saw as we look to something new. Words put ideas in working memory faster than images.”
For this reason, language may play an important role in a number of developmental disabilities.
"Limitations in working memory have been implicated in almost every developmental disability, especially those concerned with language, reading and negative outcomes in school," Smith said. "These results also suggest the culprit for these difficulties may be language in addition to working memory.
"This study changes the causal arrow a little bit. People have thought that children have difficulty with language because they don’t have enough working memory to learn language. This turns it around because it suggests that language may also make working memory more effective."
How does this matter to child development?
"Children learn in the real world, and the real world is a cluttered place," Smith said. "If you don’t know where to look, chances are you don’t learn anything. The words you know are a driving force behind attention. People have not thought about it as important or pervasive, but once children acquire language, it changes everything about their cognitive system."
"Our results suggest that language has huge effects, not just on talking, but on attention — which can determine how children learn, how much they learn and how well they learn," Vales said.

Filed under language child development neurodevelopmental disorders cognition working memory psychology neuroscience science

200 notes

Our Brains are Hardwired for Language
People blog, they don’t lbog, and they schmooze, not mshooze. But why is this? Why are human languages so constrained? Can such restrictions unveil the basis of the uniquely human capacity for language?
A groundbreaking study published in PLOS ONE by Prof. Iris Berent of Northeastern University and researchers at Harvard Medical School shows the brains of individual speakers are sensitive to language universals. Syllables that are frequent across languages are recognized more readily than infrequent syllables. Simply put, this study shows that language universals are hardwired in the human brain.
LANGUAGE UNIVERSALS
Language universals have been the subject of intense research, but their basis remains elusive. Indeed, the similarities between human languages could result from a host of reasons that are tangential to the language system itself. Syllables like lbog, for instance, might be rare due to sheer historical forces, or because they are just harder to hear and articulate. A more interesting possibility, however, is that these facts could stem from the biology of the language system. Could the unpopularity of lbogs result from universal linguistic principles that are active in every human brain?
THE EXPERIMENT
To address this question, Dr. Berent and her colleagues examined the response of human brains to distinct syllable types—either ones that are frequent across languages (e.g., blif, bnif), or infrequent (e.g., bdif, lbif). In the experiment, participants heard one auditory stimulus at a time (e.g., lbif), and were then asked to determine whether the stimulus includes one syllable or two while their brain was simultaneously imaged.
Results showed the syllables that were infrequent and ill-formed, as determined by their linguistic structure, were harder for people to process. Remarkably, a similar pattern emerged in participants’ brain responses: worse-formed syllables (e.g., lbif) exerted different demands on the brain than syllables that are well-formed (e.g., blif).
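The well-formedness ranking the study relies on (blif best, then bnif, then bdif, with lbif worst) tracks the sonority sequencing principle from linguistics: onsets whose sonority rises toward the vowel are preferred across languages. As a hypothetical sketch (not code or data from the study), a toy scoring function over a standard sonority scale reproduces that ranking; the names `SONORITY`, `CLASS`, and `onset_sonority_rise` are illustrative inventions:

```python
import math  # stdlib only; no external dependencies

# A standard (simplified) sonority scale: higher values are more vowel-like.
SONORITY = {"obstruent": 1, "nasal": 2, "liquid": 3, "glide": 4}
# Class membership for the consonants appearing in the study's stimuli.
CLASS = {"b": "obstruent", "d": "obstruent", "n": "nasal", "l": "liquid"}

def onset_sonority_rise(onset):
    """Sonority change across a two-consonant onset; larger = better-formed."""
    first, second = (SONORITY[CLASS[c]] for c in onset)
    return second - first

# Rank the four onset types from the experiment by sonority rise.
onsets = ["bl", "bn", "bd", "lb"]
ranked = sorted(onsets, key=onset_sonority_rise, reverse=True)
print(ranked)  # ['bl', 'bn', 'bd', 'lb'] — matches the hierarchy in the study
```

The point of the sketch is only that the hierarchy is a graded structural property of the onsets themselves, not a fact about any one language's vocabulary.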
UNIVERSALLY HARDWIRED BRAINS
The localization of these patterns in the brain further sheds light on their origin. If the difficulty in processing syllables like lbif were due solely to unfamiliarity, or to failures of acoustic processing and articulation, then such syllables would be expected to tax only the brain regions associated with memory for familiar words, audition, and motor control. If, instead, the dislike of lbif reflects its linguistic structure, then the syllable hierarchy should engage the brain’s traditional language areas.
While syllables like lbif did, in fact, tax auditory brain areas, they exerted no measurable costs with respect to either articulation or lexical processing. Instead, it was Broca’s area—a primary language center of the brain—that was sensitive to the syllable hierarchy.
These results show for the first time that the brains of individual speakers are sensitive to language universals: the brain responds differently to syllables that are frequent across languages (e.g., bnif) relative to syllables that are infrequent (e.g., lbif). This is a remarkable finding given that participants (English speakers) have never encountered most of those syllables before, and it shows that language universals are encoded in human brains.
The fact that the brain activity engaged Broca’s area—a traditional language area—suggests that this brain response might be due to a linguistic principle. This result opens up the possibility that human brains share common linguistic restrictions on the sound pattern of language.
FURTHER EVIDENCE
This proposal is further supported by a second study that recently appeared in the Proceedings of the National Academy of Science, also co-authored by Dr. Berent. This study shows that, like their adult counterparts, newborns are sensitive to the universal syllable hierarchy.
The findings from newborns are particularly striking because they have little to no experience with any such syllable. Together, these results demonstrate that the sound patterns of human language reflect shared linguistic constraints that are hardwired in the human brain already at birth.
Filed under language broca's area brain activity language universals linguistics psychology neuroscience science

123 notes

Neurons in the Brain Tune into Different Frequencies for Different Spatial Memory Tasks
Your brain transmits information about your current location and memories of past locations over the same neural pathways using different frequencies of a rhythmic electrical activity called gamma waves, report neuroscientists at The University of Texas at Austin.
The research, published in the journal Neuron on April 17, may provide insight into the cognitive and memory disruptions seen in diseases such as schizophrenia and Alzheimer’s, in which gamma waves are disturbed.
Previous research has shown that the same brain region is activated whether we’re storing memories of a new place or recalling past places we’ve been.
“Many of us leave our cars in a parking garage on a daily basis. Every morning, we create a memory of where we parked our car, which we retrieve in the evening when we pick it up,” said Laura Colgin, assistant professor of neuroscience and member of the Center for Learning and Memory in The University of Texas at Austin’s College of Natural Sciences. “How then do our brains distinguish between current location and the memory of a location? Our new findings suggest a mechanism for distinguishing these different representations.”
Memory involving location is stored in an area of the brain called the hippocampus. The neurons in the hippocampus that store spatial memories (such as the location where you parked your car) are called place cells. The same set of place cells are activated both when a new memory of a location is stored and, later, when the memory of that location is recalled or retrieved.
When the hippocampus forms a new spatial memory, it receives sensory information about your current location from a brain region called the entorhinal cortex. When the hippocampus recalls a past location, it retrieves the stored spatial memory from a subregion of the hippocampus called CA3.
The entorhinal cortex and CA3 transmit these different types of information using different frequencies of gamma waves. The entorhinal cortex uses fast gamma waves, which have a frequency of about 80 Hz (about the same frequency as a bass E note played on a piano). In contrast, CA3 sends its signals on slow gamma waves, which have a frequency of about 40 Hz.
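As an illustrative sketch (not the authors’ analysis pipeline), telling the two gamma bands apart comes down to measuring how much signal power sits near 40 Hz versus 80 Hz. The hypothetical `band_power` helper below does this with a single-bin discrete Fourier transform on synthetic stand-in signals:

```python
import math

def band_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz via a single-bin discrete Fourier transform."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / n

fs = 1000                          # sampling rate in Hz
t = [i / fs for i in range(fs)]    # one second of samples

slow = [math.sin(2 * math.pi * 40 * ti) for ti in t]  # "CA3-like" slow gamma
fast = [math.sin(2 * math.pi * 80 * ti) for ti in t]  # "entorhinal-like" fast gamma

# Each signal is dominated by power in its own gamma band.
print(band_power(slow, fs, 40) > band_power(slow, fs, 80))  # True
print(band_power(fast, fs, 80) > band_power(fast, fs, 40))  # True
```

In real recordings the equivalent step would be bandpass filtering of the local field potential around each gamma band, but the principle is the same: the same wire can carry two separable channels at different frequencies.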
Colgin and her colleagues hypothesized that fast gamma waves promote encoding of recent experiences, while slow gamma waves support memory retrieval.
They tested these hypotheses by recording gamma waves in the hippocampus, together with electrical signals from place cells, in rats navigating through a simple environment. They found that place cells represented the rat’s current location when cells were active on fast gamma waves. When cells were active on slow gamma waves, place cells represented locations in the direction that the rat was heading.
“These findings suggest that fast gamma waves promote current memory encoding, such as the memory of where we just parked,” said Colgin. “However, when we need to remember where we are going, like when finding our parked car later in the day, the hippocampus tunes into slow gamma waves.”
Because gamma waves are seen in many areas of the brain besides the hippocampus, Colgin’s findings may generalize beyond spatial memory. The ability of neurons to tune into different frequencies of gamma waves provides a way for the brain to traffic different types of information across the same neuronal circuits.
Colgin said one of the next steps in her team’s research will be to apply technologies that induce different types of gamma waves in rats performing memory tasks. She imagines that they will be able to improve new memory encoding by inducing fast gamma waves. Conversely, she expects that inducing slow gamma waves will be detrimental to the encoding of new memories. Those slow gamma waves should trigger old memories, which would interfere with new learning.
Filed under gamma waves entorhinal cortex hippocampus memory place cells neuroscience science