Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience 2012

Decoding Dreams

“[I was] somewhere, in a place like a studio to make a TV program or something,” a groggy study participant recounted (in Japanese). “A male person ran with short steps from the left side to the right side. Then, he tumbled.” The participant had recently been awoken by Masako Tamaki, a postdoc in the lab of neuroscientist Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan. He was lying in a functional magnetic resonance imaging (fMRI) scanner, doing his best to recall what he had been dreaming about. “He stumbled over something, and stood up while laughing, and said something,” the participant continued. “He said something to persons on the left side.”

At first blush, the story doesn’t seem particularly informative. But the study subject saw a man, not a woman. And he was inside some sort of workplace. That fragmented information is enough for Kamitani and his team, who recorded dream appearances of 20 key objects, such as “male” or “room,” and used a machine-learning algorithm to correlate those concepts with the fMRI images to find patterns that could be used to predict what people were dreaming about without having to wake them. Such information could help inform the study of why people dream, an elusive question in neurobiology, Kamitani says. “Knowing what is represented during sleep would help to understand the function of dreaming.”
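The decoding approach can be sketched in miniature. The code below is not the team's actual pipeline (they trained a more sophisticated machine-learning model on real fMRI data); it is a toy nearest-centroid classifier on synthetic "voxel" patterns, shown only to illustrate the general idea of predicting a concept's presence from a brain-activity vector:

```python
import random

random.seed(0)
N_VOXELS = 50  # toy stand-in for the number of fMRI voxels used as features

def make_pattern(present):
    # Synthetic data: a concept's presence shifts the mean of every voxel.
    return [random.gauss(0.5 if present else 0.0, 1.0) for _ in range(N_VOXELS)]

def centroid(patterns):
    # Average activity pattern across a set of recordings.
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Train": average the patterns recorded with and without the concept
# (e.g. "male") in the dream report.
c_present = centroid([make_pattern(True) for _ in range(40)])
c_absent = centroid([make_pattern(False) for _ in range(40)])

def decode(pattern):
    # Predict "present" if the pattern is closer to the present-centroid.
    return sq_dist(pattern, c_present) < sq_dist(pattern, c_absent)

# Evaluate on held-out synthetic patterns.
tests = [(make_pattern(True), True) for _ in range(50)] + \
        [(make_pattern(False), False) for _ in range(50)]
accuracy = sum(decode(p) == label for p, label in tests) / len(tests)
print(f"decoding accuracy on synthetic data: {accuracy:.0%}")
```

In the study, one such binary decoder per tracked concept, applied to a scan taken just before waking, yields a list of objects likely present in the dream.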

Analyzing more than 200 dream reports—some 30–45 hours of interviews with each of three participants—Kamitani and his colleagues built a “dream-trained decoder” based on fMRI imagery of the V1, V2, and V3 areas of the visual cortex. “We find some rule, or mapping, or pattern between what the person is seeing and what activity is happening in the brain,” Kamitani explains. And it worked, according to Kamitani, who presented the results at the Society for Neuroscience meeting in New Orleans in October 2012, predicting whether or not the 20 objects occurred in dreams with 75–80 percent accuracy.

But while Kamitani’s dream-decoding study is interesting, says neurobiologist David Kahn of Harvard Medical School, the algorithms used are quite primitive, only providing a handful of clues about the dream’s content. “We still have a long way to go before we can actually re-create the story that is the dream,” he says. “This is almost science fiction, because we’re way, way far from it … [but] this is an added tool.”

“Decoding is very primitive,” Kamitani agrees, “but I think there are a lot of potentials.” One way to get a more complete picture of the dream is to increase the complexity of the decoder, he notes. In this first study, for example, the researchers focused on nouns representing visual objects, but going forward, Kamitani says he hopes to include other concepts, like verbs. “By analyzing that aspect we may be able to add some action aspects in the dream.”

Furthermore, researchers might not have to fully interpret the dream themselves to benefit from the new decoder. Instead, the clues gleaned from the fMRI images could simply be used to jog participants’ memories. “We know that dreams—even the most vivid dreams we remember, [like] nightmares or lucid dreams—are really fragile memories,” says Antonio Zadra, an experimental psychologist at the University of Montreal. “Unless you wrote it down or told it to someone in the morning, usually even before lunch, that memory will start fading. And by night, you might just have the essence.”

Unfortunately, that failing memory was the only resource for researchers studying dreams. Now, with a little bit of supplemental information, they may be able to help participants recall dreams more precisely. “The subjective reports are never complete,” Kamitani says. “By giving the subject what we reconstructed, they may remember something more.”

At an even more basic level, the decoder could help scientists understand what’s happening in the brain during dreaming. “To create this whole virtual world out of nothing—with no visual input or auditory input—is quite fascinating and undoubtedly very complex,” Zadra says. “This research will certainly help us better understand what brain areas are doing what, to even allow for this to happen.”

In Kamitani’s study, for example, the researchers found that areas of higher-level visual processing, which respond to more abstract features, were more useful for interpreting dream content than lower-level processing areas. This makes sense, given that those lower areas of the visual cortex are more closely connected to the direct input from the retina. But, Kamitani notes, this could simply have to do with the way the study was designed. “We didn’t train the decoder with low-level visual features,” such as shape or contrast, he says. “We just used the semantic category information.”

Indeed, given the richness of the dreaming experience, such visual qualities may well be encoded during sleep. “Your brain creates a whole virtual world for you when you are dreaming, complete with characters, settings, interactions, dialogues,” says Zadra. “But you’re actually in your bed asleep; there is no visual input. So your brain is literally creating this virtual world from A to Z.”

Filed under Neuroscience 2012 dream-trained decoder dreaming neuroscience sleep brain science

The Top 5 Neuroscience Breakthroughs of 2012

More than any year before, 2012 was the year neuroscience exploded into pop culture. From mind-controlled robot hands to cyborg animals to TV specials to triumphant books, brain breakthroughs were tearing up the airwaves and the internets. From all the thrilling neurological adventures we covered over the past year, we’ve collected five stories we want to make absolutely sure you didn’t miss.

A Roadmap of Brain Wiring

Neuroscientists like to compare the task of unraveling the brain’s connections to the frustration of untangling the cords beneath your computer desk – except that in the brain, there are hundreds of millions of cords, and at least one hundred trillion plugs. Even with our most advanced computers, some researchers were despairing of ever seeing a complete connectivity map of the human brain in our lifetimes. But thanks to a team led by Van Wedeen at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, 2012 gave us an unexpectedly clear glimpse of our brains’ large-scale wiring patterns. As it turns out, the overall pattern isn’t so much a tangle as a fabric – an intricate, multi-layered grid of cross-hatched neural highways. What’s more, it looks like our brains share this grid pattern with many other species. We’re still a long way from decoding how most of this wiring functions, but this is a big step in the right direction.

Laser-Controlled Desire

Scientists have been stimulating rats’ pleasure centers since the 1950s – but 2012 saw the widespread adoption of a new brain-stimulation method that makes all those wires and incisions look positively crude. Researchers in the blossoming field of optogenetics develop delicate devices that control the firing of targeted groups of neurons – using only light itself. By hooking rats up to a tiny fiber-optic cable and firing lasers directly into their brains, a team led by Garret D. Stuber at the University of North Carolina at Chapel Hill School of Medicine was able to isolate specific neurochemical shifts that cause rats to feel pleasure or anxiety – and switch between them at will. This method isn’t only more precise than electrical stimulation – it’s also much less damaging to the animals.

Programmable Brain Cells

Pluripotent stem cell research took off like a rocket in 2012. After discovering that skin cells can be genetically reprogrammed into stem cells, which can in turn be reprogrammed into just about any cell in the human body, a team led by Sheng Ding at UCSF managed to engineer a working network of newborn neurons from a harvest of old skin cells. In other words, the team didn’t just convert skin cells into stem cells, then into neurons – they actually kept the batch of neurons alive and functional long enough to self-organize into a primitive neural network. In the near future, it’s likely that we’ll be treating many kinds of brain injuries by growing brand-new neurons from other kinds of cells in a patient’s own body. This is already close on the horizon for liver and heart cells – but the thought of being able to technologically shape the re-growth of a damaged brain is even more exciting.

Memories on Disc

We’ve talked a lot about how easily our brains can modify and rewrite our long-term memories of facts and scenarios. In 2012, though, researchers went Full Mad Scientist with the implications of this knowledge, and blew some mouse minds in the process. One team, led by Mark Mayford of the Scripps Research Institute, took advantage of some recently invented technology that enables scientists to record and store a mouse’s memory of a familiar place on a microchip. Mayford’s team figured out how to turn specific mouse memories on and off with the flick of a switch – but they were just getting warmed up. The researchers then proceeded to record a memory in one mouse’s brain, transfer it into another mouse’s nervous system, and activate it in conjunction with one of the second mouse’s own memories. The result was a bizarre “hybrid memory” – familiarity with a place the mouse had never visited. Well, not in the flesh, anyway.

Videos of Thoughts

Our most exciting neuroscience discovery of 2012 is also one of the most controversial. A team of researchers from the Gallant lab at UC Berkeley discovered a way to reconstruct videos of entire scenes from neural activity in a person’s visual cortex. Those on the cautionary side emphasize that activity in the visual cortex is fairly easy to decode (relatively speaking, of course) and that we’re still a long, long way from decoding videos of imaginary voyages or emotional palettes. In fact, from one perspective, this isn’t much different from converting one file format into another. On the other hand, though, these videos offer the first hints of the technological reality our children may inhabit: A world where the boundaries between the objective external world and our individual subjective experiences are gradually blurred and broken down. When it comes to transforming our relationship with our own consciousness – and those of the people around us – it doesn’t get much more profound than that.

Filed under brain breakthroughs neuroscience 2012 neuroscience science

Paralysis breakthrough: spinal cord damage repaired

I suddenly noticed I could move my pinkie. I was cruising towards the highway when this old guy tried to cross the 4-lane road really fast. He hit me and I ejected over to the opposite lane. Luckily someone found me before the traffic got to me.

Paralysis may no longer mean life in a wheelchair. A man who is paralysed from the trunk down has recovered the ability to stand and move his legs unaided thanks to training with an electrical implant.

Andrew Meas of Louisville, Kentucky, says it has changed his life. The stimulus provided by the implant is thought to have either strengthened persistent “silent” connections across his damaged spinal cord or even created new ones, allowing him to move even when the implant is switched off.

The results are potentially revolutionary, as they indicate that the spinal cord is able to recover its function years after becoming damaged.

Previous studies in animals with lower limb paralysis have shown that continuous electrical stimulation of the spinal cord below the area of damage allows an animal to stand and perform locomotion-like movements. That’s because the stimulation allows information about proprioception – the perception of body position and muscle effort – to be received from the lower limbs by the spinal cord. The spinal cord, in turn, allows lower limb muscles to react and support the body without any information being received from the brain (Journal of Neuroscience, doi.org/czq67d).

Last year, Susan Harkema and Claudia Angeli at the Frazier Rehab Institute and University of Louisville in Kentucky, together with colleagues, tested what had been learned from the animal studies in a man who was paralysed after being hit by a car in 2006. He was diagnosed with a “motor complete” spinal lesion in his neck, which means that no motor activity can be recorded below the lesion.


Filed under spinal cord spinal cord injury paralysis implants Neuroscience 2012 electrical stimulation neuroscience science

Overcoming memories that trigger cocaine relapse

Researchers identify brain mechanisms that regulate cocaine-seeking behavior

Researchers from the University of Wisconsin-Milwaukee (UWM) have identified mechanisms in the brain responsible for regulating cocaine-seeking behavior, providing an avenue for drug development that could greatly reduce the high relapse rate in cocaine addiction.

The research reveals that stimulation of certain brain receptors promotes inhibition of cocaine-associated memories, helping addicts to stop drug use. This inhibition is achieved through enhancing a process called “extinction learning,” in which cocaine-associated memories are replaced with associations that have no drug “reward.” This reduces drug-seeking behavior in rats.

The work was presented at the annual meeting of the Society for Neuroscience in New Orleans by Devin Mueller, UWM assistant professor of psychology, and doctoral student James Otis.

There are currently no FDA-approved medications to treat cocaine abuse, only treatments that address withdrawal symptoms, says Mueller. Abuse is maintained, in part, through exposure to environmental cues that trigger cocaine-related memories which lead to craving and relapse in recovering addicts. Currently, exposure therapy is used to help recovering addicts suppress their drug-seeking behavior, but with limited success. In exposure therapy, a patient is repeatedly exposed to stimuli that provoke craving. With repeated exposure, the patient experiences extinction, leading to reduced craving when presented with those stimuli.

If extinction could be strengthened, it would increase the effectiveness of exposure therapies in preventing relapse.

Isolating the receptor

The team found that a specific variant of the NMDA receptor, the kind containing the NR2B subunit, is critical for extinction learning. They also discovered that drugs known to enhance NR2B function strengthened extinction because they act specifically in a region of the brain that regulates learned behaviors. In their investigation, the researchers conditioned rats to associate one distinct chamber, but not another, with cocaine. Following conditioning, the rats were tested for a place preference by allowing drug-free access to both chambers. Rats demonstrating cocaine-seeking behavior spent significantly more time in the previously cocaine-associated chamber. Over several cocaine-free test sessions, addicted rats lost their place preference through extinction learning.
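The place-preference readout can be illustrated with a small sketch (the numbers below are hypothetical, not the study's data): preference is scored as the fraction of drug-free test time spent in the previously cocaine-paired chamber, and extinction appears as that score drifting back toward chance (0.5) across sessions:

```python
# Hypothetical session data: seconds spent in (cocaine-paired, neutral)
# chamber during successive drug-free tests of one conditioned rat.
sessions = [(620, 280), (540, 360), (470, 430), (455, 445)]

def preference(paired, neutral):
    # Fraction of test time in the previously cocaine-paired chamber;
    # 0.5 means no preference, i.e. extinction is complete.
    return paired / (paired + neutral)

scores = [preference(p, n) for p, n in sessions]
for day, s in enumerate(scores, start=1):
    print(f"extinction test {day}: preference = {s:.2f}")

# Extinction learning: preference declines toward chance over sessions.
assert all(a >= b for a, b in zip(scores, scores[1:]))
```

A drug that blocks extinction (such as ifenprodil in the study) would show up in this readout as scores that stay elevated instead of declining.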

To examine the neural mechanisms of extinction, the researchers administered ifenprodil, which blocks NR2B-containing NMDA receptors, immediately after an extinction test. Ifenprodil-treated rats continued to spend more time in the cocaine-associated chamber even in the absence of cocaine, while saline-treated rats did not. These results were also replicated through specific infusion of ifenprodil into the brain’s infralimbic cortex, localizing a key brain structure in arresting cocaine-seeking.

Other avenues

The results indicate that enhancing NR2B function would boost the effectiveness of extinction-based exposure therapies. Although there are currently no NR2B-enhancing drugs, the NR2B-containing receptor can be stimulated through other molecular pathways, says Mueller.

An example is the brain-derived neurotrophic factor (BDNF) signaling cascade, which is implicated in neuron survival and growth. The authors targeted this cascade by directly administering BDNF into the infralimbic cortex. In extinction tests, administration of BDNF caused rats to lose their preference for the cocaine-associated chamber faster than rats given a placebo.

Mueller and Otis took these findings even further toward possible therapeutic intervention for addicts.

One issue with giving BDNF to humans is that it is unable to reach the brain through the bloodstream. Therefore, the researchers next targeted the TrkB receptor, which is where BDNF normally binds. They did so with a newly synthesized drug that is able to reach the brain thanks to its small molecular size. This TrkB receptor agonist, known as 7,8-dihydroxyflavone, also strengthened extinction when given to rats during extinction training. The authors conclude that pairing TrkB receptor stimulation with exposure therapy could be an effective treatment for cocaine abuse, reducing craving and the potential for relapse.

(Source: eurekalert.org)

Filed under brain receptors NMDA cocaine addiction inhibition neuroscience Neuroscience 2012 science



The Power of Music: Mind Control by Rhythmic Sound

You walk into a bar and music is thumping. All heads are bobbing and feet tapping in synchrony. Somehow the rhythmic sound grabs control of the brains of everyone in the room, forcing them to operate simultaneously and perform the same behaviors in synchrony. How is this possible? Is this unconscious mind control by rhythmic sound only driving our bodily motions, or could it be affecting deeper mental processes?

The mystery runs deeper than previously thought, according to psychologist Annett Schirmer reporting new findings today at the Society for Neuroscience meeting in New Orleans. Rhythmic sound “not only coordinates the behavior of people in a group, it also coordinates their thinking—the mental processes of individuals in the group become synchronized.”

This finding extends the well-known power of music to tap into brain circuits controlling emotion and movement, to actually control the brain circuitry of sensory perception. This discovery helps explain how drums unite tribes in ceremony, why armies march to bugle and drum into battle, why worship and ceremonies are infused by song, why speech is rhythmic, punctuated by rhythms of emphasis on particular syllables and words, and perhaps why we dance.


Filed under brain brainwaves decision making emotion music neuroscience psychology Neuroscience 2012 science

How a Vision Prosthetic Could Bypass the Visual System

Electrical stimulation of the visual cortex may one day give image perception to blind people.

Work presented at the Society for Neuroscience meeting in New Orleans today suggests a way to create a completely new kind of visual prosthetic—one that restores vision by directly activating the brain.

In a poster session, researchers presented results showing how electrical stimulation of the visual cortex can evoke the sensation of simple flashes of light—including spatial information about those flashes.

While other researchers are trying to develop artificial retinas that feed visual signals into existing sensory pathways (see “A Retinal Prosthetic Powered by Light” and “Now I See You,” for instance), the team behind the new work, from the Baylor College of Medicine and the University of Texas Health Science Center in Houston, is exploring the possibility of bypassing those routes altogether. This could be vital for people whose retinas cannot respond to stimulation.

The researchers used electrodes to stimulate the brains of three patients who were already undergoing brain surgery to treat epilepsy. All three were able to detect bright spots of light, called phosphenes, when certain regions of their brains were stimulated. And, in seven out of eight trials, the patients were able to correctly report the orientation of a phosphene, which appeared in one of two orientations depending on the stimulation they received.
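As a rough sanity check of that result (our own back-of-the-envelope calculation, not one reported by the researchers): if a patient were purely guessing between the two possible orientations, getting 7 or more of 8 trials right would happen only about 3.5 percent of the time:

```python
from math import comb

n, p = 8, 0.5  # 8 two-alternative trials, 50% chance of guessing each one

def p_at_least(k):
    # Probability of k or more correct answers under pure guessing
    # (binomial tail; p ** n works because p = 0.5 exactly).
    return sum(comb(n, i) for i in range(k, n + 1)) * p ** n

chance = p_at_least(7)
print(f"P(>= 7 of 8 correct by chance) = {chance:.4f}")  # 9/256, about 0.0352
```

So the reported hit rate is unlikely to be guesswork, though with only eight trials the estimate is necessarily coarse.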

The work builds upon a study published by the same team in Nature Neuroscience this summer. In that study, the researchers defined which areas of the brain produce phosphene perception when patients’ brains were electrically stimulated.

A press release related to the earlier work says that the researchers “plan to conduct a larger patient study and create multiple flashes of light at the same time. Twenty-seven or so simultaneous flashes might allow participants to see the outline of a letter.”

Filed under blindness neuroscience prosthetics retina vision visual perception Neuroscience 2012 science



Scientists read dreams: Brain scans during sleep can decode visual content of dreams

A team of researchers led by Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, used functional neuroimaging to scan the brains of three people as they slept, simultaneously recording their brain waves using electroencephalography (EEG).

The researchers woke the participants whenever they detected the pattern of brain waves associated with sleep onset, asked them what they had just dreamed about, and then asked them to go back to sleep.

This was done in three-hour blocks, and repeated between seven and ten times, on different days, for each participant. During each block, participants were woken up ten times per hour. Each volunteer reported having visual dreams six or seven times every hour, giving the researchers a total of around 200 dream reports.


Filed under brain sleep dream neuroimaging Neuroscience 2012 neuroscience psychology science

Why crying babies are so hard to ignore: Study suggests the sound of a baby crying activates primitive parts of the brain involved in fight-or-flight responses

Ever wondered why it is so difficult to ignore the sound of a crying baby when you are trapped aboard a train or aeroplane? Scientists have found that our brains are hard-wired to respond strongly to the sound, making us more attentive and priming our bodies to help whenever we hear it – even if we’re not the baby’s parents.

"The sound of a baby cry captures your attention in a way that few other sounds in the environment generally do," said Katie Young of the University of Oxford, who led the study looking at how the brain processes a baby’s cries.

She scanned the brains of 28 people while they listened to the sound of babies and adults crying and sounds of animal distress including cats meowing and dogs whining.

Using a very fast scanning technique, called magnetoencephalography, Young found an early burst of activity in the brain in response to the sound of a baby cry, followed by an intense reaction after about 100 milliseconds. The reaction to other sounds was not as intense. “This was primarily in two regions of the brain,” said Young. “One is the middle temporal gyrus, an area previously implicated in emotional processing and speech; the other area is the orbitofrontal cortex, an area well-known for its role in reward and emotion processing.”

Young and her colleague, Christine Parsons, presented their findings this week at the annual meeting of the Society for Neuroscience in New Orleans.

Filed under brain Neuroscience 2012 magnetoencephalography brain activity crying baby sound neuroscience psychology science

63 notes

Research group finds blood transfusions from young mice to old improves brain function

A research team from Stanford University has found that injecting the blood of young mice into older mice can cause new neural development and improved memory. Team lead Saul Villeda presented the group’s findings at this year’s Society for Neuroscience conference.

The researchers were following up on earlier work, also led by Villeda, which found last year that young mice given transfusions of blood from older mice saw their mental faculties age more quickly than those of non-transfused young mice. In that paper, published in the journal Nature, the team also noted that the reverse appeared to hold: the older mice derived a degree of mental benefit from the transfusions.

In this new research, the team connected the bloodstreams of an older mouse and a younger mouse, allowing their blood to commingle. Subsequent brain scans found that the number of neural stem cells in the brains of the older mice increased by 20 percent after just a few days, a sign that new neural connections were being made, which is a prerequisite for improved memory retention.

To find out whether such differences could be measured behaviorally, the team gave older mice transfusions of blood plasma from young mice and then tested them in a standard water maze, a task that demands strong memory skills. The team found that the transfused mice performed as well as much younger mice, while a comparison group of older mice that did not receive transfusions was far less successful at solving the maze.

Villeda cautioned in his talk that his team’s findings do not mean older people should seek transfusions from younger donors to stave off dementia or Alzheimer’s disease, as it is not yet known whether the same results would hold in humans. The next step, he said, is for researchers to compare the blood of young and old mice more closely to discover which differences might account for the neural benefits seen in the older animals.

(Source: medicalxpress.com)

Filed under blood blood transfusions aging memory neural development Neuroscience 2012 neuroscience science

21 notes


UM Researchers Create Device to Help Stutterers

Drawing on one another’s expertise, a trio of University of Mississippi faculty members from different areas of campus has created a patent-pending device that could change the lives of people who stutter.

Paul Goggans, an electrical engineering professor, developed the prosthetic device, about the size of a cell phone, with Greg Snyder, associate professor of communications sciences and disorders, and Dwight Waddell, associate professor of health, exercise science and recreation management. The friends began working on the device after Snyder, himself a lifelong stutterer, demonstrated how he could speak much more fluently simply by feeling his throat while he and Waddell chatted over coffee.

“By feeling my throat vibrate when I speak, I get tactile speech feedback, which significantly reduces my stuttering,” Snyder said. “Dwight immediately understood my application of speech feedback and neural circuitry, and he then approached Paul, who agreed to make the device development a senior-level design project in his class.”

Since that time, the team has been focused on supporting and empowering the stuttering community by fighting social stigma and challenging the usual remedies associated with stuttering. “Our device is portable, battery-powered and easy to use,” said Goggans, professor of electrical engineering and lead partner in the instrument’s design and fabrication. “These are important attributes because other behavioral treatments for stuttering are more intense; they require too much concentration and are exhausting.”

A prototype of the device was presented Tuesday (Oct. 16) as a “Hot Topic” at the 2012 Society for Neuroscience conference in New Orleans. The paper is among 150 selected from thousands of submissions.

Filed under prosthetics stutterers tactile speech Neuroscience 2012 neuroscience science
