Neuroscience

Articles and news from the latest research reports.

Posts tagged brain


Reduction of excess brain activity improves memory in amnestic mild cognitive impairment

May 9, 2012

Research published in the May 10 issue of the journal Neuron describes a potential new therapeutic approach for improving memory and modifying disease progression in patients with amnestic mild cognitive impairment. The study finds that excess brain activity may be doing more harm than good in some conditions that cause mild cognitive decline and memory impairment.

Elevated activity in specific parts of the hippocampus, a brain region involved in memory, is often seen in disorders associated with an increased risk for Alzheimer’s disease. Amnestic mild cognitive impairment (aMCI), where memory is worse than would be expected for a person’s age, is one such disorder. “In the case of early aMCI, it has been suggested that the increased hippocampal activation may serve a beneficial function by recruiting additional neural resources to compensate for those that are lost,” explains senior study author, Dr. Michela Gallagher, from Johns Hopkins University. “However, animal studies have raised the alternative view that this excess activation may be contributing to memory impairment.”

Dr. Gallagher and colleagues tested how reducing hippocampal activity would affect human patients with aMCI. Using a low dose of a drug clinically approved to treat epilepsy, they brought hippocampal activity in subjects with aMCI down to levels similar to those of healthy, age-matched control subjects. The researchers found that treatment with the drug improved performance on a memory task. These findings point to the therapeutic potential of reducing excess hippocampal activation in aMCI.

The results also have broader significance as elevated activity in the hippocampus is also observed in other conditions that are thought to precede Alzheimer’s disease, and may be one of the underlying mechanisms of neurodegeneration. “Apart from a direct role in memory impairment, there is concern that elevated activity in vulnerable neural networks could be causing additional damage and, possibly, widespread disease-related degeneration that underlies cognitive decline and the conversion to Alzheimer’s disease,” concludes Dr. Gallagher. “Therefore, reducing the elevated activity in the hippocampus may help to restore memory and protect the brain.”

Provided by Cell Press

More information: Bakker et al., “Reduction of hippocampal hyperactivity improves cognition in amnestic mild cognitive impairment.” Neuron, DOI: 10.1016/j.neuron.2012.03.023

Source: medicalxpress.com

Filed under science neuroscience brain psychology memory


Babies’ Brains Benefit From Music Lessons

Released: 5/9/2012 11:20 AM EDT

Newswise — After completing the first study of its kind, researchers at McMaster University have discovered that very early musical training benefits children even before they can walk or talk.

They found that one-year-old babies who participate in interactive music classes with their parents smile more, communicate better and show earlier and more sophisticated brain responses to music.

The findings were published recently in the scientific journals Developmental Science and Annals of the New York Academy of Sciences.

“Many past studies of musical training have focused on older children,” says Laurel Trainor, director of the McMaster Institute for Music and the Mind. “Our results suggest that the infant brain might be particularly plastic with regard to musical exposure.”

Trainor, together with David Gerry, a music educator and graduate student, received an award from the Grammy Foundation in 2008 to study the effects of musical training in infancy. In the recent study, groups of babies and their parents spent six months participating in one of two types of weekly music instruction.

One music class involved interactive music-making and learning a small set of lullabies, nursery rhymes and songs with actions. Parents and infants worked together to learn to play percussion instruments, take turns and sing specific songs.

In the other music class, infants and parents played at various toy stations while recordings from the popular Baby Einstein series played in the background.

Before the classes began, all the babies had shown similar communication and social development and none had previously participated in other baby music classes.

“Babies who participated in the interactive music classes with their parents showed earlier sensitivity to the pitch structure in music,” says Trainor. “Specifically, they preferred to listen to a version of a piano piece that stayed in key, versus a version that included out-of-key notes. Infants who participated in the passive listening classes did not show the same preferences. Even their brains responded to music differently. Infants from the interactive music classes showed larger and/or earlier brain responses to musical tones.”

The non-musical differences between the two groups of babies were even more surprising, say researchers.

Babies from the interactive classes showed better early communication skills, like pointing at objects that are out of reach, or waving goodbye. Socially, these babies also smiled more, were easier to soothe, and showed less distress when things were unfamiliar or didn’t go their way.

While both class types included listening to music and all the infants heard a similar amount of music at home, a big difference between the classes was the interactive exposure to music.

“There are many ways that parents can connect with their babies,” says study coordinator Andrea Unrau. “The great thing about music is, everyone loves it and everyone can learn simple interactive musical games together.”

Source: newswise

Filed under science brain neuroscience psychology


Cellist Achieves Optimal Performance Through Neurofeedback

Released: 5/9/2012 11:00 AM EDT 

Newswise — “Practice makes perfect,” the saying goes. Optimal performance, however, can require more than talent, effort, and repetition. Training the brain to reduce stress through neurofeedback can remove barriers and enhance one’s innate abilities.

An article in the journal Biofeedback presents the narrative of a young cellist who was able to realize the potential of his talent and eliminate debilitating migraine headaches. This case study is part of a special section in the Spring 2012 issue focusing on optimal functioning.

Performance in business, the performing and visual arts, academia, and sports can be enhanced through biofeedback and neurofeedback training, which combines tools of stress reduction, mental imagery training, psychology, and psychophysiological technology to help people reach their goals.

The author and practitioner in this case study has combined her work and study in the fields of theater, social work, and neurofeedback. In her practice, she coaches clients to achieve outstanding performances. For example, a singer can better understand and interpret a musical selection, allowing that singer to better convey the emotion of the music, resulting in a noticeably improved performance.

William, the young musician, sought relief from migraine headaches that were affecting him almost daily. His therapy, however, did not target the headaches directly; instead, it focused on William as a person and as a performer, on the premise that improving his overall functioning and working through moments of obsessiveness, self-criticism, fear, and anxiety would also resolve the headaches.

William’s therapist conducted neurofeedback — using sensors to read his brainwaves, analyzing these with NeuroOptimal™ software, and then giving feedback to the brain through a visual display and sound. With this information, the brain can learn to self-correct. This technology assists in getting people past that moment when they obsess over whether they have given the correct answer or hit the right note.

NeuroOptimal feedback, guided imagery, and coaching about decisions regarding his music helped William move beyond the difficulties he encountered. During his senior recital at his college, he was able to give a relaxed, confident performance that was met with a standing ovation.

Full text of the article, “William’s Story: A Case Study in Optimal Performance,” Biofeedback, Volume 40, Issue 1, Spring 2012, is available at http://www.aapb-biofeedback.com/

Source: newswise

Filed under science neuroscience brain psychology


Can new diagnostic approaches help assess brain function in unconscious, brain-injured patients?

May 9, 2012

Disorders of consciousness such as coma or a vegetative state caused by severe brain injury are poorly understood and their diagnosis has relied mainly on patient responses and measures of brain activity. However, new functional and imaging-based diagnostic tests that measure communication and signaling between different brain regions may provide valuable information about the potential for consciousness in patients unable to communicate. These innovative approaches are described and compared in a Review article in the groundbreaking neuroscience journal Brain Connectivity.

Brain Connectivity is the journal of record for researchers and clinicians interested in all aspects of brain connectivity. (Image credit: ©2012 Mary Ann Liebert, Inc., publishers)

Mélanie Boly and coauthors from the University of Liège (Belgium), the University of Milan (Italy), and University College London (UK) compare the benefits and limitations of three methods for studying the dynamics of brain communication and connectivity in response to internal and external stimulation: functional magnetic resonance imaging (fMRI); transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG); and responses to neuronal perturbation, measured, for example, as event-related potentials (ERPs). They report their findings and propose future research directions in the article “Brain Connectivity in Disorders of Consciousness.”

"In recent years, there has been a tremendous interest in gaining a better understanding of the various disorders of consciousness. A variety of methods including fMRI and PET have been used to study these disorders," says Bharat Biswal, PhD, Co-Editor-in-Chief of Brain Connectivity and Associate Professor, University of Medicine and Dentistry of New Jersey. “This article provides a comprehensive analysis using three new and innovative methods to study disorders of consciousness.”

More information: The article is available free on the Brain Connectivity website at http://online.liebertpub.com/doi/full/10.1089/brain.2011.0049

Provided by Mary Ann Liebert, Inc.

Source: medicalxpress.com

Filed under science neuroscience brain psychology consciousness


Computer Scientists Show What Makes Movie Lines Memorable

ScienceDaily (May 8, 2012) — Whether it’s a line from a movie, an advertising slogan or a politician’s catchphrase, some statements take hold in people’s minds better than others. But why?

Cornell researchers who applied computer analysis to a database of movie scripts think they may have found the secret of what makes a line memorable.

The study suggests that memorable lines use familiar sentence structure but incorporate distinctive words or phrases, and they make general statements that could apply elsewhere. The latter may explain why lines such as, “You’re gonna need a bigger boat” or “These aren’t the droids you’re looking for” (accompanied by a hand gesture) have become standing jokes. You can use them in a different context and apply the line to your own situation.

While the analysis was based on movie quotes, it could have applications in marketing, politics, entertainment and social media, the researchers said.

"Using movie scripts allowed us to study just the language, without other factors. We needed a way of asking a question just about the language, and the movies make a very nice dataset," said graduate student Cristian Danescu-Niculescu-Mizil, first author of a paper to be presented at the 50th Annual Meeting of the Association for Computational Linguistics July 8-14 in Jeju, South Korea.

The study grows out of ongoing work on how ideas travel across networks.

"We’ve been looking at things like who talks to whom," said Jon Kleinberg, a professor of computer science who worked on the study, "but we hadn’t explored how the language in which an idea was presented might have an effect."

To address that, they collaborated with Lillian Lee, a professor of computer science who specializes in computer processing of natural human language.

They obtained scripts from about 1,000 movies, and a database of memorable quotes from those movies from the Internet Movie Database. Each quote was paired with another from the movie’s script, spoken by the same character in the same scene and about the same length, to eliminate every factor except the language itself. Obi-Wan Kenobi, for example, also said, “You don’t need to see his identification,” but you don’t hear that a lot.

They asked a group of people who had not seen the movies to choose which quote in the pairs was most memorable. Two patterns emerged to identify the memorable choice: distinctiveness and generality.

Then the researchers programmed a computer with linguistic rules reflecting these concepts. A line will be less general if it contains third-person pronouns and definite articles (which refer to people, objects or events in the scene) and uses past tense (usually referring to something that happened previously in the story). Distinctive language can be identified by comparison with a database of news stories. The computer was able to choose the memorable quote an average of 64 percent of the time.
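As a toy illustration of this kind of rule-based scoring, the pairwise choice could be sketched as below. The word lists, the crude past-tense regex, and the tie-breaking rule are hypothetical simplifications for illustration only, not the Cornell group's actual features or weights.

```python
import re

# Scene-specific cues loosely inspired by the "generality" idea described above.
# These lists are illustrative, not the study's feature set.
THIRD_PERSON = {"he", "she", "him", "her", "his", "hers", "they", "them", "their"}
DEFINITE = {"the", "this", "that", "these", "those"}
# Crude past-tense heuristic; it overmatches words like "need" that merely end in "ed".
PAST_TENSE = re.compile(r"\b\w+ed\b")

def generality_penalty(quote: str) -> int:
    """Count cues that tie a line to its scene (more cues = less general)."""
    words = re.findall(r"[a-z']+", quote.lower())
    penalty = sum(w in THIRD_PERSON for w in words)
    penalty += sum(w in DEFINITE for w in words)
    penalty += len(PAST_TENSE.findall(quote.lower()))
    return penalty

def pick_memorable(quote_a: str, quote_b: str) -> str:
    """Pick the quote with fewer scene-specific cues, i.e. the more general one."""
    if generality_penalty(quote_a) <= generality_penalty(quote_b):
        return quote_a
    return quote_b

print(pick_memorable("You're gonna need a bigger boat",
                     "You don't need to see his identification"))
# prints "You're gonna need a bigger boat" ("his" ties the other line to its scene)
```

A real system would also need the distinctiveness half of the story, which the study operationalized by comparison against a news-text corpus.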

Later analysis also found subtle differences in sound and word choice: Memorable quotes use more sounds made in the front of the mouth, words with more syllables and fewer coordinating conjunctions.

In a further test, the researchers found that the same rules applied to popular advertising slogans.

Although teaching a computer how to write memorable dialogue is probably a long way off, applications might be developed to monitor the work of human writers and evaluate it in progress, Kleinberg suggested.

The researchers have set up a website where you can test your skill at identifying memorable movie quotes, and perhaps contribute some data to the research, at www.cs.cornell.edu/~cristian/memorability.html

Source: Science Daily

Filed under science neuroscience memory psychology brain


‘Blindness’ May Rapidly Enhance Other Senses

ScienceDaily (May 8, 2012) — Can blindness or other forms of visual deprivation really enhance our other senses such as hearing or touch? While this theory is widely regarded as being true, there are still many questions about the science behind it.

New findings from a Canadian research team investigating this link suggest that not only is there a real connection between vision and other senses, but that connection is important to better understand the underlying mechanisms that can quickly trigger sensory changes. This may demystify the true potential of human adaptation and, ultimately, help develop innovative and effective methods for rehabilitation following sensory loss or injury.

François Champoux, director of the University of Montreal’s Laboratory of Auditory Neuroscience Research, will present his team’s research and findings at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.

Studies have shown, in terms of hearing, that blind people are better at localizing sound. One study even suggested that blindness might improve the ability to differentiate between sound frequencies. “The supposed enhanced tactile abilities have been studied to a greater degree and can be seen as early as days or even minutes following blindness,” says Champoux. “This rapid change in auditory ability hasn’t yet been clearly demonstrated.”

Two big questions about blindness and enhanced abilities remain unanswered: Can blindness improve more complex auditory abilities and, if so, can these changes be triggered after only a few minutes of visual deprivation, similar to those seen with tactile abilities?

"When we speak or play a musical instrument, the sounds have specific harmonic relations. In other words, if we play a certain note on a piano, that note has many related ‘layers.’ However, we don’t hear all of these layers because our brain simply associates them all together and we only hear the lowest one," Champoux explains.

It’s through this complex computation based on specific components of the sound that the brain can interpret and distinguish auditory signals coming from different people or instruments. The ability to identify harmonicity — the harmonic relation between sounds — is one of the most powerful factors involved in interpreting our auditory surroundings.

"Harmonicity can easily be evaluated using a simple task in which similar harmonic layers are set up and one of them is gradually modified until the individual notices two layers instead of one," says Champoux. "In our study, healthy individuals completed such a task while blindfolded. This task was administered twice, separated by a 90-minute interval during which the participants conversed with the experimenter in a quiet room. Half of the participants kept the blindfold on during the interval period, depriving them of all visual input, while the other half removed their blindfolds."

They found no significant differences between the two groups in their ability to differentiate harmonicity prior to visual deprivation. However, the results of the testing session following visual deprivation revealed that visually deprived individuals performed significantly better than the group that took their blindfolds off.

"Regardless of the neural basis for such an enhancement, our results suggest that the potential for change in auditory perception is much greater than previously assumed," Champoux notes.

Source: Science Daily

Filed under science neuroscience brain psychology


The Risk of Listening to Amplified Music

ScienceDaily (May 8, 2012) — Listening to amplified music for less than 1.5 hours produces measurable changes in hearing ability that may place listeners at risk of noise-induced hearing loss, new research shows. While further research is needed to firmly establish this risk, the investigation is significant because it provides the first acoustical data for a new method to assess the potential harm from a widespread cultural behavior: “leisure listening” to amplified music, whether in live environments or through headphones.

A team of Danish acoustics researchers present the results of their preliminary study at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics. Their goal is to help develop recommendations for how sound engineers, musicians, event organizers, and the general public should safely enjoy amplified music so they are protected from hearing loss — just as workers are now protected by occupational health standards.

Explains Rodrigo Ordonez, Ph.D., lead scientist of the Danish team from Aalborg University’s Department of Electronic Systems: “Modern low-distortion, high-power loudspeaker systems and headphones make it easy for people to be exposed to potentially harmful sound levels at discotheques, concerts, or while using portable music players.”

He adds that in the realm of industrial noise and work-related sound exposures, decades of experience and personal tragedy — many workers lost hearing from factory conditions — have produced the hearing-damage risk criteria currently used. Based on well-documented acoustical parameters, these criteria outline measurement procedures and expected impact on hearing.

"Yet when it comes to musical sound exposure — and in particular, amplified music — it is not known if the same measures used for industrial noise will accurately describe the effects on hearing and the risk these behaviors pose," Dr. Ordonez says.

To investigate the potential health risk from amplified music, the team measured sounds known as “otoacoustic emissions” as an index of auditory function. These are sounds generated within the inner ear in response to sound stimuli, and they can be measured in the ear canals of people who have healthy hearing. Research shows that otoacoustic emissions disappear when the inner ear is damaged. In this study, the researchers measured otoacoustic emissions to gauge changes in hearing ability before and after exposure to amplified music, testing this method in a live concert environment. Comparing how these before-and-after measurements change with the acoustical parameters of the amplified music can lead to a better understanding of how our hearing is affected.

Results revealed two main findings: One is that it is possible to measure changes in hearing after exposures of relatively short duration, less than 1.5 hours. The second is that there are noticeable individual differences in sound exposure levels, as well as in the changes on otoacoustic emissions produced by similar exposure conditions.

Next steps in the team’s work include refining their measurement methods and describing the biophysical effects and mechanics that music sound levels have on individuals. Ultimately they hope to provide data and a scientific rationale on which to establish damage risk criteria for music sound exposure.

Source: Science Daily

Filed under science neuroscience brain psychology


Scientists Tuning in to How You Tune out Noise

ScienceDaily (May 8, 2012) — Although we have little awareness that we are doing it, we spend most of our lives filtering out many of the sounds that permeate our lives and acutely focusing on others — a phenomenon known as auditory selective attention. In research that could some day lead to the development of improved devices allowing users to control things like wheelchairs through thought alone, hearing scientists at the University of Washington (UW) are attempting to tease apart the process.

The work will be presented at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.

Auditory selective attention is extremely important in everyday life, notes UW postdoctoral researcher Ross Maddox. “In situations as mundane as ordering your morning cup of coffee, you must focus on the barista while tuning out the loud hiss of the espresso machine and the annoying cell phone conversation happening in line right behind you,” says Maddox. “However, the mechanisms behind selective attention are still not well understood.” In addition, some individuals suffer from Central Auditory Processing Disorder (CAPD), “which means they have normal hearing when tested by an audiologist,” he says, “but they are completely lost in loud settings like restaurants and airports.”

To determine how auditory selective attention works — and perhaps how it fails in people with CAPD — Maddox, along with Adrian K.C. Lee, an assistant professor of speech and hearing sciences, and colleague Willy Cheung, created laboratory situations that promoted the breakdown of the process. The researchers had 10 subjects try to focus their attention on just one target sound — a continuously repeating utterance of a single letter — among a total of 4, 6, 8, or 12 such sounds. The subjects had to determine when an “oddball” item (the letter “R,” chosen because it doesn’t rhyme with any other letter) was inserted into the target sound stream.

"Most studies systematically degrade sounds and measure the effects on listeners’ performance," Maddox explains. "Here, we made the target sound as easy to distinguish from all the other sounds present as possible, and tested the upper limit on the number of sounds a listener could tune out, given all these acoustical advantages."

Unsurprisingly, it is harder to tune in to just one stream when the number of streams increases. However, study subjects did better than expected — successfully identifying the target 70 percent of the time in the most difficult conditions. Repeating letters faster did make the task harder — although with faster repetition, listeners more quickly learn what the letter they’re listening to sounds like, “so there is a tradeoff involved when deciding on repetition speed,” Maddox says.

The work, Maddox and colleagues say, is a first step toward developing an auditory brain-computer interface (BCI) — a device that reads brain activity to allow users to control computers or machines such as wheelchairs. “We hope to create a system that presents a user with an auditory ‘menu’ of sounds — similar to the letter streams here — and allows the listener to make a choice by reading their brainwaves to determine which sound they are focusing on. The more sound streams a user is able to tune out, the more menu options we can present at a single time.”

Source: Science Daily

Filed under science neuroscience psychology brain


Gestures Fulfill a Big Role in Language

ScienceDaily (May 8, 2012) — People of all ages and cultures gesture while speaking, some much more noticeably than others. But is gesturing uniquely tied to speech, or is it, rather, processed by the brain like any other manual action?


A U.S.-Netherlands research collaboration delving into this tie discovered that actual actions on objects, such as physically stirring a spoon in a cup, have less of an impact on the brain’s understanding of speech than simply gesturing as if stirring a spoon in a cup. This is surprising because there is less visual information contained in gestures than in actual actions on objects. In short: Less may actually be more when it comes to gestures and actions in terms of understanding language.

Spencer Kelly, associate professor of Psychology, director of the Neuroscience program, and co-director of the Center for Language and Brain at Colgate University, and colleagues from the National Institutes of Health and Max Planck Institute for Psycholinguistics will present their research at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.

Among their key findings is that gestures — more than actions — appear to make people pay attention to the acoustics of speech. When we see a gesture, our auditory system expects to also hear speech. But this is not what the researchers found in the case of manual actions on objects.

Just think of all the actions you’ve seen today that occurred in the absence of speech. “This special relationship is interesting because many scientists have argued that spoken language evolved from a gestural communication system — using the entire body — in our evolutionary past,” points out Kelly. “Our results provide a glimpse into this past relationship by showing that gestures still have a tight and perhaps special coupling with speech in present-day communication. In this way, gestures are not merely add-ons to language — they may actually be a fundamental part of it.”

A better understanding of the role hand gestures play in how people understand language could lead to new audio and visual instruction techniques to help people overcome major challenges with language delays and disorders or learning a second language.

What’s next for the researchers? “We’re interested in how other types of visual inputs, such as eye gaze, mouth movements, and facial expressions, combine with hand gestures to impact speech processing. This will allow us to develop even more natural and effective ways to help people understand and learn language,” says Kelly.

Source: Science Daily

Filed under science neuroscience psychology brain


Psychologists reveal how emotion can shut down high-level mental processes without our knowledge

May 8, 2012

Psychologists at Bangor University believe that they have glimpsed, for the first time, a process that takes place deep within our unconscious brain, where primal reactions interact with higher mental processes. Writing in the Journal of Neuroscience, they identify a reaction to negative language inputs that shuts down unconscious processing.

For the last quarter of a century, psychologists have been aware of, and fascinated by, the fact that our brain can process high-level information such as meaning outside consciousness. What the psychologists at Bangor University have discovered is the reverse: that our brain can unconsciously ‘decide’ to withhold information by preventing access to certain forms of knowledge.

The psychologists extrapolate this from their most recent findings working with bilingual people. Building on their previous discovery that bilinguals subconsciously access their first language when reading in their second language, the psychologists at the School of Psychology and Centre for Research on Bilingualism have now made the surprising discovery that our brain shuts down that same unconscious access to the native language when faced with a negative word such as “war,” “discomfort,” “inconvenience,” or “unfortunate.”

They believe that this provides the first demonstrated insight into a hitherto unproven process by which our unconscious mind blocks information from our conscious mind or higher mental processes.

This finding breaks new ground in our understanding of the interaction between emotion and thought in the brain. Previous work on emotion and cognition has already shown that emotion affects basic brain functions such as attention, memory, vision and motor control, but never at such a high processing level as language and understanding.

Key to this is the understanding that people have a greater reaction to emotional words and phrases in their first language, which is why people speak to their infants and children in their first language even when living in a country that speaks another language and despite fluency in the second. It has been recognised for some time that anger, swearing, or discussing intimate feelings has more power in a speaker’s native language. In other words, emotional information lacks the same power in a second language as in a native language.

Dr Yan Jing Wu of the University’s School of Psychology said: “We devised this experiment to unravel the unconscious interactions between the processing of emotional content and access to the native language system. We think we’ve identified, for the first time, the mechanism by which emotion controls fundamental thought processes outside consciousness.

"Perhaps this is a process that resembles the mental repression mechanism that people have theorised about but never previously located."

So why would the brain block access to the native language at an unconscious level?

Professor Guillaume Thierry explains: “We think this is a protective mechanism. We know that in trauma for example, people behave very differently. Surface conscious processes are modulated by a deeper emotional system in the brain. Perhaps this brain mechanism spontaneously minimises negative impact of disturbing emotional content on our thinking, to prevent causing anxiety or mental discomfort.”

He continues: “We were extremely surprised by our finding. We were expecting to find modulation between the different words, and perhaps a heightened reaction to the emotional word, but what we found was the exact opposite of what we expected: a cancellation of the response to the negative words.”

The psychologists made this discovery by asking English-speaking Chinese people whether word pairs were related in meaning. Some of the word pairs were related in their Chinese translations. Although not consciously acknowledging a relation, measurements of electrical activity in the brain revealed that the bilingual participants were unconsciously translating the words. However, uncannily, this activity was not observed when the English words had a negative meaning.
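The hidden-repetition logic behind such word pairs can be sketched as follows. The translations are standard dictionary forms and the pairs are illustrative examples of the design, not the study's actual stimulus list.

```python
# English words mapped to Chinese translations (standard dictionary forms).
# Pairs like "ham"/"train" share a character (火) in Chinese even though the
# English words are unrelated; these examples are illustrative only.
translations = {
    "ham": "火腿",
    "train": "火车",
    "apple": "苹果",
    "doctor": "医生",
}

def hidden_overlap(word_a: str, word_b: str) -> bool:
    """True if the Chinese translations of two English words share a character."""
    return bool(set(translations[word_a]) & set(translations[word_b]))

print(hidden_overlap("ham", "train"))     # translations share the character 火
print(hidden_overlap("apple", "doctor"))  # no shared character
```

Participants judge only the English meanings, so any brain response tied to the hidden character overlap is evidence of unconscious translation, and the absence of that response for negative words is the cancellation effect the authors report.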

Provided by Bangor University

Source: medicalxpress.com

Filed under science neuroscience brain psychology
