Posts tagged science

May 9, 2012
Researchers have developed a new technique which allows them to measure brain activity in large populations of nerve cells at the resolution of individual cells. The technique, reported today in the journal Nature, was developed in zebrafish, which offer a simplified model of how brain regions work together to flexibly control behaviour.
Our thoughts and actions are the product of large populations of nerve cells, called neurons, working in harmony, often millions at a time. Measuring brain activity at detailed resolution in these groups of cells during behaviour has proved extremely challenging. Currently, scientists are restricted to measuring activity in individual brain areas of, for example, moving rats, and typically in fewer than a few hundred neurons at a time.
Dr Misha Ahrens, a Sir Henry Wellcome Postdoctoral Fellow based at Harvard University and the University of Cambridge, worked with colleagues to develop a technique which allows neuroscientists to study as many as 2,000 neurons simultaneously, anywhere in the brain of a transparent zebrafish. Their work was funded by the Wellcome Trust and the National Institutes of Health.
Dr Ahrens and colleagues created a virtual environment for zebrafish, which allowed them to measure activity in the neurons as the fish ‘moved’. In reality, the zebrafish was paralysed so that the researchers could image its brain; the fish ‘moved’ through the virtual environment via the activity of its motor neuron axons, the cells responsible for generating movement.
Zebrafish are often used as a simple organism to study genetics and characteristics of the nervous system that are conserved in humans. Because zebrafish are genetically modifiable, Dr Ahrens and colleagues were able to create a fish in which every neuron contains a protein that fluoresces more brightly when the cell is active. The fish are also transparent, so the team could use a laser-scanning microscope to see activity in any neuron in the brain, up to 2,000 neurons simultaneously.
Dr Ahrens explains: “Our behaviour is determined by thousands, possibly millions, of nerve cells working in harmony. The zebrafish performs complex behaviors, with a brain of about 100,000 neurons, almost all of which are accessible to optical recording of neural activity. Our new technique will help us examine how large networks mediate behaviour, while at the same time telling us what each individual cell is doing.”
Using the technique, Dr Ahrens and colleagues asked the question: do zebrafish adapt their behaviour in response to changes in their environment? To do this, they manipulated the virtual environment to simulate the fish suddenly becoming more “muscular”. This served as a simplified version of what happens when the brain needs to adapt the way it drives behaviour, for example, when water temperature changes the efficacy of the muscles, or when the fish gets injured.
Dr Ahrens adds: “The paralyzed fish in the virtual world do indeed adapt their behaviour, by adjusting the amount of impulses the brain sends to the muscles. They also ‘remember’ this change for a while. Imaging the brain everywhere during this behaviour, we identified certain brain regions that were involved, most notably the cerebellum and related structures. This technique opens the possibility that eventually, the behaviour may be used to gain insights into human motor control and motor control deficits.
"Our own motor control is continuously recalibrating itself in a similar way to the fish’s to cope with ever changing conditions of our body and environment, such as when we injure a leg, or if we’re walking on a slippery floor or carrying a heavy bag. The zebrafish’s behaviour is an ultra-simplified version of this and we have been able to gain some insight into how its brain structures drive behaviour. This might someday help us understand how damage to certain brain regions in humans affects the way in which the brain integrates sensory information to control body movements."
Understanding the brain is one of the Wellcome Trust’s five strategic challenges.
Provided by Wellcome Trust
Source: medicalxpress.com
May 9, 2012
Research published in the May 10 issue of the journal Neuron describes a potential new therapeutic approach for improving memory and modifying disease progression in patients with amnestic mild cognitive impairment. The study finds that excess brain activity may be doing more harm than good in some conditions that cause mild cognitive decline and memory impairment.
Elevated activity in specific parts of the hippocampus, a brain region involved in memory, is often seen in disorders associated with an increased risk for Alzheimer’s disease. Amnestic mild cognitive impairment (aMCI), where memory is worse than would be expected for a person’s age, is one such disorder. “In the case of early aMCI, it has been suggested that the increased hippocampal activation may serve a beneficial function by recruiting additional neural resources to compensate for those that are lost,” explains senior study author, Dr. Michela Gallagher, from Johns Hopkins University. “However, animal studies have raised the alternative view that this excess activation may be contributing to memory impairment.”
Dr. Gallagher and colleagues tested how a reduction of hippocampal activity would affect human patients with aMCI. The researchers used a low dose of a drug clinically prescribed to treat epilepsy to reduce hippocampal activity in subjects with aMCI to levels similar to those of healthy, age-matched subjects in a control group. They found that treatment with the drug improved performance on a memory task. These findings point to the therapeutic potential of reducing excess hippocampal activation in aMCI.
The results also have broader significance as elevated activity in the hippocampus is also observed in other conditions that are thought to precede Alzheimer’s disease, and may be one of the underlying mechanisms of neurodegeneration. “Apart from a direct role in memory impairment, there is concern that elevated activity in vulnerable neural networks could be causing additional damage and, possibly, widespread disease-related degeneration that underlies cognitive decline and the conversion to Alzheimer’s disease,” concludes Dr. Gallagher. “Therefore, reducing the elevated activity in the hippocampus may help to restore memory and protect the brain.”
Provided by Cell Press
More information: Bakker et al., “Reduction of hippocampal hyperactivity improves cognition in amnestic mild cognitive impairment,” Neuron, DOI: 10.1016/j.neuron.2012.03.023
Source: medicalxpress.com
Released: 5/9/2012 11:20 AM EDT
Newswise — After completing the first study of its kind, researchers at McMaster University have discovered that very early musical training benefits children even before they can walk or talk.
They found that one-year-old babies who participate in interactive music classes with their parents smile more, communicate better and show earlier and more sophisticated brain responses to music.
The findings were published recently in the scientific journals Developmental Science and Annals of the New York Academy of Sciences.
“Many past studies of musical training have focused on older children,” says Laurel Trainor, director of the McMaster Institute for Music and the Mind. “Our results suggest that the infant brain might be particularly plastic with regard to musical exposure.”
Trainor, together with David Gerry, a music educator and graduate student, received an award from the Grammy Foundation in 2008 to study the effects of musical training in infancy. In the recent study, groups of babies and their parents spent six months participating in one of two types of weekly music instruction.
One music class involved interactive music-making and learning a small set of lullabies, nursery rhymes and songs with actions. Parents and infants worked together to learn to play percussion instruments, take turns and sing specific songs.
In the other music class, infants and parents played at various toy stations while recordings from the popular Baby Einstein series played in the background.
Before the classes began, all the babies had shown similar communication and social development and none had previously participated in other baby music classes.
“Babies who participated in the interactive music classes with their parents showed earlier sensitivity to the pitch structure in music,” says Trainor. “Specifically, they preferred to listen to a version of a piano piece that stayed in key, versus a version that included out-of-key notes. Infants who participated in the passive listening classes did not show the same preferences. Even their brains responded to music differently. Infants from the interactive music classes showed larger and/or earlier brain responses to musical tones.”
The non-musical differences between the two groups of babies were even more surprising, say researchers.
Babies from the interactive classes showed better early communication skills, like pointing at objects that are out of reach, or waving goodbye. Socially, these babies also smiled more, were easier to soothe, and showed less distress when things were unfamiliar or didn’t go their way.
While both class types included listening to music and all the infants heard a similar amount of music at home, a big difference between the classes was the interactive exposure to music.
“There are many ways that parents can connect with their babies,” says study coordinator Andrea Unrau. “The great thing about music is, everyone loves it and everyone can learn simple interactive musical games together.”
Released: 5/9/2012 11:00 AM EDT
Newswise — “Practice makes perfect,” the saying goes. Optimal performance, however, can require more than talent, effort, and repetition. Training the brain to reduce stress through neurofeedback can remove barriers and enhance one’s innate abilities.

An article in the journal Biofeedback presents the narrative of a young cellist who was able to realize the potential of his talent and eliminate debilitating migraine headaches. This case study is part of a special section in the Spring 2012 issue focusing on optimal functioning.
Enhancing people’s performance in business, performing and visual arts, academia, and sports can be realized through biofeedback and neurofeedback training. Tools of stress reduction, mental imagery training, psychology, and psycho-physiological technology are combined to help people reach their goals.
The author and practitioner in this case study has combined her work and study in the fields of theater, social work, and neurofeedback. In her practice, she coaches clients to achieve outstanding performances. For example, a singer can better understand and interpret a musical selection, allowing that singer to better convey the emotion of the music, resulting in a noticeably improved performance.
William, the young musician, sought relief from migraine headaches that were affecting him almost daily. His therapy, however, did not take the approach of treating the headaches, but of focusing on William as a person and as a performer. By improving his functionality, working through moments of obsessiveness, self-criticism, fear, and anxiety, the headaches could also be resolved.
William’s therapist conducted neurofeedback — using sensors to read his brainwaves, analyzing these with NeuroOptimal™ software, and then giving feedback to the brain through a visual display and sound. With this information, the brain can learn to self-correct. This technology assists in getting people past that moment when they obsess over whether they have given the correct answer or hit the right note.
NeuroOptimal feedback, guided imagery, and coaching about decisions regarding his music helped William move beyond the difficulties he encountered. During his senior recital at his college, he was able to give a relaxed, confident performance that was met with a standing ovation.
Full text of the article, “William’s Story: A Case Study in Optimal Performance,” Biofeedback, Volume 40, Issue 1, Spring 2012, is available at http://www.aapb-biofeedback.com/
Source: newswise
May 9, 2012
Disorders of consciousness such as coma or a vegetative state caused by severe brain injury are poorly understood and their diagnosis has relied mainly on patient responses and measures of brain activity. However, new functional and imaging-based diagnostic tests that measure communication and signaling between different brain regions may provide valuable information about the potential for consciousness in patients unable to communicate. These innovative approaches are described and compared in a Review article in the groundbreaking neuroscience journal Brain Connectivity.

Brain Connectivity is the journal of record for researchers and clinicians interested in all aspects of brain connectivity. Credit: ©2012 Mary Ann Liebert, Inc., publishers
Mélanie Boly and coauthors from University of Liège (Belgium), University of Milan (Italy), and University College London (UK) compare the benefits and limitations of three methods for studying the dynamics of brain communication and connectivity in response to internal and external stimulation: functional magnetic resonance imaging (fMRI); transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG); and response to neuronal perturbation, measured, for example, via sensory evoked potentials (ERPs). They report their findings and propose future research directions in the article “Brain Connectivity in Disorders of Consciousness.”
"In recent years, there has been a tremendous interest in gaining a better understanding of the various disorders of consciousness. A variety of methods including fMRI and PET have been used to study these disorders," says Bharat Biswal, PhD, Co-Editor-in-Chief of Brain Connectivity and Associate Professor, University of Medicine and Dentistry of New Jersey. “This article provides a comprehensive analysis using three new and innovative methods to study disorders of consciousness.”
More information: The article is available free on the Brain Connectivity website at http://online.liebertpub.com/doi/full/10.1089/brain.2011.0049
Provided by Mary Ann Liebert, Inc.
Source: medicalxpress.com
ScienceDaily (May 8, 2012) — Whether it’s a line from a movie, an advertising slogan or a politician’s catchphrase, some statements take hold in people’s minds better than others. But why?
Cornell researchers who applied computer analysis to a database of movie scripts think they may have found the secret of what makes a line memorable.
The study suggests that memorable lines use familiar sentence structure but incorporate distinctive words or phrases, and they make general statements that could apply elsewhere. The latter may explain why lines such as, “You’re gonna need a bigger boat” or “These aren’t the droids you’re looking for” (accompanied by a hand gesture) have become standing jokes. You can use them in a different context and apply the line to your own situation.
While the analysis was based on movie quotes, it could have applications in marketing, politics, entertainment and social media, the researchers said.
"Using movie scripts allowed us to study just the language, without other factors. We needed a way of asking a question just about the language, and the movies make a very nice dataset," said graduate student Cristian Danescu-Niculescu-Mizil, first author of a paper to be presented at the 50th Annual Meeting of the Association for Computational Linguistics July 8-14 in Jeju, South Korea.
The study grows out of ongoing work on how ideas travel across networks.
"We’ve been looking at things like who talks to whom," said Jon Kleinberg, a professor of computer science who worked on the study, "but we hadn’t explored how the language in which an idea was presented might have an effect."
To address that, they collaborated with Lillian Lee, a professor of computer science who specializes in computer processing of natural human language.
They obtained scripts from about 1,000 movies, and a database of memorable quotes from those movies from the Internet Movie Database. Each quote was paired with another from the movie’s script, spoken by the same character in the same scene and about the same length, to eliminate every factor except the language itself. Obi-Wan Kenobi, for example, also said, “You don’t need to see his identification,” but you don’t hear that a lot.
They asked a group of people who had not seen the movies to choose which quote in the pairs was most memorable. Two patterns emerged to identify the memorable choice: distinctiveness and generality.
Then the researchers programmed a computer with linguistic rules reflecting these concepts. A line will be less general if it contains third-person pronouns and definite articles (which refer to people, objects or events in the scene) and uses past tense (usually referring to something that happened previously in the story). Distinctive language can be identified by comparison with a database of news stories. The computer was able to choose the memorable quote an average of 64 percent of the time.
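As a rough, hypothetical illustration of the kind of linguistic rules described above (not the researchers' actual code), the "generality" of a quote can be approximated by counting cues that tie it to a specific scene; the word lists and the "-ed" past-tense heuristic here are simplifying assumptions:

```python
# Hypothetical sketch of the "generality" heuristic: a quote counts as
# less general the more third-person pronouns, definite articles, and
# past-tense verbs it contains. Word lists and cues are illustrative,
# not taken from the study.

THIRD_PERSON = {"he", "she", "him", "her", "his", "hers",
                "they", "them", "their"}
DEFINITE = {"the"}

def generality_penalty(quote: str) -> int:
    """Count features that bind a quote to its original scene."""
    tokens = quote.lower().replace("'", " ").split()
    penalty = 0
    for tok in tokens:
        word = tok.strip(".,!?\"")
        if word in THIRD_PERSON or word in DEFINITE:
            penalty += 1
        elif word.endswith("ed"):  # crude past-tense cue
            penalty += 1
    return penalty

# A general, present-tense line accrues fewer penalties than a
# scene-bound, past-tense one:
print(generality_penalty("You're gonna need a bigger boat"))
print(generality_penalty("He showed the guard his identification"))
```

The actual study combined a score like this with a distinctiveness measure computed against a news-text language model; this sketch shows only the counting idea.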
Later analysis also found subtle differences in sound and word choice: Memorable quotes use more sounds made in the front of the mouth, words with more syllables and fewer coordinating conjunctions.
In a further test, the researchers found that the same rules applied to popular advertising slogans.
Although teaching a computer how to write memorable dialogue is probably a long way off, applications might be developed to monitor the work of human writers and evaluate it in progress, Kleinberg suggested.
The researchers have set up a website where you can test your skill at identifying memorable movie quotes, and perhaps contribute some data to the research, at www.cs.cornell.edu/~cristian/memorability.html
Source: Science Daily
ScienceDaily (May 8, 2012) — Researchers at the University of Alabama at Birmingham hope to one day use fluorescent light bulbs to slow nearsightedness, which affects 40 percent of American adults and can cause blindness.
In an early step in that direction, results of a study found that small increases in daily artificial light slowed the development of nearsightedness by 40 percent in tree shrews, which are close relatives of primates.
The team, led by Thomas Norton, Ph.D., professor in the UAB Department of Vision Sciences, presented the study results May 8 at the 2012 Association for Research in Vision and Ophthalmology annual meeting in Ft. Lauderdale.
People can see clearly because the front part of the eye bends light and focuses it on the retina in back. Nearsightedness, also called myopia, occurs when the physical length of the eye is too long, causing light to focus in front of the retina and blurring images.
Myopia has many causes, some related to inheritance and some to the environment. Research in recent years had, for instance, suggested that children who spent more time outdoors, presumably in brighter outdoor light, had less myopia as young adults. That raised the question of whether artificial light, like sunlight, could help reduce myopia development, without the risks of prolonged sun exposure, such as skin cancer and cataracts.
"Our hope is to develop programs that reduce the rate of myopia using energy efficient, fluorescent lights for a few hours each day in homes or classrooms," said John Siegwart, Ph.D., research assistant professor in UAB Vision Sciences and co-author of the study. "Trying to prevent myopia by fixing defective genes through gene therapy or using a drug is a multi-year, multimillion-dollar effort with no guarantee of success. We hope to make a difference just with light bulbs."
Sorting through theories
Work over 25 years had shown that putting a goggle over one eye of a study animal, one that lets in light but blurs images, causes the eye to grow too long, which in turn causes myopia. Other past studies had shown that elevated light levels could reduce myopia under these conditions, whether the light was produced by halogen lamps, metal halide bulbs or daylight. The current study is the first to show that the development of myopia can be slowed by increasing daily fluorescent light levels.
One prevailing theory on myopia-related shape changes in the eye is that they are caused by the blurriness of images experienced while reading or doing other near-work chores. Another holds that some people develop myopia because they have low levels of vitamin D, which rises with exposure to sunlight and could explain the connection between outdoor light and reduced myopia. A third theory, one reinforced by the current results, is that bright light causes an increase in levels of dopamine, a signaling molecule in the retina.
To test the theories, the team used a goggle that lets in light but no images to produce myopia in one eye of each tree shrew. They found that a group exposed to elevated fluorescent light levels for eight hours per day developed 47 percent less myopia than a control group exposed to normal indoor lighting, even though the images were neither more nor less blurry. They also found that animals fed vitamin D supplements developed myopia just like ones without the supplement. Given these results, the team is now experimenting with light levels and treatment times to see if a short, bright light treatment could be effective. They have also begun studies looking at the effect of elevated light on retinal dopamine levels as it relates to the reduction of myopia.
"If we can find the best kind of light, treatment period and light level, we’ll have the scientific justification to begin studies raising light levels in schools, for instance," said Norton. "Compact fluorescent bulbs use much less electricity than standard light bulbs, and future programs raising light levels will have more impact the less expensive they are."
Source: Science Daily
ScienceDaily (May 8, 2012) — Can blindness or other forms of visual deprivation really enhance our other senses such as hearing or touch? While this theory is widely regarded as being true, there are still many questions about the science behind it.
New findings from a Canadian research team investigating this link suggest that not only is there a real connection between vision and other senses, but that connection is important to better understand the underlying mechanisms that can quickly trigger sensory changes. This may demystify the true potential of human adaptation and, ultimately, help develop innovative and effective methods for rehabilitation following sensory loss or injury.
François Champoux, director of the University of Montreal’s Laboratory of Auditory Neuroscience Research, will present his team’s research and findings at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Studies have shown, in terms of hearing, that blind people are better at localizing sound. One study even suggested that blindness might improve the ability to differentiate between sound frequencies. “The supposed enhanced tactile abilities have been studied to a greater degree and can be seen as early as days or even minutes following blindness,” says Champoux. “A comparably rapid change in auditory ability hasn’t yet been clearly demonstrated.”
Two big questions about blindness and enhanced abilities remain unanswered: Can blindness improve more complex auditory abilities and, if so, can these changes be triggered after only a few minutes of visual deprivation, similar to those seen with tactile abilities?
"When we speak or play a musical instrument, the sounds have specific harmonic relations. In other words, if we play a certain note on a piano, that note has many related ‘layers.’ However, we don’t hear all of these layers because our brain simply associates them all together and we only hear the lowest one," Champoux explains.
It’s through this complex computation based on specific components of the sound that the brain can interpret and distinguish auditory signals coming from different people or instruments. The ability to identify harmonicity — the harmonic relation between sounds — is one of the most powerful factors involved in interpreting our auditory surroundings.
"Harmonicity can easily be evaluated using a simple task in which similar harmonic layers are set up and one of them is gradually modified until the individual notices two layers instead of one," says Champoux. "In our study, healthy individuals completed such a task while blindfolded. This task was administered twice, separated by a 90-minute interval during which the participants conversed with the experimenter in a quiet room. Half of the participants kept the blindfold on during the interval period, depriving them of all visual input, while the other half removed their blindfolds."
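A minimal sketch of how such a harmonic stimulus might be synthesized, assuming simple sine-wave partials; the fundamental frequency, mistuning amount, duration, and sample rate below are illustrative assumptions, not the study's parameters:

```python
import math

def harmonic_complex(f0=220.0, n_harmonics=8, mistuned=None, shift=0.0,
                     dur=0.1, sr=8000):
    """Sum of sine-wave harmonics of f0, optionally mistuning one partial.

    mistuned: 1-based index of the harmonic to shift, or None.
    shift: fractional frequency shift applied to that harmonic
           (e.g. 0.04 shifts it 4% upward).
    """
    n_samples = int(dur * sr)
    out = []
    for n in range(n_samples):
        t = n / sr
        s = 0.0
        for k in range(1, n_harmonics + 1):
            f = k * f0
            if k == mistuned:
                f *= 1.0 + shift  # detune this one partial
            s += math.sin(2 * math.pi * f * t)
        out.append(s / n_harmonics)  # normalize amplitude
    return out

# An in-tune complex versus one with the 3rd harmonic shifted 4% up;
# in the task described above, the shift would grow gradually until
# the listener reports hearing two layers instead of one.
in_tune = harmonic_complex()
detuned = harmonic_complex(mistuned=3, shift=0.04)
```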
They found no significant differences between the two groups in their ability to differentiate harmonicity prior to visual deprivation. However, the results of the testing session following visual deprivation revealed that visually deprived individuals performed significantly better than the group that took their blindfolds off.
"Regardless of the neural basis for such an enhancement, our results suggest that the potential for change in auditory perception is much greater than previously assumed," Champoux notes.
Source: Science Daily
ScienceDaily (May 8, 2012) — Listening to amplified music for less than 1.5 hours produces measurable changes in hearing ability that may place listeners at risk of noise-induced hearing loss, new research shows. While further research is needed to firmly establish this risk, the investigation is significant because it provides the first acoustical data for a new method to assess the potential harm from a widespread cultural behavior: “leisure listening” to amplified music, whether in live environments or through headphones.
A team of Danish acoustics researchers present the results of their preliminary study at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics. Their goal is to help develop recommendations for how sound engineers, musicians, event organizers, and the general public should safely enjoy amplified music so they are protected from hearing loss — just as workers are now protected by occupational health standards.
Explains Rodrigo Ordonez, Ph.D., lead scientist of the Danish team from Aalborg University’s Department of Electronic Systems: “Modern low-distortion, high-power loudspeaker systems and headphones make it easy for people to be exposed to potentially harmful sound levels at discotheques, concerts, or while using portable music players.”
He adds that in the realm of industrial noise and work-related sound exposures, decades of experience and personal tragedy — many workers lost hearing from factory conditions — have produced the hearing-damage risk criteria currently used. Based on well-documented acoustical parameters, these criteria outline measurement procedures and expected impact on hearing.
"Yet when it comes to musical sound exposure — and in particular, amplified music — it is not known if the same measures used for industrial noise will accurately describe the effects on hearing and the risk these behaviors pose," Dr. Ordonez says.
To investigate the potential health risk from amplified music, the team measured sounds known as “otoacoustic emissions” as an index of auditory function. These are sounds generated within the inner ear in response to sound stimuli, and they can be measured in the ear canals of people who have healthy hearing. Research shows that otoacoustic emissions disappear when the inner ear is damaged. In this study, the researchers measured otoacoustic emissions to gauge changes in hearing ability before and after exposure to amplified music, testing this method in a live concert environment. Comparing how these two sets of measures change after a sound exposure with the acoustical parameters of the amplified music can lead to a better understanding of how our hearing is affected.
Results revealed two main findings: One is that it is possible to measure changes in hearing after exposures of relatively short duration, less than 1.5 hours. The second is that there are noticeable individual differences in sound exposure levels, as well as in the changes on otoacoustic emissions produced by similar exposure conditions.
Next steps in the team’s work include refining their measurement methods and describing the biophysical effects and mechanics that music sound levels have on individuals. Ultimately they hope to provide data and a scientific rationale on which to establish damage risk criteria for music sound exposure.
Source: Science Daily
ScienceDaily (May 8, 2012) — Although we have little awareness that we are doing it, we spend most of our lives filtering out many of the sounds that permeate our lives and acutely focusing on others — a phenomenon known as auditory selective attention. In research that could some day lead to the development of improved devices allowing users to control things like wheelchairs through thought alone, hearing scientists at the University of Washington (UW) are attempting to tease apart the process.
The work will be presented at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Auditory selective attention is extremely important in everyday life, notes UW postdoctoral researcher Ross Maddox. “In situations as mundane as ordering your morning cup of coffee, you must focus on the barista while tuning out the loud hiss of the espresso machine and the annoying cell phone conversation happening in line right behind you,” says Maddox. “However, the mechanisms behind selective attention are still not well understood.” In addition, some individuals suffer from Central Auditory Processing Disorder (CAPD), “which means they have normal hearing when tested by an audiologist,” he says, “but they are completely lost in loud settings like restaurants and airports.”
To determine how auditory selective attention works — and perhaps how it fails in people with CAPD — Maddox, along with Adrian K.C. Lee, an assistant professor of speech and hearing sciences, and colleague Willy Cheung, created laboratory situations that promoted the breakdown of the process. The researchers had 10 subjects try to focus their attention on just one target sound — a continuously repeating utterance of a single letter — among a total of 4, 6, 8, or 12 such sounds. The subjects had to determine when an “oddball” item (the letter “R,” chosen because it doesn’t rhyme with any other letter) was inserted into the target sound stream.
"Most studies systematically degrade sounds and measure the effects on listeners’ performance," Maddox explains. "Here, we made the target sound as easy to distinguish from all the other sounds present as possible, and tested the upper limit on the number of sounds a listener could tune out, given all these acoustical advantages."
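The stream design can be sketched as a toy simulation. The letters, stream counts, and random oddball placement below are illustrative assumptions; only the choice of "R" as the oddball (picked because it rhymes with no other letter) comes from the study:

```python
import random

def make_streams(n_streams=4, length=20, target_idx=0, oddball="R"):
    """Build n repeating single-letter streams and insert one oddball
    into the target stream.

    Letters are arbitrary stand-ins for the study's recorded
    utterances; "R" is reserved as the oddball since it rhymes with
    no other letter.
    """
    # Sample distinct stream letters, excluding the oddball letter.
    letters = random.sample("ABCDEFGHIJKLMNOPQ", n_streams)
    streams = [[ch] * length for ch in letters]
    # Place the oddball somewhere in the later half of the target stream.
    pos = random.randrange(length // 2, length)
    streams[target_idx][pos] = oddball
    return streams, pos

# One target stream among 6 competitors; the listener's task is to
# report when the "R" appears in the attended stream.
streams, pos = make_streams(n_streams=6)
```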
Unsurprisingly, it is harder to tune in to just one stream when the number of streams increases. However, study subjects did better than expected — successfully identifying the target 70 percent of the time in the most difficult conditions. Repeating letters faster did make the task harder — although with faster repetition, listeners more quickly learn what the letter they’re listening to sounds like, “so there is a tradeoff involved when deciding on repetition speed,” Maddox says.
The work, Maddox and colleagues say, is a first step toward developing an auditory brain-computer interface (BCI) — a device that reads brain activity to allow users to control computers or machines such as wheelchairs. “We hope to create a system that presents a user with an auditory ‘menu’ of sounds — similar to the letter streams here — and allows the listener to make a choice by reading their brainwaves to determine which sound they are focusing on. The more sound streams a user is able to tune out, the more menu options we can present at a single time.”
Source: Science Daily