Neuroscience

Articles and news from the latest research reports.

Scientists use lasers and carbon nanotubes to look inside living brains
Some of the most damaging brain diseases can be traced to irregular blood delivery in the brain. Now, Stanford chemists have employed lasers and carbon nanotubes to capture an unprecedented look at blood flowing through a living brain.
The technique was developed for mice but could one day be applied to humans, potentially providing vital information in the study of stroke and migraines, and perhaps even Alzheimer’s and Parkinson’s diseases. The work is described in the journal Nature Photonics.
Current procedures for exploring the brain in living animals face significant tradeoffs. Surgically removing part of the skull offers a clear view of activity at the cellular level, but the trauma can alter the function or activity of the brain or even stimulate an immune response. Meanwhile, non-invasive techniques such as CT scans or MRI visualize function best at the whole-organ level; they cannot resolve individual vessels or groups of neurons.
The first step of the new technique, called near infrared-IIa imaging, or NIR-IIa, calls for injecting water-soluble carbon nanotubes into a live mouse’s bloodstream. The researchers then shine a near-infrared laser over the rodent’s skull.
The light causes the specially designed nanotubes to fluoresce at wavelengths of 1,300-1,400 nanometers; this range represents a sweet spot for optimal penetration with very little light scattering. The fluorescing nanotubes can then be detected to visualize the blood vessels’ structure.
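The detection step comes down to keeping only emission in that sweet-spot band. A minimal sketch of that idea (this is an illustration, not the authors' actual detector pipeline; the wavelengths below are invented):

```python
# Hypothetical sketch: keep only detector events whose emission wavelength
# falls inside the NIR-IIa window of 1,300-1,400 nm described in the article.
NIR_IIA_WINDOW = (1300.0, 1400.0)  # nanometers

def in_nir_iia(wavelength_nm: float) -> bool:
    """Return True if an emission wavelength lies in the NIR-IIa band."""
    low, high = NIR_IIA_WINDOW
    return low <= wavelength_nm <= high

# Made-up emission wavelengths picked up by a detector (nm).
detected = [980.0, 1310.5, 1355.2, 1399.9, 1425.0]
nir_iia_events = [w for w in detected if in_nir_iia(w)]
print(nir_iia_events)  # only the 1,300-1,400 nm events survive the filter
```

Light outside the window (here 980 nm and 1,425 nm) is discarded, which is what lets the nanotube signal stand out against scattered background.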
Amazingly, the technique allows scientists to view about three millimeters underneath the scalp and is fine enough to visualize blood coursing through single capillaries only a few microns across, said senior author Hongjie Dai, a professor of chemistry at Stanford. Furthermore, it does not appear to have any adverse effect on innate brain functions.
"The NIR-IIa light can pass through intact scalp skin and skull and penetrate millimeters into the brain, allowing us to see vasculature in an almost non-invasive way," said first author Guosong Hong, who conducted the research as a graduate student in Dai’s lab and is now a postdoctoral fellow at Harvard. "All we have to remove is some hair."
The technique could eventually be used in human clinical trials, Hong said, but will need to be tweaked. First, the light penetration depth needs to be increased to pass deep into the human brain. Second, injecting carbon nanotubes needs approval for clinical application; the scientists are currently investigating alternative fluorescent agents.
For now, though, the technique provides a new tool for studying human cerebral-vascular diseases, such as stroke and migraines, in animal models. Other research has shown that Alzheimer’s and Parkinson’s diseases might elicit – or be caused in part by – changes in blood flow to certain parts of the brain, Hong said, and NIR-IIa imaging might offer a means of better understanding the role of healthy vasculature in those diseases.
"We could also label different neuron types in the brain with bio-markers and use this to monitor how each neuron performs," Hong said. "Eventually, we might be able to use NIR-IIa to learn how each neuron functions inside of the brain."

Older adults have morning brains! Study shows noticeable differences in brain function across the day

Older adults who are tested at their optimal time of day (the morning) not only perform better on demanding cognitive tasks but also activate the same brain networks responsible for paying attention and suppressing distraction as younger adults, according to Canadian researchers.

The study, published online July 7th in the journal Psychology and Aging (ahead of print publication), has yielded some of the strongest evidence yet that there are noticeable differences in brain function across the day for older adults.

“Time of day really does matter when testing older adults. This age group is more focused and better able to ignore distraction in the morning than in the afternoon,” said lead author John Anderson, a PhD candidate with the Rotman Research Institute at Baycrest Health Sciences and University of Toronto, Department of Psychology.

“Their improved cognitive performance in the morning correlated with greater activation of the brain’s attentional control regions – the rostral prefrontal and superior parietal cortex – similar to that of younger adults.” 

Asked how his team’s findings may be useful to older adults in their daily activities, Anderson recommended that older adults try to schedule their most mentally challenging tasks for the morning. Those tasks could include doing taxes, taking a test (such as a driver’s license renewal), seeing a doctor about a new condition, or cooking an unfamiliar recipe.

In the study, 16 younger adults (aged 19 – 30) and 16 older adults (aged 60 – 82) participated in a series of memory tests during the afternoon, from 1 – 5 p.m. The tests involved studying and recalling a series of picture and word combinations flashed on a computer screen. Irrelevant words linked to certain pictures and irrelevant pictures linked to certain words also flashed on the screen as distractions. During the testing, participants’ brains were scanned with fMRI, which allows researchers to detect with great precision which areas of the brain are activated.

Older adults were 10 percent more likely to pay attention to the distracting information than younger adults, who were able to successfully focus and block this information. The fMRI data confirmed that older adults showed substantially less engagement of the attentional control areas of the brain compared to younger adults. Indeed, older adults tested in the afternoon were “idling” – showing activations in the default mode network (a set of regions that come online primarily when a person is resting or thinking about nothing in particular) – indicating that perhaps they were having great difficulty focusing. When a person is fully engaged with focusing, resting-state activations are suppressed.

When 18 older adults were tested in the morning (8:30 a.m. – 10:30 a.m.), they performed noticeably better, according to two separate behavioural measures of inhibitory control. They attended to fewer distracting items than their peers tested at off-peak times of day, closing the age difference gap in performance with younger adults. Importantly, older adults tested in the morning activated the same brain areas young adults did to successfully ignore the distracting information. This suggests that when older adults are tested is important for both how they perform and what brain activity one should expect to see.

“Our research is consistent with previous science reports showing that at a time of day that matches circadian arousal patterns, older adults are able to resist distraction,” said Dr. Lynn Hasher, senior author on the paper and a leading authority in attention and inhibitory functioning in younger and older adults.

The Baycrest findings offer a cautionary flag to those who study cognitive function in older adults. “Since older adults tend to be morning-type people, ignoring time of day when testing them on some tasks may create an inaccurate picture of age differences in brain function,” said Dr. Hasher, senior scientist at Baycrest’s Rotman Research Institute and Professor of Psychology at University of Toronto.

(Source: baycrest.org)

Patients with autism spectrum disorder are not sensitive to ‘being imitated’

A Japanese research group led by Prof Norihiro Sadato, a professor of the National Institute for Physiological Sciences (NIPS), National Institutes of Natural Sciences (NINS), has found that people with autism spectrum disorders (ASD) have decreased activity in a brain area critical for recognizing when one’s own movements are being imitated by others. These results will be published in Neuroscience Research.

The research group of Norihiro Sadato, a professor of NIPS, Hirotaka Kosaka, a specially appointed associate professor of the University of Fukui, and Toshio Munesue, a professor of Kanazawa University, measured brain activity with functional magnetic resonance imaging (fMRI) while subjects’ movements were imitated by others. The group studied brain activity when a subject saw his or her finger movement imitated, or not imitated, by others. Typical subjects show increased activity in the extrastriate body area (EBA) when they are imitated compared to when they are not. The EBA is a region in the visual cortex that responds powerfully during the perception of human body parts. This increase in EBA activity was not observed in subjects with ASD, indicating that the EBA does not function properly in people with ASD when they are being imitated.

Persons with ASD are known to have difficulty in interpersonal communication and have trouble noticing that their movement was imitated. Behavioral intervention research to alleviate ASD is proceeding and indicates that training utilizing imitation is useful. The result of the above research not only provided clues to ASD, but also can be used in the evaluation of behavioral intervention to alleviate the disorder.

(Source: eurekalert.org)

Our brains judge a face’s trustworthiness - Even when we can’t see it
Our brains are able to judge the trustworthiness of a face even when we cannot consciously see it, a team of scientists has found. Their findings, which appear in the Journal of Neuroscience, shed new light on how we form snap judgments of others.
“Our findings suggest that the brain automatically responds to a face’s trustworthiness before it is even consciously perceived,” explains Jonathan Freeman, an assistant professor in New York University’s Department of Psychology and the study’s senior author.
“The results are consistent with an extensive body of research suggesting that we form spontaneous judgments of other people that can be largely outside awareness,” adds Freeman, who conducted the study as a faculty member at Dartmouth College.
The study’s other authors included Ryan Stolier, an NYU doctoral candidate, Zachary Ingbretsen, a research scientist who previously worked with Freeman and is now at Harvard University, and Eric Hehman, a post-doctoral researcher at NYU.
The researchers focused on the workings of the brain’s amygdala, a structure that is important for humans’ social and emotional behavior. Previous studies have shown this structure to be active in judging the trustworthiness of faces. However, it had not been known if the amygdala is capable of responding to a complex social signal like a face’s trustworthiness without that signal reaching perceptual awareness.
To gauge this part of the brain’s role in making such assessments, the study’s authors conducted a pair of experiments in which they monitored the activity of subjects’ amygdala while the subjects were exposed to a series of facial images.
These images included both standardized photographs of actual strangers’ faces and artificially generated faces whose trustworthiness cues could be manipulated while all other facial cues were controlled. The artificially generated faces were computer-synthesized based on previous research showing that cues such as higher inner eyebrows and pronounced cheekbones are seen as trustworthy, while lower inner eyebrows and shallower cheekbones are seen as untrustworthy.
Prior to the start of these experiments, a separate group of subjects examined all the real and computer-generated faces and rated how trustworthy or untrustworthy they appeared. As previous studies have shown, subjects strongly agreed on the level of trustworthiness conveyed by each given face.
In the experiments, a new set of subjects viewed these same faces inside a brain scanner, but were exposed to the faces very briefly—for only a matter of milliseconds. This rapid exposure, together with another feature known as “backward masking,” prevented subjects from consciously seeing the faces. Backward masking works by presenting subjects with an irrelevant “mask” image that immediately follows an extremely brief exposure to a face, which is thought to terminate the brain’s ability to further process the face and prevent it from reaching awareness. In the first experiment, the researchers examined amygdala activity in response to three levels of a face’s trustworthiness: low, medium, and high. In the second experiment, they assessed amygdala activity in response to a fully continuous spectrum of trustworthiness.
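The backward-masking procedure described above can be sketched as a toy trial timeline. The durations below are invented for illustration; the article says only that faces appeared for "a matter of milliseconds":

```python
# Illustrative sketch of one backward-masked trial: a very brief face
# immediately followed by an irrelevant mask image, which is thought to
# cut off further processing of the face before it reaches awareness.
# All durations are hypothetical, not taken from the study.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    duration_ms: int

def masked_trial(face_ms: int = 33, mask_ms: int = 167) -> list:
    """Build the event sequence for one backward-masked trial."""
    return [Event("face", face_ms), Event("mask", mask_ms)]

trial = masked_trial()
print([e.name for e in trial])                 # the mask always follows the face
print(sum(e.duration_ms for e in trial))       # total trial duration in ms
```

The key design point is ordering: because the mask arrives immediately after the brief face, subjects report seeing only the mask, yet the amygdala still registers the face's trustworthiness cues.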
Across the two experiments, the researchers found that specific regions inside the amygdala exhibited activity tracking how untrustworthy a face appeared, and other regions inside the amygdala exhibited activity tracking the overall strength of the trustworthiness signal (whether untrustworthy or trustworthy)—even though subjects could not consciously see any of the faces.
“These findings provide evidence that the amygdala’s processing of social cues in the absence of awareness may be more extensive than previously understood,” observes Freeman. “The amygdala is able to assess how trustworthy another person’s face appears without it being consciously perceived.”

Prenatal Alcohol Exposure Alters Development of Brain Function
In the first study of its kind, Prapti Gautam, PhD, and colleagues from The Saban Research Institute of Children’s Hospital Los Angeles found that children with fetal alcohol spectrum disorders (FASD) showed weaker brain activation during specific cognitive tasks than their unaffected counterparts. These novel findings suggest a possible neural mechanism for the persistent attention problems seen in individuals with FASD. The results of this study will be published in Cerebral Cortex on August 4.
“Functional magnetic resonance imaging (fMRI) has been used to observe brain activity during mental tasks in children with FASD, but we are the first to utilize these techniques to look at brain activation over time,” says Gautam. “We wanted to see if the differences in brain activation between children with FASD and their healthy peers were static, or if they changed as children got older.”
FASD encompasses the broad spectrum of symptoms that are linked to in utero alcohol exposure, including cognitive impairment, deficits in intelligence and attention and central nervous system abnormalities. These symptoms can lead to attention problems and higher societal and economic burdens common in individuals with FASD.
During childhood and adolescence, brain function, working memory and attention performance all rapidly improve, suggesting that this is a crucial time for developing brain networks. To study how prenatal alcohol exposure may alter this development, researchers observed a group of unaffected children and a group of children with FASD over two years. They used fMRI to observe brain activation during mental tasks such as visuo-spatial attention—how we visually perceive the spatial relationships among objects in our environment—and working memory.
“We found that there were significant differences in the development of brain activation over time between the two groups, even though they did not differ in task performance,” notes Elizabeth Sowell, PhD, director of the Developmental Cognitive Neuroimaging Laboratory at The Saban Research Institute and senior author on the manuscript. “While the healthy control group showed an increase in signal intensity over time, the children with FASD showed a decrease in brain activation during visuo-spatial attention, especially in the frontal, temporal and parietal brain regions.”
These results demonstrate that prenatal alcohol exposure can change how brain signaling develops during childhood and adolescence, long after the damaging effects of alcohol exposure in utero. The atypical development of brain activation observed in children with FASD could explain the persistent problems in cognitive and behavioral function seen in this population as they mature.

New Mapping Approach Lets Scientists Zoom In And Out As The Brain Processes Sound
Researchers at Johns Hopkins have mapped the sound-processing part of the mouse brain in a way that keeps both the proverbial forest and the trees in view. Their imaging technique allows zooming in and out on views of brain activity within mice, and it enabled the team to watch brain cells light up as mice “called” to each other. The results, which represent a step toward better understanding how our own brains process language, appear online July 31 in the journal Neuron.
In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound. They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. The neurons seemed to be laid out in neatly organized bands, each responding to a different tone.

More recently, a technique called two-photon microscopy has allowed researchers to focus in on minute slices of the live mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the well-manicured arrangement of bands might be an illusion. But, says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine, “You could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain.” Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.
To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium. Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds. Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds and saw which neurons lit up in response, piecing together a global map of a given mouse’s auditory cortex. “With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.
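The glowing-green readout rests on a simple relationship: more calcium means more fluorescence means more neural activity. A common way to turn such a raw fluorescence trace into an activity signal is the ΔF/F normalization, sketched below with made-up numbers (the study's actual analysis pipeline is not described in this article):

```python
# Hedged sketch of the standard delta-F-over-F normalization often used in
# calcium imaging: express each fluorescence sample F as a fractional change
# from a baseline F0, so traces from dim and bright cells become comparable.
def delta_f_over_f(trace, baseline_frames=3):
    """Compute (F - F0) / F0, where F0 is the mean of the first
    `baseline_frames` samples (a simple, common baseline choice)."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Hypothetical raw fluorescence from one neuron (arbitrary units):
# quiet baseline, a burst of activity, then partial decay.
raw = [100.0, 100.0, 100.0, 150.0, 120.0]
print(delta_f_over_f(raw))  # baseline frames map to 0.0; the burst stands out
```

With a per-cell baseline like this, "which neurons lit up" becomes a quantitative comparison of ΔF/F peaks rather than raw brightness, which varies with indicator expression from cell to cell.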
With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other. “Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”
Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.

New Mapping Approach Lets Scientists Zoom In And Out As The Brain Processes Sound

Researchers at Johns Hopkins have mapped the sound-processing part of the mouse brain in a way that keeps both the proverbial forest and the trees in view. Their imaging technique allows zooming in and out on views of brain activity within mice, and it enabled the team to watch brain cells light up as mice “called” to each other. The results, which represent a step toward better understanding how our own brains process language, appear online July 31 the journal Neuron.

In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound. They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. The neurons seemed to be laid out in neatly organized bands, each responding to a different tone. More recently, a technique called two-photon microscopy has allowed researchers to focus in on minute slices of the live mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the well-manicured arrangement of bands might be an illusion. But, says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine, “You could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain.” Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.

To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium. Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds. Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds and saw which neurons lit up in response, piecing together a global map of a given mouse’s auditory cortex. “With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.

With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other. “Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”

Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.

Filed under sound processing brain activity auditory cortex hearing neuroscience science

187 notes

New research links anxiety to epilepsy-like seizures

New research by clinical psychologists from Arizona State University and the United Kingdom has revealed that seizures which could be mistaken for epilepsy are linked to feelings of anxiety.

The team of researchers devised a new set of tests to determine whether there was a link between how people interpret and respond to anxiety, and incidences of psychogenic nonepileptic seizures (PNES).

Nicole Roberts, an associate professor in ASU’s New College of Interdisciplinary Arts and Sciences, collaborated with colleagues from the University of Lincoln, University of Nottingham and University of Sheffield in the United Kingdom. The team’s findings were published in the journal Epilepsy and Behavior.

The researchers used a series of questionnaires and computer tests to determine if a patient regularly avoids situations which might bring on anxiety.

These tests correctly predicted whether a patient had epilepsy or PNES – seizures that can be brought on by threatening situations, sensations, emotions, thoughts or memories – in 83 percent of study participants. Such seizures appear on the surface to be similar to epileptic fits, which are caused by abnormal brain activity.
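The article does not describe the statistical model behind that 83 percent figure, but the idea of a score-based test predicting diagnosis can be illustrated with a toy cutoff classifier over hypothetical avoidance scores; the names, scores, and cutoff here are all invented.

```python
# Sketch: how a score-based test can "correctly predict" diagnosis.
# Scores and the cutoff are hypothetical; the study's actual model is not shown.

def classify(avoidance_score, cutoff=10.0):
    """Label a participant PNES if their avoidance score exceeds the cutoff."""
    return "PNES" if avoidance_score > cutoff else "epilepsy"

def accuracy(scores, labels, cutoff=10.0):
    """Fraction of participants whose predicted label matches their diagnosis."""
    hits = sum(classify(s, cutoff) == l for s, l in zip(scores, labels))
    return hits / len(labels)

# Hypothetical avoidance scores for six participants and their diagnoses.
scores = [14.2, 11.5, 9.1, 8.2, 7.4, 8.9]
labels = ["PNES", "PNES", "PNES", "epilepsy", "epilepsy", "epilepsy"]
print(round(accuracy(scores, labels), 2))  # → 0.83 (5 of 6 correct)
```

In the study, of course, the predictors were the full battery of questionnaires and computer tests rather than a single score.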

“This research underscores the fact that PNES is a ‘real’ and disabling disorder with a potentially identifiable pathophysiology,” said Roberts, who directs New College’s Emotion, Culture, and Psychophysiology Laboratory, located on ASU’s West campus. “We need to continue to search for answers, not just in epilepsy clinics, but also in the realm of affective science and complex brain-behavior relationships.”

“PNES can be a very disabling condition, and it is important that we understand the triggers so that we provide the correct care and treatment,” said Lian Dimaro, a clinical psychologist based at Nottinghamshire Healthcare NHS Trust, who served as lead researcher for the study.

“This study was one of the first to bring modern psychological tools of investigation to this problem,” Dimaro said. “The findings support the idea that increasing a person’s tolerance of unpleasant emotions and reducing avoidant behavior may help with treatment, suggesting that patients could benefit from a range of therapies, including acceptance and commitment therapy to help reduce the frequency of seizures, although more research is needed in this area.”

Participants completed questionnaires to determine the level to which they suffered from anxiety, their awareness of their experiences and if they would avoid situations which would make them feel anxious.

They then completed a computer task which required rapid responses to true or false statements. This test was designed to gather data on immediate, or implicit, beliefs about anxiety. Participants also answered questions about common physical complaints that may have no medical explanation, also called somatic symptoms. These can include things like gastrointestinal problems, tiredness and back pain.

Results showed that those with PNES reported significantly more somatic symptoms than others in the study, as well as avoidance of situations which might make them anxious. The group with PNES also scored significantly higher on a measure of how aware they were of their anxiety compared with the control group.

The test subjects were 30 adults with PNES, 25 with epilepsy and 31 with no reported history of seizures who served as a nonclinical control group.

The researchers suggest that including tests to determine levels of anxiety and avoidance behavior may enable health professionals to make earlier diagnoses and develop more effective intervention plans.

“Epileptic seizures are caused by abnormal electrical activity in the brain, while most PNES are thought to be a consequence of complex psychological processes that manifest in physical attacks,” said David Dawson, a research clinical psychologist from the University of Lincoln.

“It is believed that people suffering with PNES may have difficulty actively engaging with anxiety – a coping style known as experiential avoidance,” Dawson said. “We wanted to examine whether it was possible to make a clear link between seizure frequency and how people experience and manage anxiety. Our study is another step in understanding PNES, which could ultimately lead to better treatment and, therefore, patient outcomes in the future.”

Roberts, who received her doctorate in clinical psychology from the University of California, Berkeley, focuses her research on the study of emotion and on the cultural and biological forces that shape emotional responses. Examples include investigating how ethnicity and culture influence emotional displays and experiences; how the daily hassles of life, such as job stress and sleep deprivation, impact emotion regulation among individuals and couples; and how the emotion system breaks down in patients with psychopathology (such as PNES and post-traumatic stress disorder) or neurological dysfunction (such as epilepsy).

(Source: asunews.asu.edu)

Filed under anxiety psychogenic nonepileptic seizures seizures brain activity epilepsy neuroscience science

55 notes

Brain Response to Appetizing Food Cues Varies Among Obese People

People who have the most common genetic mutation linked to obesity respond differently to pictures of appetizing foods than overweight or obese people who do not have the genetic mutation, according to a new study published in the Endocrine Society’s Journal of Clinical Endocrinology & Metabolism (JCEM).


More than one-third of adults are obese. Obesity typically results from a combination of eating too much, getting too little physical activity and genetics. In particular, consumption of appetizing foods that are high in calories can lead to weight gain. Highly palatable foods such as chocolate trigger signals in the brain that give a feeling of pleasure and reward. These cravings can contribute to overeating. Reward signals are processed in specific areas of the brain, where sets of neurons release chemicals such as dopamine. However, very little is known about whether the reward centers of the brain work differently in some people who are overweight or obese.

The most common genetic cause of obesity involves mutations in the melanocortin 4 receptor (MC4R), which occur in about 1 percent of obese people and contribute to weight gain from an early age. The researchers compared three groups of people: eight people who were obese due to a problem in the MC4R gene, 10 people who were overweight or obese without the gene mutation and eight people who were normal weight. They performed functional magnetic resonance imaging (fMRI) scans to look at how the reward centers in the brain were activated by pictures of appetizing food such as chocolate cake compared to bland food such as rice or broccoli and non-food items such as staplers.
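The comparison behind such a scan analysis can be sketched simply: for each participant, subtract the mean reward-center response to bland pictures from the response to appetizing ones, then average that contrast within each group. The response values below are invented for illustration, not data from the study.

```python
# Sketch of the contrast logic behind the fMRI comparison: per-participant
# (appetizing minus bland) response, averaged per group. Values are invented.

def food_cue_contrast(appetizing, bland):
    """Per-participant contrast: mean appetizing response minus mean bland."""
    return sum(appetizing) / len(appetizing) - sum(bland) / len(bland)

def group_mean_contrast(group):
    """Average the contrast over all participants in a group."""
    contrasts = [food_cue_contrast(a, b) for a, b in group]
    return sum(contrasts) / len(contrasts)

# (appetizing responses, bland responses) per participant, arbitrary units.
normal_weight = [([1.2, 1.4], [0.3, 0.5]), ([1.0, 1.1], [0.4, 0.2])]
overweight    = [([0.6, 0.5], [0.4, 0.3]), ([0.5, 0.7], [0.3, 0.5])]
print(group_mean_contrast(normal_weight) > group_mean_contrast(overweight))  # → True
```

A group whose reward centers respond no more to cake than to broccoli would show a contrast near zero, which is the pattern the study reports for overweight volunteers without the MC4R mutation.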

“In our study, we found that people with the MC4R mutation responded in the same way as normal weight people, while the overweight people without the gene problem had a lower response,” said lead researcher Agatha van der Klaauw, MD, PhD, of the Wellcome Trust-MRC Institute of Metabolic Science at Addenbrooke’s Hospital in Cambridge, U.K. “In fact, the brain’s reward centers light up when people with the mutation and normal weight people viewed pictures of appetizing foods. But overweight people without the mutation did not have the same level of response.”

The scans revealed that obese people with the MC4R mutation had similar activity in the reward centers of the brain when shown a picture of a dessert like cake or chocolate as normal weight people. The researchers found that, in contrast, the reward centers were underactive in overweight and obese volunteers who did not have the gene mutation. This finding is intriguing as it shows a completely different response in two groups of people of the same age and weight.

“For the first time, we are seeing that the MC4R pathway is involved in the brain’s response to food cues and its underactivity in some overweight people,” van der Klaauw said. “Understanding this pathway may help in developing interventions to limit the overconsumption of highly palatable foods that can lead to weight gain.”

To address the obesity epidemic, the Cambridge team is continuing to study the pathways in the brain that coordinate the need to eat and the reward and pleasure of eating.

(Source: endocrine.org)

Filed under obesity MC4R melanocortin gene mutations brain activity neuroscience science

194 notes

Autistic brain less flexible at taking on tasks

The brains of children with autism are relatively inflexible at switching from rest to task performance, according to a new brain-imaging study from the Stanford University School of Medicine.

Instead of changing to accommodate a job, connectivity in key brain networks of autistic children looks similar to connectivity in the resting brain. And the greater this inflexibility, the more severe the child’s manifestations of repetitive and restrictive behaviors that characterize autism, the study found.


The study, published online July 29 in Cerebral Cortex, used functional magnetic resonance imaging, or fMRI, to examine children’s brain activity at rest and during two tasks: solving simple math problems and looking at pictures of different faces. The study included an equal number of children with and without autism. The developmental disorder, which now affects one of every 68 children in the United States, is characterized by social and communication deficits, repetitive behaviors and sensory problems.

“We wanted to test the idea that a flexible brain is necessary for flexible behaviors,” said Lucina Uddin, PhD, a lead author of the study. “What we found was that across a set of brain connections known to be important for switching between different tasks, children with autism showed reduced ‘brain flexibility’ compared with typically developing peers.” Uddin, who is now an assistant professor of psychology at the University of Miami, was a postdoctoral scholar at Stanford when the research was conducted.

“The fact that we can tie this neurophysiological brain-state inflexibility to behavioral inflexibility is an important finding because it gives us clues about what kinds of processes go awry in autism,” said Vinod Menon, PhD, the Rachel L. and Walter F. Nichols, MD, professor of psychiatry and behavioral sciences at Stanford and the senior author of the study.

Tracking shifts in connectivity

The researchers focused on a network of brain areas they have studied before. These areas are involved in making decisions, performing social tasks and identifying relevant events in the environment to guide behavior. The team’s prior work showed that, in children with autism, activity in these areas was more tightly connected when the brain was at rest than it was in children who didn’t have autism.

The new research shows that, in autism, connectivity in these networks that can be seen on fMRI scans is fairly similar regardless of whether the brain is at rest or performing a task. In contrast, typically developing children have a larger shift in brain connectivity when they perform tasks.

The study looked at 34 kids with autism and 34 typically developing children. All of the children with autism received standard clinical evaluations to characterize the severity of their disorder. Then, the two groups were split in half: 17 children with autism and 17 typically developing children had their brains scanned with fMRI while at rest and while performing simple arithmetic problems. The remaining children had their brains scanned at rest and during a task that asked them to distinguish between different people’s faces. The facial recognition task was chosen because autism is characterized by social deficits; the math task was chosen to reflect an area in which children with autism do not usually have deficits.

Children with autism performed as well as their typically developing peers on both tasks — that is, they were as good at distinguishing between the faces and solving the math problems. However, their brain scan results were different. In addition to the reduced brain flexibility, the researchers showed a correlation between the degree of inflexibility and the severity of restrictive and repetitive behaviors, such as performing the same routine over and over or being obsessed with a favorite topic.
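The reported link between inflexibility and symptom severity is a correlation across children, which can be computed as Pearson's r between a per-child inflexibility measure (how similar task connectivity is to rest connectivity) and a behavior score. The numbers below are illustrative, not the study's data.

```python
# Sketch of the reported correlation: Pearson's r between a per-child
# "inflexibility" measure and restrictive/repetitive behavior severity.
# All values are invented for illustration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

inflexibility = [0.91, 0.85, 0.78, 0.70, 0.66]   # rest/task connectivity similarity
severity      = [24, 21, 18, 14, 12]             # repetitive-behavior scores
print(pearson_r(inflexibility, severity))         # close to +1: more inflexible, more severe
```

A positive r means children whose task-state connectivity stays closest to their resting state also show the most severe restrictive and repetitive behaviors, which is the direction of the relationship the study describes.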

“This is the first study that has examined how the patterns of intrinsic brain connectivity change with a cognitive load in children with autism,” Menon said. The research is the first to demonstrate that brain connectivity in children with autism changes less, relative to rest, in response to a task than the brains of other children, he added.

Guidance for new therapies

“The findings may help researchers evaluate the effects of different autism therapies,” said Kaustubh Supekar, PhD, a research associate and the other lead author of the study. “Therapies that increase the brain’s flexibility at switching from rest to goal-directed behaviors may be a good target, for instance.”

“We’re making progress in identifying a brain basis of autism, and we’re starting to get traction in pinpointing systems and signaling mechanisms that are not functioning properly,” Menon said. “This is giving us a better handle both in thinking about treatment and in looking at change or plasticity in the brain.”

(Source: med.stanford.edu)

Filed under autism brain activity neuroimaging default mode network neuroscience science

145 notes

(Image caption: Techniques known as dimensionality reduction can help find patterns in the recorded activity of thousands of neurons. Rather than look at all responses at once, these methods find a smaller set of dimensions — in this case three — that capture as much structure in the data as possible. Each trace in these graphics represents the activity of the whole brain during a single presentation of a moving stimulus, and different versions of the analysis capture structure related either to the passage of time (left) or the direction of the motion (right). The raw data is the same in both cases, but the analyses find different patterns. Credit: Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas Sofroniew, Davis Bennett, Joshua Rosen, Chao-Tsung Yang, Loren Looger, Philipp Keller, Misha Ahrens)

New Tools Help Neuroscientists Analyze Big Data

In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It’s how Facebook and Google mine your web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.

New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works – but only if researchers can interpret it. To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute’s Janelia Research Campus.

Thunder speeds the analysis of data sets that are so large and complex they would take days or weeks to analyze on a single workstation – if a single workstation could do it at all. Janelia group leaders Jeremy Freeman, Misha Ahrens, and other colleagues at Janelia and the University of California, Berkeley, report in the July 27, 2014, issue of the journal Nature Methods that they have used Thunder to quickly find patterns in high-resolution images collected from the brains of active zebrafish and mice with multiple imaging techniques.

Importantly, they have used Thunder to analyze imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli. That technology is described in a companion paper published in the same issue of Nature Methods.

Thunder can run on a private cluster or on Amazon’s cloud computing services. Researchers can find everything they need to begin using the open source library of tools at http://freeman-lab.github.io/thunder

New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before. Yet all that detail comes encoded in gigabytes or even terabytes of data. On a single workstation, simple calculations can take hours. “For a lot of these data sets, a single machine is just not going to cut it,” Freeman says.

It’s not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity. “When you record information from the brain, you don’t know the best way to get the information that you need out of it. Every data set is different. You have ideas, but whether or not they generate insights is an open question until you actually apply them,” says Ahrens.

Neuroscientists rarely arrive at new insights about the brain the first time they consider their data, he explains. Instead, an initial analysis may hint at a more promising approach, and with a few adjustments and a new computational analysis, the data may begin to look more meaningful. “Being able to apply these analyses quickly — one after the other — is important. Speed gives a researcher more flexibility to explore and get new ideas.”

That’s why trying to analyze neuroscience data with slow computational tools can be so frustrating. “For some analyses, you can load the data, start it running, and then come back the next day,” Freeman says. “But if you need to tweak the analysis and run it again, then you have to wait another night.” For larger data sets, the lag time might be weeks or months.

Distributed computing was an obvious solution to accelerate analysis while exploring the full richness of a data set, but many alternatives are available. Freeman chose to build on a new platform called Spark. Developed at the University of California, Berkeley’s AMPLab, Spark is rapidly becoming a favored tool for large-scale computing across industry, Freeman says. Spark’s capability for data caching eliminates the bottleneck of loading a complete data set for all but the initial step, making it well-suited for interactive, exploratory analysis, and for complex algorithms requiring repeated operations on the same data. And Spark’s elegant and versatile application programming interfaces (APIs) help simplify development. Thunder uses the Python API, which Freeman hopes will make it particularly easy for others to adopt, given Python’s increasing use in neuroscience and data science.
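The pattern that Spark provides, and that Thunder builds on, is the familiar map/reduce idiom: a large recording is split into partitions, a per-partition statistic is computed in parallel, and the partial results are combined. The sketch below illustrates that pattern in plain Python, with a thread pool standing in for a cluster; it is not Spark or Thunder code, and the data are invented.

```python
# Plain-Python sketch of the map/reduce pattern Thunder delegates to Spark:
# partition the data, map a per-partition statistic, reduce to a global answer.
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_parts):
    """Split a flat list of samples into roughly equal partitions."""
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_sums(part):
    """Map step: per-partition (sum, count) toward a running mean."""
    return (sum(part), len(part))

def distributed_mean(data, n_parts=4):
    """Reduce step: combine partial results into one global mean."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(partial_sums, partition(data, n_parts)))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

# Mean signal across a (toy) recording of 1,000 samples.
recording = [i % 7 for i in range(1000)]
print(distributed_mean(recording))
```

On a real cluster, Spark additionally caches the partitions in memory across workers, which is why a second, tweaked analysis over the same data does not pay the loading cost again.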

To make Spark suitable for analyzing a broad range of neuroscience data – information about connectivity and activity collected from different organisms and with different techniques – Freeman first developed standardized representations of data that were amenable to distributed computing. He then worked to translate typical neuroscience workflows into the computational language of Spark.

From there, he says, the biological questions that he and his colleagues were curious about drove development. “We started with our questions about the biology, then came up with the analyses and developed the tools,” he says.

The result is a modular set of tools that will expand as the Janelia team — and the neuroscience community — add new components. “The analyses we developed are building blocks,” says Ahrens. “The development of new analyses for interpreting large-scale recording is an active field and goes hand-in-hand with the development of resources for large-scale computing and imaging. The algorithms in our paper are a starting point.”

Using Thunder, Freeman, Ahrens, and their colleagues analyzed images of the brain in minutes, interacting with and revising analyses without the lengthy delays associated with previous methods. In images taken of a mouse brain with a two-photon microscope, for example, the team found cells in the brain whose activity varied with running speed.

For analyzing much larger data sets, tools such as Thunder are not just helpful, they are essential, the scientists say. This is true for the information collected by the new microscope that Ahrens and colleagues developed for monitoring whole-brain activity in response to visual stimuli.

Last year, Ahrens and Janelia group leader Philipp Keller used high-speed light-sheet imaging to engineer a microscope that captures neuronal activity cell by cell across nearly the entire brain of a larval zebrafish. That microscope produced stunning images of neurons in the zebrafish brain firing while the fish was inactive. But Ahrens wanted to use the technology to study the brain’s activity in more complex situations. Now, the team has combined their original technology with a virtual-reality swim simulator that Ahrens previously developed to provide fish with visual feedback that simulates movement.

In a light sheet microscope, a sheet of laser light scans across a sample, illuminating a thin section at a time. To enable a fish in the microscope to see and respond to its virtual-reality environment, Ahrens’ team needed to protect its eyes. So they programmed the laser to quickly shut off when its light sheet approaches the eye and restart once the area is cleared. Then they introduced a second laser that scans the sample from a different angle to ensure that the region of the brain behind the eyes is imaged. Together, the two lasers image the brain with nearly complete coverage without interfering with the animal’s vision.
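The eye-protection step described above amounts to a simple gating rule: blank the laser whenever the advancing light sheet would sweep an eye, re-enable it elsewhere. The sketch below illustrates that rule; the coordinates are invented and do not reflect the instrument's real geometry or control code.

```python
# Sketch of the eye-protection gating: the laser is blanked whenever the
# advancing light sheet overlaps an eye region. Coordinates are illustrative.

EYE_REGIONS = [(40, 55), (70, 85)]  # scan positions (microns) occupied by the eyes

def laser_on(scan_position, regions=EYE_REGIONS):
    """Return False while the sheet overlaps an eye, True elsewhere."""
    return not any(lo <= scan_position <= hi for lo, hi in regions)

# Simulate one sweep of the sheet across the head.
sweep = [laser_on(pos) for pos in range(0, 120, 10)]
print(sweep)  # False only at positions inside the eye regions
```

In the actual microscope, the regions this first laser skips are covered by the second laser scanning from a different angle, so the two together image the brain with nearly complete coverage.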

Combining these two technologies lets Ahrens monitor activity throughout the brain as a fish adjusts its behavior based on the sensory information it receives. The technique generates about a terabyte of data in an hour – presenting a data analysis challenge that helped motivate the development of Thunder. When Freeman and Ahrens applied their new tools to the data, patterns quickly emerged. As examples, they identified cells whose activity was associated with movement in particular directions and cells that fired specifically when the fish was at rest, and were able to characterize the dynamics of those cells’ activities. Example analyses like these, and example data sets, are available at the website http://research.janelia.org/zebrafish/.

Ahrens now plans to explore more complex questions using the new technology, and both he and Freeman foresee expansion of Thunder. “At every level, this is really just the beginning,” Freeman says.

Filed under brain activity zebrafish Thunder computational analysis neuroscience science
