Neuroscience

Articles and news from the latest research reports.

Posts tagged neural activity

144 notes

Heads or tails? Random fluctuations in brain cell activity may determine toss-up decisions

Life presents us with choices all the time: salad or pizza for lunch? Tea or coffee afterward? How we make these everyday decisions has been a topic of great interest to economists, who have devised theories about how we assign values to our options and use those values to make decisions.

image

An emerging field of study known as neuroeconomics is combining the economists’ insights with scientific study of the brain to learn more about decision-making processes and how they can go awry. In the Dec. 8 issue of Neuron, one of the field’s founders reports new links between brain cell activity and choices where two options have equal appeal.

“Neuroeconomics is not only helpful for the development of better economic theory, it is also relevant from a clinical point of view,” said author Camillo Padoa-Schioppa, PhD, assistant professor of neurobiology, economics and of biomedical engineering at Washington University School of Medicine in St. Louis. “There are a number of conditions that involve impaired economic decision-making, including drug addiction, brain injury, some forms of dementia, schizophrenia and obsessive-compulsive disorder.”

Scientists know that the orbitofrontal cortex, a region of the brain behind and above the eyes, plays a key role in making decisions. Patients with injuries to this part of the brain are often spectacularly bad at making decisions. They may do things like abandon longstanding relationships, gamble away money or lose it to swindlers, or become addicted to drugs.

To study the roles brain cells play in decision-making, Padoa-Schioppa developed a system for presenting primates with a choice between two drinks, such as grape juice or apple juice. The type and amount of the drink vary, and researchers record the activity of individual brain neurons as the primates choose.

Based on the decisions of a single animal over multiple trials, scientists infer the subjective value the animal assigns to each drink and then look for ways this value is encoded in brain cells.

“For example, if we offer a larger amount of apple juice versus a smaller amount of grape juice, and the primate chooses each option equally often, we infer that this primate likes the grape juice better than the apple juice,” he explained. “The primate could be getting more juice by choosing the cup with apple juice, but it doesn’t always do so. That implies that the primate values grape juice more than apple juice.”
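The inference described above can be sketched in code. The following is a hypothetical illustration, not the lab's actual analysis: for each tested quantity ratio of juice A to juice B, record the fraction of trials on which the animal chose B, then interpolate to the ratio at which both options are chosen equally often. That indifference point expresses the value of one unit of B in units of A.

```python
# Hedged sketch of inferring relative subjective value from choice data.
# offer_ratios: increasing ratios qa/qb tested across trials.
# frac_chose_b: observed fraction of B choices at each ratio
# (rising with the ratio, since more A on offer makes B less attractive... and
# vice versa; here a higher ratio qa/qb favors choosing B).

def indifference_ratio(offer_ratios, frac_chose_b):
    pairs = list(zip(offer_ratios, frac_chose_b))
    for (r0, p0), (r1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:  # the 50/50 point lies in this interval
            if p1 == p0:
                return (r0 + r1) / 2
            # linear interpolation to the ratio where P(choose B) = 0.5
            return r0 + (0.5 - p0) * (r1 - r0) / (p1 - p0)
    return None  # no indifference point within the tested range
```

For example, if the animal chooses B on 40 percent of trials at a 2:1 offer and 70 percent at 3:1, the interpolated indifference point is about 2.33, i.e. one unit of grape juice is worth roughly 2.33 units of apple juice to that animal.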

In 2006, Padoa-Schioppa and Harvard colleague John Assad, PhD, won international attention for using this system to identify brain cells whose firing rates encoded the subjective value of drink choices.

In a new analysis of data from the original experiment, Padoa-Schioppa showed that different groups of cells in the orbitofrontal cortex reflect different stages of the decision-making process.

“Some neurons encode the value of individual drinks; other neurons encode the choice outcome in a binary way ‒ these cells are either firing or silent depending on the chosen drink,” he explained. “Yet other neurons encode the value of the chosen option.”

Padoa-Schioppa then examined how different groups of cells determine decisions between options of equal value. He showed that toss-up decisions seemed to depend on changes in the initial state of the network of neurons in the orbitofrontal cortex.

“The fluctuations in the network took place before the primates were even offered a choice of juices, but they seem to somehow bias the decision,” Padoa-Schioppa said. “Neuronal signals are always noisy. In essence, close-call decisions are partly determined by random noise.”

He also found that decisions on choices of equal value were linked to the ease or difficulty with which nerve cells in parts of the orbitofrontal cortex communicate with each other. This property, known as synaptic efficacy, can be adjusted by the brain as part of the process of encoding information.

According to Padoa-Schioppa, these results provide new insights into the neuronal circuits that underlie economic decisions. He and his colleagues are using them to create a computational model of decision-making.

“The next step is to test that model,” Padoa-Schioppa said. “For example, we would like to bias decisions by artificially manipulating the activity of specific groups of cells.”

(Source: news.wustl.edu)

Filed under decision making orbitofrontal cortex neural activity neurons neuroscience science

118 notes

Recurring memory traces boost long-lasting memories

While the human brain is in a resting state, patterns of neuronal activity associated with specific memories may spontaneously reappear. Such recurrences contribute to memory consolidation, i.e. to the stabilization of memory contents. Scientists at the DZNE and the University of Bonn report these findings in the current issue of The Journal of Neuroscience. The researchers, headed by Nikolai Axmacher, performed a memory test on a group of volunteers while monitoring their brain activity by functional magnetic resonance imaging (fMRI). The experimental setup comprised several resting states, including a nap inside a neuroimaging scanner. The study indicates that resting periods can generally promote memory performance.

Depending on one’s mood and activity, different regions of the human brain are active. Perceptions and thoughts also influence this state, resulting in a pattern of neuronal activity linked to the experienced situation. When the situation is recalled, similar patterns, dormant in the brain, are reactivated. How this happens is still largely unknown.

image

The prevalent theory of memory formation assumes that memories are stored in a gradual manner. At first, the brain stores new information only temporarily. For memories to remain in the long term, a further step is required. “We call it consolidation,” explains Dr. Nikolai Axmacher, a researcher at the Department of Epileptology of the University of Bonn and at the Bonn site of the DZNE. “We do not know exactly how this happens. However, studies suggest that a process we call reactivation is of importance. When this occurs, the brain replays activity patterns associated with a particular memory. In principle, this is a familiar concept. It is a fact that things that are actively repeated and practiced are better memorized. However, we assume that a reactivation of memory contents may also happen spontaneously, without an external trigger.”

A memory test inside the scanner
Axmacher and his team tested this hypothesis in an experiment involving ten healthy participants with an average age of 24 years. They were shown a series of pictures depicting, among other things, frogs, trees, airplanes and people. Each picture was labeled with a white square at a different location, and the subjects were asked to memorize the position of the square. At the end of the experiment, all images were shown again, but this time without the label. The study participants were then asked to indicate with a mouse cursor where the missing mark had originally been located. Memory performance was measured as the distance between the correct and the indicated position.
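The scoring rule just described is simply the straight-line distance between two screen positions. A minimal sketch (the function name is illustrative, not from the study):

```python
import math

# Memory error as described above: the distance between the square's
# true position and the position the participant indicated with the
# mouse cursor. Smaller values mean better memory performance.

def memory_error(true_pos, indicated_pos):
    return math.dist(true_pos, indicated_pos)  # Euclidean distance
```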

“This is an associative task. Visual and spatial perceptions have to be linked together,” the researcher explains. “Such tasks involve several brain regions. These include the visual cortex and the hippocampus, which takes part in many memory processes.”

Brain activity was recorded by fMRI during the entire experiment, which lasted several hours and included resting periods and a nap inside the neuroimaging scanner.

Recurrent brain patterns increased the accuracy
For data processing, a pattern-recognition algorithm was trained to look for similarities between neuronal patterns observed during initial encoding and patterns appearing on later occasions. “This method is complex, but quite effective,” Axmacher says. “Analysis showed that neuronal activity associated with images that were shown initially did reappear during subsequent resting periods and in the sleeping phase.”
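The core idea of such an analysis can be sketched simply. This is a hedged illustration, not the paper's actual classifier: correlate the voxel pattern recorded while an image was first encoded with the pattern in each time window of a later rest period, and count the windows whose similarity exceeds a threshold as candidate reactivations.

```python
import numpy as np

# Count how often an encoding pattern "reappears" during rest, using
# Pearson correlation between voxel vectors as the similarity measure.
# Assumes each window has non-constant activity (non-zero variance).

def count_reactivations(encoding_pattern, rest_windows, threshold=0.3):
    enc = (encoding_pattern - encoding_pattern.mean()) / encoding_pattern.std()
    hits = 0
    for w in rest_windows:
        wz = (w - w.mean()) / w.std()
        r = np.mean(enc * wz)  # Pearson r of the two z-scored patterns
        if r > threshold:
            hits += 1
    return hits
```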

Memory performance correlated with the replay of neuronal activity patterns. “The more frequently a pattern had reappeared, the more accurately test participants could label the corresponding image,” Axmacher summarizes the findings. “These results support our assumption that neural patterns can spontaneously reappear and that they promote the formation of long-lasting memory contents. There was already evidence for this from animal studies. Our experiment shows that this phenomenon also happens in humans.”

Memory performance benefits from resting periods
The study indicates that resting periods can generally foster memory performance. “However, our data did not show whether sleeping had a particular effect. This may be due to the experimental setup, which only allowed for a comparatively short nap,” Axmacher reckons. “By contrast, night sleep is considered to be beneficial for the consolidation of memory contents. But it usually takes many hours and includes multiple transitions between different stages of sleep. However, other studies suggest that even short naps may positively affect memory consolidation.”

An objective look at memory contents
It remains a matter of speculation whether the recurring brain patterns triggered conscious memories or whether they remained below the threshold of perception. “I think it is reasonable to assume that during resting periods the test participants let their minds wander and recalled images they had just seen. But this is a matter of the subjective perception of the test participants. That’s something we did not look at because it is not essential for our investigation,” Axmacher says. “The strength of our approach lies rather in the fact that we look at memory contents from the outside, in an objective manner, and that we can evaluate them by pattern recognition. This opens ways to many questions of research. For example, brain patterns that reoccur spontaneously are also of interest in the context of experimental dream research.”

(Source: dzne.de)

Filed under brain mapping neural activity memory consolidation neuroimaging neuroscience science

152 notes

Memories Are ‘Geotagged’ With Spatial Information

Using a video game in which people navigate through a virtual town delivering objects to specific locations, a team of neuroscientists from the University of Pennsylvania and Freiburg University has discovered how brain cells that encode spatial information form “geotags” for specific memories and are activated immediately before those memories are recalled.

Their work shows how spatial information is incorporated into memories and why remembering an experience can quickly bring to mind other events that happened in the same place.

"These findings provide the first direct neural evidence for the idea that the human memory system tags memories with information about where and when they were formed and that the act of recall involves the reinstatement of these tags," said Michael Kahana, professor of psychology in Penn’s School of Arts and Sciences.

The study was led by Kahana and professor Andreas Schulze-Bonhage of Freiburg. Jonathan F. Miller, Alec Solway, Max Merkow and Sean M. Polyn, all members of Kahana’s lab, and Markus Neufang, Armin Brandt, Michael Trippel, Irina Mader and Stefan Hefft, all members of Schulze-Bonhage’s lab, contributed to the study. They also collaborated with Drexel University’s Joshua Jacobs.

Their study was published in the journal Science.

Kahana and his colleagues have long conducted research with epilepsy patients who have electrodes implanted in their brains as part of their treatment. The electrodes directly capture electrical activity from throughout the brain while the patients participate in experiments from their hospital beds.

As with earlier spatial memory experiments conducted by Kahana’s group, this study involved playing a simple video game on a bedside computer. The game in this experiment involved making deliveries to stores in a virtual city. The participants were first given a period where they were allowed to freely explore the city and learn the stores’ locations. When the game began, participants were only instructed where their next stop was, without being told what they were delivering. After they reached their destination, the game would reveal the item that had been delivered, and then give the participant their next stop.

After 13 deliveries, the screen went blank and participants were asked to remember and name as many of the items they had delivered in the order they came to mind.

This allowed the researchers to correlate the neural activation associated with the formation of spatial memories (the locations of the stores) and the recall of episodic memories (the list of items that had been delivered).

“A challenge in studying memory in naturalistic settings is that we cannot create a realistic experience where the experimenter retains control over and can measure every aspect of what the participant does and sees. Virtual reality solves that problem,” Kahana said. “Having these patients play our games allows us to record every action they take in the game and to measure the responses of neurons both during spatial navigation and then later during verbal recall.”

By asking participants to recall the items they delivered instead of the stores they visited, the researchers could test whether their spatial memory systems were being activated even when episodic memories were being accessed. The map-like nature of the neurons associated with spatial memory made this comparison possible.

"During navigation, neurons in the hippocampus and neighboring regions can often represent the patient’s virtual location within the town, kind of like a brain GPS device," Kahana said. "These so-called ‘place cells’ are perhaps the most striking example of a neuron that encodes an abstract cognitive representation."

Using the brain recordings generated while the participants navigated the city, the researchers were able to develop a neural map that corresponded to the city’s layout. As participants passed by a particular store, the researchers correlated their spatial memory of that location with the pattern of place cell activation recorded. To avoid confounding the episodic memories of the items delivered with the spatial memory of a store’s location, the researchers excluded trips that were directly to or from that store when placing it on the neural map.

With maps of place cell activations in hand, the researchers were able to cross-reference each participant’s spatial memories as they accessed their episodic memories of the delivered items. The researchers found that the neurons associated with a particular region of the map activated immediately before a participant named the item that was delivered to a store in that region.

“This means that if we were given just the place cell activations of a participant,” Kahana said, “we could predict, with better than chance accuracy, the item he or she was recalling. And while we cannot distinguish whether these spatial memories are actually helping the participants access their episodic memories or are just coming along for the ride, we’re seeing that this place cell activation plays a role in the memory retrieval processes.”

Earlier neuroscience research in both human and animal cognition had suggested the hippocampus has two distinct roles: the role of cartographer, tracking location information for spatial memory, and the role of scribe, recording events for episodic memory. This experiment provides further evidence that these roles are intertwined.

“Our finding that spontaneous recall of a memory activates its neural geotag suggests that spatial and episodic memory functions of the hippocampus are intimately related and may reflect a common functional architecture,” Kahana said.
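The decoding logic described in this article can be sketched as a simple nearest-centroid classifier. This is a hypothetical illustration of the idea, not the researchers' actual pipeline: average place-cell activity per city region during navigation to build the "neural map," then, at recall, guess the region whose map entry best matches the current activation vector.

```python
import numpy as np

# Build a "neural map": mean place-cell activation vector per city region,
# estimated from patterns recorded while the participant navigated.
def build_neural_map(navigation_patterns, region_labels):
    regions = sorted(set(region_labels))
    return {r: np.mean([p for p, l in zip(navigation_patterns, region_labels)
                        if l == r], axis=0)
            for r in regions}

# Decode a recall-time activation vector: pick the region whose mean
# navigation pattern is closest (nearest-centroid decoding).
def decode_region(neural_map, recall_pattern):
    return min(neural_map,
               key=lambda r: np.linalg.norm(neural_map[r] - recall_pattern))
```

Better-than-chance decoding of the recalled item's store region from such activations is exactly the kind of evidence the study reports for spatial "geotags" on episodic memories.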

Filed under hippocampus spatial navigation episodic memory neural activity virtual reality psychology neuroscience science

191 notes

Study connects dots between genes and human behavior

Establishing links between genes, the brain and human behavior is a central issue in cognitive neuroscience research, but studying how genes influence cognitive abilities and behavior as the brain develops from childhood to adulthood has proven difficult.

Now, an international team of scientists has made inroads to understanding how genes influence brain structure and cognitive abilities and how neural circuits produce language.

The team studied individuals with a rare disorder known as Williams syndrome. By measuring neural activity in the brain associated with the distinct language skills and facial recognition abilities that are typical of the syndrome, they showed that Williams is due not to a single gene but to distinct subsets of genes, hinting that the syndrome is more complex than originally thought.

"Solutions to understanding the connections between genes, neural circuits and behavior are now emerging from a unique union of genetics and neuroscience," says Julie Korenberg, a University of Utah professor and an adjunct professor at the Salk Institute, who led the genetics aspects on the new study.

The study was led by Debra Mills, a professor of cognitive neuroscience at Bangor University in Wales. Ursula Bellugi, a professor at the Salk Institute for Biological Studies in La Jolla, was also integrally involved in the research.

Korenberg was convinced that with Mills’ approach of directly measuring the brain’s electrical firing they could solve the puzzle of precisely which genes were responsible for building the brain wiring underlying the different reaction to human faces in Williams syndrome.

"We also discovered," says Mills, "that in those with Williams syndrome, the brain processes language and faces abnormally from early childhood through middle age. This was a surprise because previous studies had suggested that part of the Williams brain functions normally in adulthood, with little understanding about how it developed."

The results of the study were published November 12, 2013 in Developmental Neuropsychology.

Williams syndrome is caused by the deletion of one of the two usual copies of approximately 25 genes from chromosome 7, resulting in mental impairment. Nearly everyone with the condition is missing these same genes, although a few rare individuals retain one or more genes that most people with Williams have lost. Korenberg was the early pioneer of studying these individuals with partial gene deletions as a way of gathering clues to the specific function of those genes and gene networks. The syndrome affects approximately 1 in 10,000 people around the world, including an estimated 20,000 to 30,000 individuals in the United States.

Although individuals with Williams experience developmental delays and learning disabilities, they are exceptionally sociable and possess remarkable verbal abilities and facial recognition skills in relation to their lower IQ. Bellugi has long observed that sociability also seems to drive language and has spent much of her career studying those with Williams syndrome.

"Williams offers us a window into how the brain works at many different levels," says Bellugi. "We have the tools to measure the different cognitive abilities associated with the syndrome, and thanks to Julie and Debbie we are now able to combine this with studies of the underlying genetic and neurological aspects."

Suspecting that specific genes might lie at the origins of brain plasticity, functional changes in the brain that occur with new knowledge or experiences, and that these genes might be linked to the unusual proficiencies of those with Williams, the team enrolled individuals of various ages in their study. They drew from children, adolescents and adults who all had the full genetic deletion for Williams syndrome and compared them with their non-affected peers. Their study is additionally significant for being one of the first to examine the brain structure and its functioning in children with Williams. And, as Korenberg predicted, a critical piece of the puzzle came from including in their study two adults with partial genetic deletions for Williams.

Using highly sensitive sensors to measure brain activity, the researchers, led by Mills, presented their study participants with both visual and auditory stimuli in the form of unfamiliar faces and spoken sentences. They charted the small changes in voltage generated by the areas of the brain responding to these stimuli, measurements known as event-related potentials (ERPs). Mills was the first to publish studies on Williams syndrome using ERPs, developed the ERP markers for this study, and oversaw its design and analysis.
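The basic ERP computation is worth spelling out: an event-related potential is obtained by averaging many stimulus-locked epochs of the voltage trace, so that activity unrelated to the stimulus averages out. A minimal sketch (window sizes and names are illustrative):

```python
import numpy as np

# Average stimulus-locked EEG epochs into an event-related potential.
# eeg: 1-D voltage trace for one sensor; event_samples: stimulus onsets
# (sample indices); window: samples to keep around each onset.
def event_related_potential(eeg, event_samples, window=(-10, 50)):
    pre, post = window
    epochs = [eeg[s + pre:s + post] for s in event_samples
              if s + pre >= 0 and s + post <= len(eeg)]
    return np.mean(epochs, axis=0)  # noise cancels, stimulus-locked signal remains
```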

Mills identified ERP markers of brain plasticity in Williams syndrome in children and adults of varying ages and developmental stages. These findings are important because the brains of people with Williams are structured differently than those of people without the syndrome. In the Williams brain, the dorsal areas (along the back and top), which help control vision and spatial understanding, are undersized. The ventral areas (at the front and the bottom), which influence language, facial recognition, emotion and social drive, are relatively normal in size.

It was previously believed that in individuals with Williams, the ventral portion of the brain operated normally. What the team discovered, however, was that this area of the brain also processed information differently than those without the syndrome, and did so throughout development, from childhood to the adult years. This suggests that the brain was compensating in order to analyze information; in other words, it was exhibiting plasticity. Of additional importance, the distinct ERP markers identified by Mills are so characteristic of the different brain organization in Williams that this information alone is approximately 90 percent accurate when analyzing brain activity to identify someone with Williams syndrome.

Other key findings of the study resulted from comparing the ERPs of participants with the full Williams deletion with those of participants with partial genetic deletions. While psychological tests focused on facial recognition show no difference between these groups, the scientists found differences in these recognition abilities on the ERP measurements, which look directly at neural activity. Thus, the scientists were able to see how very slight genetic differences affected brain activity, which will allow them to identify the roles of subsets of Williams genes in brain development and in adult facial recognition abilities.

By combining these one-in-a-million people with tools capable of directly measuring brain activity, the scientists now have the unprecedented opportunity to study the genetic underpinnings of mental disorders. The results of this study not only advance science’s understanding of the links between genes, the brain and behavior, but may lead to new insight into such disorders as autism, Down syndrome and schizophrenia.

"By greatly narrowing the specific genes involved in social disorders, our findings will help uncover targets for treatment and provide measures by which these and other treatments are successful in alleviating the desperation of autism, anxiety and other disorders," says Korenberg.

(Source: salk.edu)

Filed under williams syndrome neural activity brain activity plasticity genes brain development neuroscience science

92 notes

Swarming insect provides clues to how the brain processes smells

Our sense of smell is often the first response to environmental stimuli. Odors trigger neurons in the brain that alert us to take action. However, there is often more than one odor in the environment, such as in coffee shops or grocery stores. How does our brain process multiple odors received simultaneously?

image

Barani Raman, PhD, of the School of Engineering & Applied Science at Washington University in St. Louis, set out to find an answer. Using locusts, which have a relatively simple sensory system ideal for studying brain activity, he found the odors prompted neural activity in the brain that allowed the locust to correctly identify the stimulus, even with other odors present.

The results were published in Nature Neuroscience as the cover story of the December 2013 print issue.

The team uses a computer-controlled pneumatic pump to administer an odor puff to the locust, which has olfactory receptor neurons in its antennae, similar to sensory neurons in our nose. A few seconds after the odor puff is given, the locust gets a piece of grass as a reward, a form of Pavlovian conditioning. As with Pavlov’s dog, which salivated when it heard a bell ring, trained locusts anticipate the reward when the odor used for training is delivered. Instead of salivating, they open their palps, finger-like projections close to the mouthparts, when they predict the reward. Their response took less than half a second. The locusts could recognize the trained odors even when another odor meant to distract them was introduced prior to the target cue.

“We were expecting this result, but the speed with which it was done was surprising,” says Raman, assistant professor of biomedical engineering. “It took only a few hundred milliseconds for the locust’s brain to begin tracking a novel odor introduced in its surrounding. The locusts are processing chemical cues in an extremely rapid fashion.”

“There were some interesting cues in the odors we chose,” Raman says. “Geraniol, which smells like rose to us, was an attractant to the locusts, but citral, which smells like lemon to us, is a repellant to them. This helped us identify principles that are common to odor processing.”

Raman has spent a decade learning how the human brain and olfactory system operate to process scent and odor signals. His research seeks to take inspiration from the biological olfactory system to develop a device for noninvasive chemical sensing. Such a device could be used in homeland security applications to detect volatile chemicals and in medical diagnostics, such as a device to test blood-alcohol level.

This study is the first in a series seeking to understand the principles of olfactory computation, Raman says.

“There is a precursory cue that could tell the brain there is a predator in the environment, and it has to predict what will happen next,” Raman says. “We want to determine what kinds of computations have to be done to make those predictions.”

In addition, the team is looking to answer other questions.

“Neural activity in the early processing centers does not terminate until you stop the odor pulse,” he says. “If you have a lengthy pulse – 5 or 10 seconds long – what is the role of neural activity that persists throughout the stimulus duration and often even after you terminate the stimulus? What are the roles of the neural activity generated at different points in time, and how do they help the system adapt to the environment? Those questions are still not clear.”

(Source: news.wustl.edu)

Filed under olfactory system smell neural activity pavlovian conditioning odor neuroscience science

122 notes

Stress makes snails forgetful

New research on pond snails has revealed that high levels of stress can block memory processes. Researchers from the University of Exeter and the University of Calgary trained snails and found that when they were exposed to multiple stressful events they were unable to remember what they had learned.

Previous research has shown that stress also affects human ability to remember. This study, published in the journal PLOS ONE, found that experiencing multiple stressful events simultaneously has a cumulative detrimental effect on memory.

Dr Sarah Dalesman, a Leverhulme Trust Early Career Fellow from Biosciences at the University of Exeter, formerly at the University of Calgary, said: “It’s really important to study how different forms of stress interact as this is what animals, including people, frequently experience in real life. By training snails, and then observing their behaviour and brain activity following exposure to stressful situations, we found that a single stressful event resulted in some impairment of memory but multiple stressful events prevented any memories from being formed.”

The pond snail, Lymnaea stagnalis, has easily observable behaviours linked to memory and large neurons in the brain, both of which are useful when studying memory processes. Pond snails also respond to stressful events in a similar way to mammals, making them a useful model species for studying learning and memory.

In the study, the pond snails were trained to reduce how often they breathed outside water. Usually pond snails breathe underwater and absorb oxygen through their skin. In water with low oxygen levels, the snails emerge and inhale air using a basic lung that opens to the air via a breathing hole.

To train the snails not to breathe air they were placed in poorly oxygenated water and their breathing holes were gently poked every time they emerged to breathe. Snail memory was tested by observing how many times the snails attempted to breathe air after they had received their training. Memory was considered to be present if there was a reduction in the number of times they opened their breathing holes. The researchers also assessed memory by monitoring neural activity in the brain. 
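
The behavioural scoring rule described above, memory counts as present when breathing-hole openings drop after training, can be written as a small function. The counts and the reduction threshold below are invented for illustration; the researchers’ actual criteria may differ.

```python
def memory_present(openings_before, openings_after, min_reduction=0.25):
    """Toy memory score: True if breathing-hole openings dropped by at
    least `min_reduction` (as a fraction) after training.
    Threshold is hypothetical, not the study's criterion."""
    if openings_before == 0:
        return False  # no baseline behaviour to compare against
    reduction = (openings_before - openings_after) / openings_before
    return reduction >= min_reduction

# Hypothetical counts over a fixed observation period
print(memory_present(openings_before=12, openings_after=5))   # True: memory formed
print(memory_present(openings_before=12, openings_after=11))  # False: no memory
```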

Immediately before training, the snails were exposed to two different stressful experiences: low calcium, which is stressful because calcium is necessary for healthy shells, and overcrowding by other pond snails.

When faced with each stressor individually, the pond snails had a reduced ability to form long-term memory, but were still able to learn and form short- and intermediate-term memory lasting from a few minutes to hours. However, when both stressors were experienced at the same time, they had additive effects on the snails’ ability to form memory, and all learning and memory processes were blocked.

Future work will focus on the effects of stress on different populations of pond snail.

(Source: exeter.ac.uk)

Filed under snail lymnaea stagnalis memory neural activity stress neuroscience science

61 notes

Visual representations improved by reducing noise

Neuroscientist Suresh Krishna from the German Primate Center (DPZ), in cooperation with Annegret Falkner and Michael Goldberg at Columbia University, New York, has revealed how the activity of neurons in an important area of the rhesus macaque’s brain becomes less variable when they represent important visual information during an eye-movement task. This reduction in variability can improve the perceptual strength of attended or relevant aspects of a visual scene, and is enhanced when the animals are more motivated to perform the task.

Humans may see the same object again and again, but their brain response will be different each time, a phenomenon called neuronal noise. The same is true for rhesus macaques, which have a visual system very similar to that of humans. This variability often limits our ability to see a dim object or hear a faint sound. On the other hand, we also benefit from variable responses: they are considered an essential part of the exploration stage of learning and of generating unpredictability during competitive interactions.

Despite this importance, brain variability is poorly understood. The researchers examined the responses of neurons in the monkey brain’s lateral intraparietal area (LIP) while the monkey planned eye movements to spots of light at different locations on a computer screen. LIP is an area of the brain that is crucial for visual attention and for actively exploring visual scenes. To measure the activity of single LIP neurons, the scientists inserted electrodes thinner than a human hair into the monkey’s brain and recorded the neurons’ electrical activity. Because the brain is not pain-sensitive, the insertion of electrodes is painless for the animal.

Suresh Krishna and his colleagues showed that the activity of LIP neurons becomes less variable when the macaque performs a task and plans an eye movement. The reduction in variability was particularly strong where the monkey was planning to look and when the monkey was highly motivated to perform the task. This valley of reduced variability, centered on relevant and interesting aspects of a visual scene, may help the brain filter the most important aspects from the sensory information delivered by the eye. The scientists developed a simple mathematical model that captures the patterns in the data and may also be a useful framework for the analysis of other brain areas.

“Our study represents one of the most detailed descriptions of neuronal variability in the brain. It offers important insights into fascinating brain functions as diverse as the focusing of visual attention and the control of eye movements during active viewing of visual scenes. The brain’s valley of variability that we discovered may help humans and animals to interact with their complex environment,” Suresh Krishna comments on the findings.
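
Trial-to-trial variability of a neuron is commonly summarized by the Fano factor, the variance of its spike counts across repeated trials divided by their mean. The sketch below shows how a drop in that number reflects the reduced variability described above; the spike counts are invented, and this is not the authors’ actual analysis pipeline.

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor = variance / mean of spike counts across trials.
    Lower values mean less trial-to-trial variability."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var() / counts.mean()

# Hypothetical trial-by-trial spike counts from one neuron
baseline = [8, 14, 5, 12, 9, 15, 4, 11]    # passive viewing: noisy responses
planning = [10, 11, 9, 10, 11, 10, 9, 10]  # planning a saccade to the spot

# Variability drops when the stimulus becomes behaviourally relevant
print(fano_factor(baseline) > fano_factor(planning))  # True
```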

Filed under lateral intraparietal area neural activity neuronal noise eye movements neurons neuroscience science

70 notes

Research finds brain scans may aid in diagnosis of autism

Joint research from the University of Alabama at Birmingham Department of Psychology and Auburn University indicates that brain scans show signs of autism that could eventually support behavior-based diagnosis of autism and effective early intervention therapies. The findings appear online today in Frontiers in Human Neuroscience as part of a special issue on brain connectivity in autism.

“This research suggests brain connectivity as a neural signature of autism and may eventually support clinical testing for autism,” said Rajesh Kana, Ph.D., associate professor of psychology and the project’s senior researcher. “We found the information transfer between brain areas, causal influence of one brain area on another, to be weaker in autism.”

The investigators found that brain connectivity data from 19 paths in brain scans predicted whether the participants had autism, with an accuracy rate of 95.9 percent.
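
Predicting group membership from connectivity values and reporting an accuracy figure typically involves cross-validation, training a classifier on all participants but one and testing on the held-out participant. The sketch below uses a nearest-centroid rule with leave-one-out validation on made-up data; the study’s actual features, sample, and classifier differ.

```python
import numpy as np

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier.
    Each row of `features` holds one participant's connectivity values
    (one number per brain path); labels are 0/1 group membership.
    Purely illustrative of the validation scheme, not the study's method."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        held_in = np.arange(len(labels)) != i  # hold out participant i
        centroids = {c: features[held_in & (labels == c)].mean(axis=0)
                     for c in (0, 1)}
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(features[i] - centroids[c]))
        correct += (pred == labels[i])
    return correct / len(labels)

# Hypothetical connectivity strengths on 3 paths: group 1 weaker than group 0
autism  = [[0.20, 0.30, 0.10], [0.25, 0.20, 0.15], [0.30, 0.25, 0.20]]
control = [[0.70, 0.80, 0.60], [0.75, 0.70, 0.65], [0.80, 0.75, 0.70]]
X = autism + control
y = [1, 1, 1, 0, 0, 0]
print(loo_accuracy(X, y))  # 1.0 on this cleanly separated toy data
```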

Kana, working with a team including Gopikrishna Deshpande, Ph.D., from Auburn University’s MRI Research Center, studied 15 high-functioning adolescents and adults with autism, as well as 15 typically developing control participants, ages 16-34 years. All data were collected in Kana’s autism lab at UAB and then analyzed at Auburn using a novel connectivity method.

The current study showed that adults with autism spectrum disorders processed social cues differently than typical controls. It also revealed disrupted brain connectivity that may explain their difficulty in understanding social processes.

“We can see that there are consistently weaker brain regions due to the disrupted brain connectivity,” Kana said. “There’s a very clear difference.”

Participants in this study were asked to choose the most logical of three possible endings as they watched a series of comic strip vignettes while a functional MRI scanner measured brain activity.

The scenes included a glass about to fall off a table and a man enjoying the music of a street violinist and giving him a cash tip. Most participants in the autism group had difficulty in finding a logical end to the violinist scenario, which required an understanding of emotional and mental states. The current study showed that adults with autism spectrum disorders struggle to process subtle social cues, and altered brain connectivity may underlie their difficulty in understanding social processes.

“We can see that the weaker connectivity hinders the cross-talk among brain regions in autism,” Kana said.

Kana plans to continue his research on autism.

“Over the next five to 10 years, our research is going in the direction of finding objective ways to supplement the diagnosis of autism with medical testing and testing the effectiveness of intervention in improving brain connectivity,” Kana said.

Autism is currently diagnosed through interviews and behavioral observation. Although autism can be diagnosed by 18 months, in reality the earliest diagnoses typically occur around ages 4-6, as children face challenges in school or social settings.

“Parents usually have a longer road before getting a firm diagnosis for their child now,” Kana said. “You lose a lot of intervention time, which is so critical. Brain imaging may not be able to replace the current diagnostic measures; but if it can supplement them at an earlier age, that’s going to be really helpful.”

(Source: uab.edu)

Filed under autism brain mapping neural activity neuroimaging neuroscience science

124 notes

Scientists expand the genetic code of mammals to control protein activity in neurons with light

With the flick of a light switch, researchers at the Salk Institute for Biological Studies can change the shape of a protein in the brain of a mouse, turning on the protein at the precise moment they want. This allows the scientists to observe the exact effect of the protein’s activation. The new method, described in the Oct. 16, 2013, issue of the journal Neuron, relies on specially engineered amino acids—the molecules that make up proteins—and light from an LED. Now that it has been shown to work, the technique can be adapted to give researchers control of a wide variety of other proteins in the brain to study their functions.

"What we are now able to do is not only control neuronal activity, but control a specific protein within a neuron," says senior study author Lei Wang, an associate professor in Salk’s Jack H. Skirball Center for Chemical Biology and Proteomics and holder of the Frederick B. Rentschler Developmental Chair.

If a scientist wants to know what set of neurons in the brain is responsible for a particular action or behavior, being able to turn the neurons on and off at will gives the researcher a targeted way to test the neurons’ effects. Likewise, if they want to know the role of a certain protein inside the cells, the ability to activate or inactivate the protein of interest is key to studying its biology.

Over the past decade, researchers have developed a handful of ways of activating or inactivating neurons using light, as part of the burgeoning field of so-called optogenetics. In optogenetic experiments, mice are genetically engineered to have a light-sensitive channel from algae integrated into their neurons. When exposed to light, the channel opens or closes, changing the flow of molecules into the neuron and altering its ability to pass an electrochemical message through the brain. Using such optogenetic approaches, scientists can pick and choose which neurons in the brain they want turned on or off at any given time and observe the resulting change in the engineered mice.

"There’s no question that this is a great way to control neuronal activity, by borrowing light-responsive channels or pumps from other organisms and putting them in neurons," says Wang. "But rather than put a stranger into neurons, we wanted to control the activity of proteins native to neurons."

To make proteins respond to light, Wang’s team harnessed a photo-responsive amino acid, called Cmn, which has a large chemical structure. When a pulse of light shines on the molecule, Cmn’s bulky side chain breaks off, leaving cysteine, a smaller amino acid. Wang’s group realized that if a single Cmn was integrated into the right place in the structure of a protein, the drastic change in the amino acid’s size could activate or inactivate the entire protein.

To test their idea, Wang and his colleagues engineered new versions of a potassium channel (Kir2.1) in neurons, adding Cmn to their sequence.

"Basically the idea was that when you put this amino acid in the pore of the channel, the bulky side chain entirely blocks the passage of ions through the channel," explains Ji-Yong Kang, a graduate student who works in Wang’s group, and first author of the new paper. "Then, when the bond in the amino acid breaks in response to light, the channel is opened up."

The method worked in isolated cells: after trial and error, the scientists found the ideal spot in the channel to put Cmn, so that the channel was initially blocked, but opened when light shone on it. They were able to measure the change to the channel’s properties by recording the electrical current that flowed through the cells before and after exposure to light.
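
The before-and-after measurement described here boils down to a channel that conducts no current while the Cmn side chain occludes the pore and conducts normally once light cleaves it. The toy model below captures just that logic; the conductance value and units are invented, not measured data from the study.

```python
class CmnChannel:
    """Toy model of the Cmn-blocked potassium channel described above.
    Numbers are illustrative only."""

    def __init__(self, conductance_ns=1.0):
        self.blocked = True                # Cmn occludes the pore initially
        self.conductance_ns = conductance_ns

    def light_pulse(self):
        """Light breaks off the bulky Cmn side chain, leaving cysteine
        and unblocking the pore (irreversible in this sketch)."""
        self.blocked = False

    def current_pa(self, voltage_mv):
        # Simple ohmic current, I = g * V, and zero while blocked
        return 0.0 if self.blocked else self.conductance_ns * voltage_mv

ch = CmnChannel()
print(ch.current_pa(-70.0))  # 0.0 before light: pore blocked
ch.light_pulse()
print(ch.current_pa(-70.0))  # -70.0 after light: channel conducts
```

Recording the current at a fixed holding voltage before and after the light pulse is, in this simplified picture, exactly the comparison the researchers used to confirm the channel had opened.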
But to apply the technique to living mice, Wang and his colleagues needed to change the animals’ genetic code—the built-in instructions that cells use to produce proteins based on gene sequences. The normal genetic code doesn’t contain information on Cmn, so simply injecting Cmn amino acids into mice wouldn’t lead to the molecules being integrated into proteins. In the past, the Wang group and others have expanded the genetic codes of isolated cells of simple organisms like bacteria, or yeast, inserting instructions for a new amino acid. But the approach had never been successful in mammals. Through a combination of techniques and new tricks, however, Wang’s team was able to provide embryonic mice with the instructions for the new amino acid, Cmn. With help from Salk Professor Dennis O’Leary and his research associate Daichi Kawaguchi, they then integrated the new Cmn-containing channel into the brains of the developing mice, and showed that by shining light on the brain tissue they could force the channel open, altering patterns of neuron activity. It was not only a first for expanding the genetic code of mammals, but also for protein control.

At the surface, the new approach has the same result as optogenetic approaches to studying the brain—neurons are silenced at a precise time in response to light. But Wang’s method can now be used to study a whole cadre of different proteins in neurons. Aside from being used to open and close channels or pores that let ions flow in and out of brain cells, Cmn could be used to optically regulate protein modifications and protein-protein interactions.

"We can pinpoint exactly which protein, or even which part of a protein, is crucial for the functioning of targeted neurons," says Wang. "If you want to study something like the mechanism of memory formation, it’s not always just a matter of finding what neurons are responsible, but what molecules within those neurons are critical."

Earlier this year, President Obama announced the multi-billion dollar Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a ten-year project to map the activity of the human brain. Creating new ways to study the molecules in the brain, such as using light-responsive amino acids to study neuronal proteins, will be key to moving forward on this initiative and similar efforts to understand the brain, says Wang. His lab is now working to develop ways not only to activate proteins but also to inactivate them using light-sensitive amino acids, and to apply the technique to proteins other than Kir2.1.

Filed under brain mapping optogenetics amino acids neural activity neurons neuroscience science

138 notes

When neurons have less to say, they say it with particular emphasis

The brain is an extremely adaptable organ – but it is also very conservative, according to scientists from the Max Planck Institute of Neurobiology in Martinsried, in collaboration with colleagues from the Friedrich Miescher Institute in Basel and the Ruhr Institute Bochum. The researchers succeeded in demonstrating that neurons in the brain regulate their own excitability so that the activity level in the network remains as constant as possible. Even in the event of major changes, for example the complete absence of information from a sensory organ, the almost silenced neurons re-establish levels of activity similar to their previous ones after only 48 hours. The mean activity level thus achieved is a basic prerequisite for a healthy brain and for the formation of new connections between neurons – an essential capacity for regeneration following injury to the brain or a sensory organ, for example.

Neurons communicate using electrical signals. They transmit these signals to neighbouring cells via special contact points known as synapses. When a new item of information arrives for processing, the cells can develop new synaptic contacts with their neighbouring cells or strengthen existing ones. To enable forgetting, these processes are also reversible. The brain is consequently in a constant state of reorganisation, through which individual neurons are prevented from becoming either too active or too inactive. The aim is to keep the level of activity constant, as the long-term overexcitement of neurons can result in damage to the brain.

Too little activity is not good either. “The cells can only re-establish connections with their neighbours when they are ‘awake’, so to speak, that is when they display a minimum level of activity,” explains Mark Hübener, head of the recently published study. The international team of researchers succeeded in demonstrating for the first time that the brain itself compensates for massive changes in neuronal activity within a period of two days, and can return to a level of activity similar to that before the change.

Up to now, the only indication of this astonishing capacity of the brain came from cell cultures. It was also unclear how neurons could control their own excitability in relation to the activity of the entire network. Now, the scientists have made significant progress towards finding an answer to this question. In their study, they examined the visual cortex of mice that had recently gone blind. As expected, but never previously demonstrated, the activity of the neurons in this area of the brain did not fall to zero but to half of the original value. “That alone was an astonishing finding, as it shows the extent to which the visual cortex also processes information from other areas of the brain,” explains Tobias Bonhoeffer, who has been researching processes in the visual cortex at his department in the Max Planck Institute of Neurobiology for many years. “However, things really became exciting when we observed the area further over the following hours and days.”

The scientists were able, under the microscope, to witness “live” how the neurons in the visual cortex became active again. After just a few hours, they could clearly observe how the points of contact between the affected cells and neighbouring cells increased in size. When synapses get bigger, they also become stronger, and signals are transmitted faster and more effectively. As a result of this intensification of the contact between the neurons, the activity of the affected network returned to its starting value after a period of between 24 and 48 hours. “To put it simply, due to the absence of visual input, the cells had less to say – but when they did say something, they said it with particular emphasis,” explains Mark Hübener.

Due to the simultaneous strengthening of all of the synapses of the affected neurons, major reductions in neuronal activity can be normalised again with surprising speed. The relatively stable activity level thereby achieved is an essential prerequisite for maintaining a healthy, adaptable brain.
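
The compensation described here, strengthening all of a neuron’s synapses by a common factor until its mean drive returns to a set point, is often called homeostatic synaptic scaling. A minimal sketch, with invented weights and firing rates:

```python
import numpy as np

def rescale_synapses(weights, inputs, target_rate):
    """Toy homeostatic scaling: multiply every synaptic weight by one
    common factor so the neuron's mean drive returns to its set point.
    Illustrative only; real synaptic scaling unfolds over hours to days."""
    weights, inputs = np.asarray(weights, float), np.asarray(inputs, float)
    current_rate = weights @ inputs       # simple linear drive model
    return weights * (target_rate / current_rate)

weights = np.array([0.5, 1.0, 0.8])
inputs  = np.array([4.0, 2.0, 3.0])       # presynaptic firing rates
target  = weights @ inputs                # healthy set-point activity

inputs_after_loss = inputs * 0.5          # visual input lost: drive halves
scaled = rescale_synapses(weights, inputs_after_loss, target)
print(scaled @ inputs_after_loss)         # drive is back near the set point
```

Because every synapse grows by the same factor, the relative pattern of connection strengths, and so the information the neuron carries, is preserved even as overall activity is restored.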

When neurons have less to say, they say it with particular emphasis

The brain is an extremely adaptable organ – but it is also very conservative according to scientists from the Max Planck Institute of Neurobiology in Martinsried in collaboration with colleagues from the Friedrich Miescher Institute in Basel and the Ruhr Institute Bochum. The researchers succeeded in demonstrating that neurons in the brain regulate their own excitability so that the activity level in the network remains as constant as possible. Even in the event of major changes, for example the complete absence of information from a sensory organ, the almost silenced neurons re-establish levels of activity similar to their previous ones after only 48 hours. The mean activity level thus achieved is a basic prerequisite for a healthy brain and the formation of new connections between neurons – an essential capacity for regeneration following injury to the brain or a sensory organ, for example.

Neurons communicate using electrical signals. They transmit these signals to neighbouring cells via special contact points known as the synapses. When a new item of information presents for processing, the cells can develop new synaptic contacts with their neighbouring cells or strengthen existing ones. To enable forgetting, these processes are also reversible. The brain is consequently in a constant state of reorganisation, through which individual neurons are prevented from becoming either too active or too inactive. The aim is to keep the level of activity constant, as the long-term overexcitement of neurons can result in damage to the brain.

Too little activity is not good either. “The cells can only re-establish connections with their neighbours when they are ‘awake’, so to speak – that is, when they display a minimum level of activity,” explains Mark Hübener, head of the recently published study. The international team of researchers demonstrated for the first time that the brain compensates for massive changes in neuronal activity within a period of two days, returning to a level of activity similar to the one before the change.

Until now, the only indications of this astonishing capacity of the brain had come from cell culture experiments. It was also unclear how neurons could control their own excitability in relation to the activity of the entire network. Now the scientists have made significant progress towards answering this question. In their study, they examined the visual cortex of mice that had recently lost their sight. As expected, but never previously demonstrated, the activity of the neurons in this area of the brain did not fall to zero but to half its original value. “That alone was an astonishing finding, as it shows the extent to which the visual cortex also processes information from other areas of the brain,” explains Tobias Bonhoeffer, who has been researching processes in the visual cortex at his department in the Max Planck Institute of Neurobiology for many years. “However, things really became exciting when we observed the area further over the following hours and days.”

The scientists were able, under the microscope, to witness “live” how the neurons in the visual cortex became active again. After just a few hours, they could clearly observe how the points of contact between the affected cells and neighbouring cells increased in size. When synapses get bigger, they also become stronger and signals are transmitted faster and more effectively. As a result of this intensification of the contact between the neurons, the activity of the affected network returned to its starting value after a period of between 24 and 48 hours. “To put it simply, due to the absence of visual input, the cells had less to say – but when they did say something, they said it with particular emphasis,” explains Mark Hübener.

Due to the simultaneous strengthening of all of the synapses of the affected neurons, major reductions in the neuronal activity can be normalised again with surprising speed. The relatively stable activity level thereby achieved is an essential prerequisite for maintaining a healthy, adaptable brain.
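The mechanism described above – multiplicatively strengthening all of a neuron's synapses until activity returns to a set point – can be illustrated with a small toy simulation. This is a minimal sketch, not the researchers' model: the linear firing-rate function, the number of synapses, the gain of 0.2 and the hourly time step are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.5, 1.5, size=20)   # 20 synapses with arbitrary strengths

def firing_rate(weights, drive):
    """Toy firing rate: proportional to input drive times total synaptic strength."""
    return drive * weights.sum()

drive = 1.0
set_point = firing_rate(weights, drive)    # the neuron's target activity level

# Sensory loss: the input drive halves, so activity drops to half the set point
# (mirroring the halved activity observed in the blind mice's visual cortex).
drive = 0.5
assert np.isclose(firing_rate(weights, drive), set_point / 2)

# Homeostatic scaling: at each step (think "one hour"), scale ALL synapses up
# by the same factor, proportional to how far activity is below the set point.
for hour in range(48):
    rate = firing_rate(weights, drive)
    weights *= 1 + 0.2 * (set_point - rate) / set_point
    if abs(firing_rate(weights, drive) - set_point) < 1e-3 * set_point:
        break

print(f"activity back near set point after {hour + 1} steps: "
      f"{firing_rate(weights, drive):.3f} vs {set_point:.3f}")
```

Because every synapse is multiplied by the same factor, their relative strengths – and thus the information stored in the weight pattern – are preserved while the overall activity level is restored, which is the key feature of this kind of homeostatic plasticity.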

Filed under homeostatic plasticity plasticity neural activity synapses neuroscience science
