Neuroscience

Articles and news from the latest research reports.

Posts tagged neural activity


How the Brain Makes Sense of Spaces, Large and Small

When an animal encounters a new environment, the neurons in its brain that are responsible for mapping out the space are ready for anything. So says a new study in which scientists at the Howard Hughes Medical Institute’s Janelia Research Campus examined neuronal activity in rats as they explored an unusually large maze for the first time.
The researchers found that neurons in the brain’s hippocampus, where information about people, places, and events is stored, each contribute to an animal’s mental map at their own rate. Some neurons begin to associate themselves with the new space immediately, while others hold back, contributing only if the space expands beyond a size that can be represented by the first-line neurons. Similar mechanisms may be at play as the human brain records a new experience, says Janelia group leader Albert Lee, who led the study. Lee, graduate student Dylan Rich, and Hua-Peng Liaw, a technician in Lee’s lab, published their findings in the August 15, 2014, issue of the journal Science.
“The hippocampus has to represent arbitrary things,” Lee says. “When a new experience begins, we don’t know how long it’s going to last, and the brain has to form a new representation on the fly. This mechanism means that the hippocampus doesn’t have to adjust its representation if an environment is larger than predicted, or if an experience goes on longer than expected.”
As an animal explores a new environment, cells in its hippocampus fire to mark new places that it encounters. The cells, called place cells, fire randomly, but become associated with the shapes, smells, and other sensory cues present in that location. In humans, analogous cells store memories of people, places, facts, and events.
In rodents, about a third of the cells in the region of the hippocampus devoted to spatial learning participate in mapping a typical laboratory-sized maze. Different mazes are represented by different but overlapping sets of neurons. The differences between those sets allow the brain to distinguish between memories of different environments.
But what happens when an animal finds itself in an environment larger than a five-meter laboratory maze? In the wild, rats can traverse territories as long as 50 meters. Lee wanted to know how the hippocampus kept track of environments that placed greater demands on its neurons.
If cells continued to mark off space at the rate that scientists had observed in more confined environments, the animal’s mental map would quickly lose its uniqueness. “If every cell is active in the representation of a single space, then you can’t use this mechanism to distinguish memories of different things,” Lee points out. 
So Lee and his team stocked up on supplies from the hardware store and built their own maze, far larger than any that had been used previously to track place cell activity. The 48-meter maze wouldn’t fit inside Lee’s lab, so Lee, Rich, and Liaw set it up in a large cage-cleaning room at Janelia.
The room was busy during the week, so the team did their experiments on weekends. For multiple weekends over the course of about two years, Janelia’s vivarium staff would clear the room for them, and then the team would reassemble the maze and set up video cameras and electrophysiology equipment. The team recorded the activity of individual cells in the hippocampus as rats explored the maze for the first time. They first introduced the animals to a small portion of the maze, then gradually increased the territory to which the rats had access, monitoring how the brain added new information to its spatial map.
When the scientists analyzed their data, they discovered that from the time the rats entered the maze, their brains were ready to represent an environment of any size. “Instead of the hippocampus having to adjust in time as the animal notices that the maze gets larger, it anticipates all different sizes of mazes from the beginning,” Lee says. “It does this by dividing up its population of neurons so that certain ones are ready to represent smaller mazes, others are ready to represent medium-size mazes, and others, large ones.”
All of the neurons acted independently, firing randomly to mark off places in the maze. But some neurons had a greater propensity to mark off space than others, Lee explains. Some neurons mark space quickly and become associated with many places in the maze, whereas others are less likely to fire. These, Lee says, are reserved for mapping larger spaces.
In small environments, a subset of the cells that are most likely to mark off space – those that have a chance to fire while the animal explores – form the map on their own. In larger mazes, all of the neurons with a high propensity to mark space are recruited to the mapping effort, meaning they cannot be used to distinguish the representation of one large maze from another. That’s when the neurons with a lower tendency to fire step in, randomly marking space in a distinct, identifying set.
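The scheme Lee describes can be sketched as a toy simulation (my own construction for illustration, not the authors' analysis): give each model neuron its own propensity to lay down firing fields, then ask which cells end up informative for distinguishing mazes of different sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: each of 1000 model neurons has its own propensity to
# lay down a firing field, drawn from a heavy-tailed (log-normal)
# distribution of fields per meter of maze.
rates = rng.lognormal(mean=-3.0, sigma=1.5, size=1000)

def p_active(maze_length_m, rates):
    """Probability each cell fires somewhere in the maze, treating
    field placement as a Poisson process along the track."""
    return 1.0 - np.exp(-rates * maze_length_m)

p_small = p_active(5.0, rates)    # typical lab-sized maze
p_large = p_active(48.0, rates)   # the 48-meter maze

# Cells near 50% are "at the edge": roughly equally likely to join a
# map or not, so their random membership is what distinguishes one
# environment from another.
edge_small = (p_small > 0.25) & (p_small < 0.75)
edge_large = (p_large > 0.25) & (p_large < 0.75)
```

In the large maze the "edge" cells are the low-propensity ones, because the high-propensity cells are active almost everywhere and carry no distinguishing information, matching the division of labor Lee describes.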
“There’s always a set of neurons that is just at the edge, where they are equally likely to represent any given environment versus not, regardless of its size,” Lee says. “Those are the neurons the brain can actually use to distinguish which environment it’s in.”
The system means the brain never has to adjust its representation of an environment as it is being created, Lee says. “All neurons are marking space at their own preferred rate, so there doesn’t have to be a mechanism to say, ‘you should fire because this maze is large or this maze is small.’ The hippocampus is ready for anything at any moment.”
Cells in the human brain may record events in a similar way, marking off time as an event unfolds without knowing how long it will continue, Lee says.

Filed under hippocampus neural activity place cells neurons memory neuroscience science


Neuroscience and big data: How to find simplicity in the brain
Scientists can now monitor and record the activity of hundreds of neurons concurrently in the brain, and ongoing technology developments promise to increase this number manyfold. However, simply recording the neural activity does not automatically lead to a clearer understanding of how the brain works.
In a new review paper published in Nature Neuroscience, Carnegie Mellon University’s Byron M. Yu and Columbia University’s John P. Cunningham describe the scientific motivations for studying the activity of many neurons together, along with a class of machine learning algorithms — dimensionality reduction — for interpreting the activity.
In recent years, dimensionality reduction has provided insight into how the brain distinguishes between different odors, makes decisions in the face of uncertainty and is able to think about moving a limb without actually moving. Yu and Cunningham contend that using dimensionality reduction as a standard analytical method will make it easier to compare activity patterns in healthy and abnormal brains, ultimately leading to improved treatments and interventions for brain injuries and disorders.
"One of the central tenets of neuroscience is that large numbers of neurons work together to give rise to brain function. However, most standard analytical methods are appropriate for analyzing only one or two neurons at a time. To understand how large numbers of neurons interact, advanced statistical methods, such as dimensionality reduction, are needed to interpret these large-scale neural recordings," said Yu, an assistant professor of electrical and computer engineering and biomedical engineering at CMU and a faculty member in the Center for the Neural Basis of Cognition (CNBC).
The idea behind dimensionality reduction is to summarize the activity of a large number of neurons using a smaller number of latent (or hidden) variables. Dimensionality reduction methods are particularly useful for uncovering the inner workings of the brain, such as when we ruminate or solve a mental math problem, where all the action is going on inside the brain and not in the outside world. These latent variables can be used to trace out the path of one’s thoughts.
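As a sketch of that idea (using simulated data, not the recordings the review discusses): generate population activity driven by a few hidden variables, then recover them with principal component analysis, the simplest dimensionality reduction method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_timepoints, n_latents = 100, 500, 3

# Simulated recording: each neuron's activity is a mixture of 3 shared
# latent variables plus private noise.
latents = rng.standard_normal((n_timepoints, n_latents))   # hidden drivers
loading = rng.standard_normal((n_latents, n_neurons))      # mixing weights
activity = latents @ loading + 0.3 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD: project the population activity onto the directions of
# greatest shared variance.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
variance_explained = (S ** 2) / np.sum(S ** 2)

# The first 3 components capture almost all of the variance, recovering
# the true latent dimensionality of the 100-neuron population.
```

The review covers a whole family of such methods (factor analysis, GPFA, and others); PCA stands in here only as the most familiar member.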
"One of the major goals of science is to explain complex phenomena in simple terms. Traditionally, neuroscientists have sought to find simplicity with individual neurons. However, it is becoming increasingly recognized that neurons show varied features in their activity patterns that are difficult to explain by examining one neuron at a time. Dimensionality reduction provides us with a way to embrace single-neuron heterogeneity and seek simple explanations in terms of how neurons interact with each other," said Cunningham, assistant professor of statistics at Columbia.
Although dimensionality reduction is relatively new to neuroscience compared to existing analytical methods, it has already shown great promise. With Big Data getting ever bigger thanks to the continued development of neural recording technologies and the federal BRAIN Initiative, the use of dimensionality reduction and related methods will likely become increasingly essential.

Filed under neurons neural activity neural recordings neuroscience science


Taking the Pulse of Aging: Researchers Map the Pulse Pressure and Elasticity of Arteries in the Brain

Researchers at the Beckman Institute at the University of Illinois at Urbana-Champaign have developed a new technique that can noninvasively image the pulse pressure and elasticity of the arteries of the brain, revealing correlations between arterial health and aging.


The brain’s network of supporting arteries, which makes up the cerebrovascular system, is crucial for healthy brain aging and for preventing diseases like Alzheimer’s and other forms of dementia.

The researchers, led by Monica Fabiani and Gabriele Gratton, psychology professors in the Cognitive Neuroscience Group, routinely record optical imaging data by shining near-infrared light into the brain to measure neural activity. Their idea to measure pulse pressure through optical imaging came from observing in previous studies that the arterial pulse produced strong signals in the optical data, which they normally do not use to study brain function. Realizing the value in this overlooked data, they launched a new study that focused on data from 53 participants aged 55-87 years. 

“When we image the brain using our optical methods, we usually remove the pulse as an artifact—we take it out in order to get to other signals from the brain,” said Fabiani. “But we are interested in aging and how the brain changes with other bodily systems, like the cardiovascular system. When thinking about this, we realized it would be useful to measure the cerebrovascular system as we worry about cognition and brain physiology.”

The initial results using this new technique show that arterial stiffness is inversely correlated with cardiorespiratory fitness: the more fit people are, the more elastic their arteries. Because arterial stiffening is a cause of reduced brain blood flow, stiff arteries can lead to a faster rate of cognitive decline and an increased chance of stroke, especially in older adults.

Using this method, the researchers were able to collect additional, region-specific data.

“In particular, noninvasive optical methods can provide estimates of arterial elasticity and brain pulse pressure in different regions of the brain, which can give us clues about how different regions of the brain contribute to our overall health,” said Gratton. “For example, if we found that a particular artery was stiff and causing decreased blood flow to and loss of brain cells in a specific area, we might find that the damage to this area is also associated with an increased likelihood of certain psychological and cognitive issues.”

The researchers are investigating ways to use this technique to measure arterial stiffness across different age groups and at different levels of cardiovascular fitness or stress. High levels of stress, especially over long periods of time, may affect arterial health, according to the researchers.

“This is just the beginning of what we’re able to explore with this technique. We’re looking at other age groups, and in the future we intend to study people with varying levels of long-term stress,” said Fabiani. “When people are stressed for long periods of time, like if they’re caring for a sick parent, stress might generate vasoconstriction and higher blood pressure, with significant consequences for arterial function in the brain. We are interested in knowing whether this may be an important factor leading to arterial stiffness.” 

The researchers are also able to gather information about pulse transit time, or how long it takes the blood to flow through the brain’s arteries, and visualize large arteries running along the brain surface.

“Our goal is to find more information about what causes arterial stiffness, and how regional arterial stiffness can lead to specific health problems. Our findings continue to bolster the idea that an important key to aging well is having good cerebrovascular health,” said Fabiani.

(Source: beckman.illinois.edu)

Filed under aging cardiorespiratory fitness cerebrovascular system neural activity neuroscience science


(Image caption: This is the happiness equation, where t is the trial number, w0 is a constant term, other weights w capture the influence of different event types, 0 ≤ γ ≤ 1 is a forgetting factor that makes events in more recent trials more influential than those in earlier trials, CRj is the CR if chosen instead of a gamble on trial j, EVj is the EV of a gamble (average reward for the gamble) if chosen on trial j, and RPEj is the RPE on trial j contingent on choice of the gamble. The RPE is equal to the reward received minus the expectation in that trial EVj. If the CR was chosen, then EVj = 0 and RPEj = 0; if the gamble was chosen, then CRj = 0. The variables in the equation are quantities that the neuromodulator dopamine has been associated with in previous neuroscience studies. Credit: Robb Rutledge, UCL)
Equation to predict happiness
The happiness of over 18,000 people worldwide has been predicted by a mathematical equation developed by researchers at UCL, with results showing that moment-to-moment happiness reflects not just how well things are going, but whether things are going better than expected.
The new equation accurately predicts exactly how happy people will say they are from moment to moment based on recent events, such as the rewards they receive and the expectations they have during a decision-making task. Scientists found that overall wealth accumulated during the experiment was not a good predictor of happiness. Instead, moment-to-moment happiness depended on the recent history of rewards and expectations. These expectations depended, for example, on whether the available options could lead to good or bad outcomes.
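Written out from the variable definitions in the image caption above, the model takes the following form (a reconstruction from those definitions, with $w_1$, $w_2$, $w_3$ standing for the weights on the three event types):

```latex
\text{Happiness}(t) \;=\; w_0
  \;+\; w_1 \sum_{j=1}^{t} \gamma^{\,t-j}\, CR_j
  \;+\; w_2 \sum_{j=1}^{t} \gamma^{\,t-j}\, EV_j
  \;+\; w_3 \sum_{j=1}^{t} \gamma^{\,t-j}\, RPE_j
```

Each sum runs over all past trials, with the forgetting factor $\gamma$ discounting older trials, so recent rewards and expectations dominate the prediction.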
The study, published in the Proceedings of the National Academy of Sciences, investigated the relationship between happiness and reward, and the neural processes that lead to feelings that are central to our conscious experience, such as happiness. Before now, it was known that life events affect an individual’s happiness but not exactly how happy people will be from moment to moment as they make decisions and receive outcomes resulting from those decisions, something the new equation can predict.
Scientists believe that quantifying subjective states mathematically could help doctors better understand mood disorders, by seeing how self-reported feelings fluctuate in response to events like small wins and losses in a smartphone game. A better understanding of how mood is determined by life events and circumstances, and how that differs in people suffering from mood disorders, will hopefully lead to more effective treatments.
Research examining how and why happiness changes from moment to moment in individuals could also assist governments who deploy population measures of wellbeing to inform policy, by providing quantitative insight into what the collected information means. This is especially relevant to the UK following the launch of the National Wellbeing Programme in 2010 and subsequent annual reports by the Office for National Statistics on ‘Measuring National Wellbeing’.
For the study, 26 subjects completed a decision-making task in which their choices led to monetary gains and losses, and they were repeatedly asked to answer the question ‘how happy are you right now?’. The participants’ neural activity was also measured during the task using functional MRI and from these data, scientists built a computational model in which self-reported happiness was related to recent rewards and expectations. The model was then tested on 18,420 participants in the game ‘What makes me happy?’ in a smartphone app developed at UCL called 'The Great Brain Experiment'. Scientists were surprised to find that the same equation could be used to predict how happy subjects would be while they played the smartphone game, even though subjects could win only points and not money.
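A minimal implementation of that kind of model can be sketched as follows; the weights, forgetting factor, and three-trial history below are made up for illustration, not the values fitted in the study.

```python
import numpy as np

def predicted_happiness(w0, w_cr, w_ev, w_rpe, gamma, CR, EV, RPE):
    """Happiness after trial t = a constant plus weighted, exponentially
    forgotten sums of past certain rewards (CR), gamble expected values
    (EV), and reward prediction errors (RPE)."""
    t = len(CR)
    decay = gamma ** np.arange(t - 1, -1, -1)  # older trials weigh less
    return (w0
            + w_cr * np.sum(decay * CR)
            + w_ev * np.sum(decay * EV)
            + w_rpe * np.sum(decay * RPE))

# Hypothetical three-trial history: a sure 20p was taken (CR); then a
# gamble with EV 50p paid 100p (RPE = +50); then a gamble with EV 30p
# paid nothing (RPE = -30). Per the caption, CR is zero on gamble
# trials and EV/RPE are zero on sure-reward trials.
CR  = np.array([20.0, 0.0, 0.0])
EV  = np.array([0.0, 50.0, 30.0])
RPE = np.array([0.0, 50.0, -30.0])

h = predicted_happiness(w0=0.5, w_cr=0.01, w_ev=0.01, w_rpe=0.01,
                        gamma=0.8, CR=CR, EV=EV, RPE=RPE)
```

With these toy numbers the disappointing final gamble (negative RPE) pulls the prediction down relative to a win, which is exactly the "better than expected" effect the study reports.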
Lead author of the study, Dr Robb Rutledge (UCL Wellcome Trust Centre for Neuroimaging and the new Max Planck UCL Centre for Computational Psychiatry and Ageing), said: “We expected to see that recent rewards would affect moment-to-moment happiness but were surprised to find just how important expectations are in determining happiness. In real-world situations, the rewards associated with life decisions such as starting a new job or getting married are often not realised for a long time, and our results suggest expectations related to these decisions, good and bad, have a big effect on happiness.
"Life is full of expectations - it would be difficult to make good decisions without knowing, for example, which restaurant you like better. It is often said that you will be happier if your expectations are lower. We find that there is some truth to this: lower expectations make it more likely that an outcome will exceed those expectations and have a positive impact on happiness. However, expectations also affect happiness even before we learn the outcome of a decision. If you have plans to meet a friend at your favourite restaurant, those positive expectations may increase your happiness as soon as you make the plan. The new equation captures these different effects of expectations and allows happiness to be predicted based on the combined effects of many past events.
"It’s great that the data from the large and varied population using The Great Brain Experiment smartphone app shows that the same happiness equation applies to thousands people worldwide playing our game, as with our much smaller laboratory-based experiments which demonstrate the tremendous value of this approach for studying human well-being on a large scale."
The team used functional MRI to demonstrate that neural signals during decisions and outcomes in the task in an area of the brain called the striatum can be used to predict changes in moment-to-moment happiness. The striatum has a lot of connections with dopamine neurons, and signals in this brain area are thought to depend at least partially on dopamine. These results raise the possibility that dopamine may play a role in determining happiness.


Filed under happiness reward decision making neural activity neuroimaging striatum dopamine mathematical equation neuroscience science

103 notes

(Image caption: A schematic of the interactions that occur between the saccade and reach brain systems when deciding where to look and reach. Credit: Bijan Pesaran, New York University)
Complexity of eye-hand coordination
People not only use their eyes to see, but also to move. It takes only a fraction of a second to execute the loop that travels from the brain to the eyes, and then to the hands and arms. Bijan Pesaran is trying to figure out what occurs in the brain during this process.
"Eye-hand coordination is the result of a complex interplay between two systems of the brain, but there are many regions where this interaction takes place," says Pesaran, an associate professor of neural science at New York University. "One of the things about the current state of knowledge is that it is focused on the different pieces of the brain and how each works individually. Relatively little work has been done to link how they work together at the cellular level."
The thrust of his research involves studying how neurons in these parts of the brain communicate with one another.
"The cerebral cortex contains a mosaic of brain areas that are connected to form distributed networks," says the National Science Foundation (NSF)-funded scientist. "In the frontal and parietal cortex, these networks are specialized for movements such as saccadic (voluntary) eye movements and reaches, that is, hand and arm movements. Before each movement we decide to make, these areas contain specific patterns of neural activity which can be used to predict what we will do."
A more sophisticated understanding of the brain’s role in eye-hand coordination can be an important model for discovering how brain systems interact to carry out cognitive processes in general, he says. Such insights could lead to new neural technologies that translate thoughts into actions, for example, to control a robotic arm or prompt speech.
"There is a whole new set of technologies called neural prostheses," Pesaran says. "In the future, there could be devices in the brain that will help people remember, to think more clearly, and to help them move."
Using eye movements to prompt hand and arm movements involves building a spatial representation, “which is improved by moving our eyes,” he says. “The command that is sent to the eyes moves the eyes, which effectively measure space when they move, and that is used to improve the accuracy of the reach. We move our eyes to improve our movement, not just to see better.”
He often describes the behavior of high level ping pong players to explain how it works.
"You keep your eye on the ball so you know where it is, so you can hit it," he says. "But right up until the minute you hit the ball, something important is happening, which is that your brain is sending a command to your arm to hit the ball. But the visual signals are delayed. At the time you hit the ball, the vision of the ball won’t enter your brain for another fraction of a second, so there is no point in looking at the ball. You can look all you want, but your arm already has moved.
"When ping pong players are playing at a high level, they look at the ball up to the point where they hit it. As soon as the paddle makes contact with the ball, you can see their eyes and head turn to now look at their opponent. They think they are looking at their opponent when they are hitting the ball, but they are looking at ball. Their eyes are tracking the ball, even though they are aware of their opponent.
"This helps the brain keep a very high resolution of space to make the stroke more accurate," he continues. "It’s not about seeing the ball, because by then it’s too late. It’s about moving the eyes with the ball so that the stroke is more accurate. And the brain orchestrates this complicated pattern of behavior."
Visual signals are always delayed. They enter the brain, are converted into a movement command, and then leave the brain for the arm muscles. “It’s a loop that takes about 200 milliseconds—about one-fifth of a second—and in that time the ball has moved,” he says.
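The delay arithmetic is easy to check. A back-of-the-envelope sketch (the ball speed here is an assumed, illustrative figure, not from the article):

```python
# During the ~200 ms sensorimotor loop, how far does the ball travel?
LOOP_DELAY_S = 0.2          # about one-fifth of a second, as quoted above
ball_speed_m_per_s = 10.0   # assumed speed of a fast table-tennis shot
distance_m = ball_speed_m_per_s * LOOP_DELAY_S
print(distance_m)  # prints 2.0: the ball covers about two metres before
                   # new visual input can influence the stroke
```

Two metres is most of a table, which is why the stroke must be driven by prediction rather than by the latest view of the ball.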
Pesaran is conducting his research under an NSF Faculty Early Career Development (CAREER) award, which he received in 2010. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization.
To prove his hypothesis that two regions in the brain (the parietal reach region and the parietal eye field, both in the parietal cortex) must talk to each other to prompt movement, Pesaran and his team are recording the activity of neurons, the brain cells that communicate by sending each other electrical signals called “spikes.” They do so by placing micro-electrodes into the brains of animals that look and reach, much like humans, and studying the correlations and patterns in those signals.
"We think we can measure these signals when they are leaving one area, and coming into another," he says. "How does this show that this reflects communication between those two areas? Because something happens, something changes. We set up these movements in a particular way that requires communication between the eye and the arm centers, and we then made measurements in the brain from those centers. Then we linked the changes in the activity between the two areas to the changes in how the eyes and arm move."
As part of the grant’s educational component, Pesaran is trying to show youngsters how far neuroscience has come, and encourage them to learn about it. He and his colleagues are working with middle school children in Brooklyn, and have presented demonstrations at the American Museum of Natural History about the field of brain science.
"We go into schools and teach children about what we know about the brain," he says. "We had a brain computer interface, where they had the chance to control the cursor on the screen with their minds. We placed an EEG sensor on their heads, which measures brain activity. When they concentrate, it changes the position of the ball, and moves it up or down."
School children typically are unaware of neuroscience as an emerging field “that involves medicine, biology, engineering, a whole range of disciplines that come together,” he says. “Increasing their sophistication and tools in this discipline early will be a hallmark of the next generation of brain scientists.”


Filed under eye-hand coordination eye movements parietal cortex prosthetics neural activity psychology neuroscience science

77 notes

Researchers identify brain mechanism for motion detection in fruit flies

A team of scientists has identified the neurons used in certain types of motion detection—findings that deepen our understanding of how the visual system functions.


“Our results show how neurons in the brain work together as part of an intricate process used to detect motion,” says Claude Desplan, a professor in NYU’s Department of Biology and the study’s senior author.

The study, whose authors included Rudy Behnia, an NYU post-doctoral fellow, as well as researchers from the NYU Center for Neural Science and Yale and Stanford universities, appears in the journal Nature.

The researchers sought to explain some of the neurological underpinnings of a long-established and influential model, the Hassenstein–Reichardt correlator. It posits that motion is detected by comparing signals from separate input channels after delaying one relative to the other, with the brain coordinating these distinct inputs. The Nature study focused on the neurons that carry out this processing.

The researchers examined the fruit fly Drosophila, which is commonly used in biological research as a model system to decipher basic principles that direct the functions of the brain.

Previously, scientists studying Drosophila had identified two parallel pathways that respond to either moving light edges or moving dark edges—a distinction that underlies much of what flies see when detecting motion. For instance, when a bird first moves across the bright sky, flies see its dark leading edge; after it passes through their field of view, they see the light edge of the background sky.

However, the nature of the underlying neurological processing had not been clear.

In their study, the researchers analyzed the activity of particular neurons used to detect these movements. Specifically, they found that four neurons in the brain’s medulla implement two processing steps. Two neurons—Tm1 and Tm2—respond to brightness decrements (central to the detection of moving dark edges); by contrast, two other neurons—Mi1 and Tm3—respond to brightness increments (or light edges). Moreover, Tm1 responds more slowly than Tm2, and Mi1 responds more slowly than Tm3, a difference in kinetics that is fundamental to the Hassenstein–Reichardt correlator.

In sum, these neurons process the two inputs that precede the coordination outlined by the Hassenstein–Reichardt correlator, thereby revealing elements of the long-sought neural activity of motion detection in the fly.
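The correlator the study builds on can be illustrated in a few lines. This is a minimal, discrete-time sketch of the generic Hassenstein–Reichardt scheme (the one-sample delay and the toy stimulus are simplifying assumptions, not the paper’s measured kinetics):

```python
def hassenstein_reichardt(left, right, delay=1):
    """Two input channels (adjacent photoreceptors). Each half-detector
    multiplies one channel by a delayed copy of the other; subtracting the
    mirror-symmetric halves gives a signed motion signal: positive for
    left-to-right motion, negative for right-to-left."""
    return [left[t - delay] * right[t] - left[t] * right[t - delay]
            for t in range(delay, len(left))]

# A bright edge passes the left input first, then the right one step later
print(hassenstein_reichardt([0, 1, 0, 0], [0, 0, 1, 0]))  # prints [0, 1, 0]
# The same edge moving the other way flips the sign
print(hassenstein_reichardt([0, 0, 1, 0], [0, 1, 0, 0]))  # prints [0, -1, 0]
```

In the fly, the findings suggest that the slower responses of Tm1 and Mi1 relative to Tm2 and Tm3 could play the role of the explicit delay in this sketch.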

(Source: nyu.edu)

Filed under fruit flies motion detection neural activity neurons neuroscience science

402 notes

How the brain stabilizes its connections in order to learn better
Throughout our lives, our brains adapt to what we learn and memorise. The brain is made up of complex networks of neurons and synapses that are constantly re-configured. However, in order for learning to leave a trace, connections must be stabilized. A team at the University of Geneva (UNIGE) discovered a new cellular mechanism involved in the long-term stabilization of neuronal connections, in which non-neuronal cells called astrocytes play a previously unidentified role. These results, published in Current Biology, will lead to a better understanding of neurodegenerative and neurodevelopmental diseases.
Excitatory synapses in the central nervous system – the points of contact between neurons that allow them to transmit signals – are highly dynamic structures that continuously form and dissolve. They are surrounded by non-neuronal cells, or glial cells, which include the distinctively star-shaped astrocytes. These cells form complex structures around synapses and play a previously little-understood role in the transmission of cerebral information.
Plasticity and Stability
By increasing neuronal activity through whisker stimulation in adult mice, the scientists observed, in both the somatosensory cortex and the hippocampus, that this increased neuronal activity provokes an increase in astrocyte movement around synapses. The synapses surrounded by astrocytes re-organise their architecture, which protects them and increases their longevity. The team of researchers led by Dominique Muller, Professor in the Department of Fundamental Neuroscience of the Faculty of Medicine at UNIGE, developed new techniques that allowed them to specifically “control” the different synaptic structures, and to show that the phenomenon took place exclusively in the connections between neurons involved in learning. “In summary, the more the astrocytes surround the synapses, the longer the synapses last, thus allowing learning to leave a mark on memory,” explained Yann Bernardinelli, the lead author of the study.
This study identifies a new, two-way interaction between neurons and astrocytes, in which the learning process regulates the structural plasticity of astrocytes, which in turn determine the fate of the synapses. This mechanism indicates that astrocytes play an important role in the processes of learning and memory, processes that are abnormal in various neurodegenerative and neurodevelopmental diseases, including Alzheimer’s disease, autism, and Fragile X syndrome.
This discovery highlights the previously underestimated importance of cells that, despite being non-neuronal, participate in a crucial way in the cerebral mechanisms that allow us to learn and retain memories of what we have learned.


Filed under astrocytes neurons neural activity learning synapses hippocampus plasticity neuroscience science

379 notes

Study cracks how the brain processes emotions
Although feelings are personal and subjective, the human brain turns them into a standard code that objectively represents emotions across different senses, situations and even people, reports a new study by Cornell University neuroscientist Adam Anderson.
“We discovered that fine-grained patterns of neural activity within the orbitofrontal cortex, an area of the brain associated with emotional processing, act as a neural code which captures an individual’s subjective feeling,” says Anderson, associate professor of human development in Cornell’s College of Human Ecology and senior author of the study, “Population coding of affect across stimuli, modalities and individuals,” published online in Nature Neuroscience.
Their findings provide insight into how the brain represents our innermost feelings – what Anderson calls the last frontier of neuroscience – and upend the long-held view that emotion is represented in the brain simply by activation in specialized regions for positive or negative feelings, he says.
“If you and I derive similar pleasure from sipping a fine wine or watching the sun set, our results suggest it is because we share similar fine-grained patterns of activity in the orbitofrontal cortex,” Anderson says.
“It appears that the human brain generates a special code for the entire valence spectrum of pleasant-to-unpleasant, good-to-bad feelings, which can be read like a ‘neural valence meter’ in which the leaning of a population of neurons in one direction equals positive feeling and the leaning in the other direction equals negative feeling,” Anderson explains.
For the study, the researchers presented participants with a series of pictures and tastes during functional neuroimaging, then analyzed participants’ ratings of their subjective experiences along with their brain activation patterns.
Anderson’s team found that valence was represented as sensory-specific patterns or codes in areas of the brain associated with vision and taste, as well as sensory-independent codes in the orbitofrontal cortices (OFC), suggesting, the authors say, that representation of our internal subjective experience is not confined to specialized emotional centers, but may be central to perception of sensory experience.
They also discovered that similar subjective feelings – whether evoked from the eye or tongue – resulted in a similar pattern of activity in the OFC, suggesting the brain contains an emotion code common across distinct experiences of pleasure (or displeasure), they say. Furthermore, these OFC activity patterns of positive and negative experiences were partly shared across people.
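The cross-modal comparison described above boils down to measuring how similar two activity patterns are. A sketch of the idea (the “voxel” vectors here are invented toy numbers, and Pearson correlation stands in for whatever pattern-similarity measure the authors used):

```python
def pattern_similarity(p, q):
    """Pearson correlation between two activity patterns
    (e.g., voxel responses to two different stimuli)."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sd_p = sum((a - mp) ** 2 for a in p) ** 0.5
    sd_q = sum((b - mq) ** 2 for b in q) ** 0.5
    return cov / (sd_p * sd_q)

# Toy OFC patterns: pleasure from a picture and pleasure from a taste share
# a similar pattern; an unpleasant taste does not
pleasant_picture = [0.8, 0.1, 0.6, 0.2]
pleasant_taste   = [0.7, 0.2, 0.5, 0.1]
unpleasant_taste = [0.1, 0.7, 0.2, 0.8]
print(pattern_similarity(pleasant_picture, pleasant_taste) > 0.9)    # prints True
print(pattern_similarity(pleasant_picture, unpleasant_taste) < 0.0)  # prints True
```

High similarity between patterns evoked through the eye and through the tongue, for matched pleasantness, is the kind of evidence for a shared valence code the study reports.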
“Despite how personal our feelings feel, the evidence suggests our brains use a standard code to speak the same emotional language,” Anderson concludes.


Filed under emotions orbitofrontal cortex neural activity feelings neuroscience science

228 notes

Controlling movement with light
For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.  
Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

Controlling movement with light

For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.  

Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

In this study, Bizzi and Vittorio Caggiano, a postdoc at MIT’s McGovern Institute for Brain Research, used optogenetics to explore the function of inhibitory interneurons, which form circuits with many other neurons in the spinal cord. These circuits execute commands from the brain, with additional input from sensory information from the limbs.

Previously, neuroscientists used electrical stimulation or pharmacological interventions to control neurons’ activity and tease out their function. Those approaches have revealed a great deal about spinal control, but they do not offer precise enough control to study specific subsets of neurons.

Optogenetics, on the other hand, allows scientists to control specific types of neurons by genetically programming them to express light-sensitive proteins. These proteins, called opsins, act as ion channels or pumps that regulate neurons’ electrical activity. Some opsins suppress activity when light shines on them, while others stimulate it.
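As a loose illustration of that idea (not code or data from the study), an opsin can be thought of as a light-gated switch on a neuron's firing rate. The function name, category labels, and the ±20 Hz shifts below are all invented for the sketch:

```python
def firing_rate(baseline_hz, opsin, light_on):
    """Toy model: a neuron's firing rate given its opsin type and light state.

    The +/- 20 Hz shifts are illustrative numbers, not measurements.
    """
    if not light_on or opsin is None:
        return baseline_hz
    if opsin == "excitatory":    # e.g. channelrhodopsin-2, a cation channel
        return baseline_hz + 20.0
    if opsin == "inhibitory":    # e.g. halorhodopsin, a chloride pump
        return max(0.0, baseline_hz - 20.0)
    return baseline_hz           # unknown opsin: leave activity unchanged

print(firing_rate(5.0, "excitatory", light_on=True))   # light drives firing up
print(firing_rate(5.0, "inhibitory", light_on=True))   # light silences the cell
```

The point of the caricature is only that the same light stimulus has opposite effects depending on which opsin a cell expresses, which is what lets researchers target one genetically defined population at a time.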

“With optogenetics, you are attacking a system of cells that have certain characteristics similar to each other. It’s a big shift in terms of our ability to understand how the system works,” says Bizzi, who is a member of MIT’s McGovern Institute.

Muscle control

Inhibitory neurons in the spinal cord suppress muscle contractions, which is critical for maintaining balance and for coordinating movement. For example, when you raise an apple to your mouth, the biceps contract while the triceps relax. Inhibitory neurons are also thought to be involved in the state of muscle inhibition that occurs during the rapid eye movement (REM) stage of sleep.

To study the function of inhibitory neurons in more detail, the researchers used mice developed by Guoping Feng, the Poitras Professor of Neuroscience at MIT, in which all inhibitory spinal neurons were engineered to express an opsin called channelrhodopsin-2, which stimulates neural activity when exposed to blue light. The researchers then shone light at different points along the spine to observe the effects of neuron activation.

When inhibitory neurons in a small section of the thoracic spine were activated in freely moving mice, all hind-leg movement ceased. This suggests that inhibitory neurons in the thoracic spine relay the inhibition all the way to the end of the spine, Caggiano says. The researchers also found that activating inhibitory neurons had no effect on the transmission of sensory information from the limbs to the brain, or on normal reflexes.
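That dissociation can be caricatured as a gate on the descending motor pathway that leaves the ascending sensory pathway alone. The function and signal names below are illustrative, not from the paper:

```python
def spinal_relay(brain_motor_cmd, limb_sensory, light_on_thoracic):
    """Toy relay: light on thoracic inhibitory interneurons silences
    hind-limb motor output while sensory traffic passes through unchanged."""
    motor_out = 0.0 if light_on_thoracic else brain_motor_cmd
    sensory_out = limb_sensory   # unaffected by the manipulation
    return motor_out, sensory_out

print(spinal_relay(1.0, 0.7, light_on_thoracic=True))   # motor gated off
print(spinal_relay(1.0, 0.7, light_on_thoracic=False))  # motor passes through
```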

“The spinal location where we found this complete suppression was completely new,” Caggiano says. “It has not been shown by any other scientists that there is this front-to-back suppression that affects only motor behavior without affecting sensory behavior.”

“It’s a compelling use of optogenetics that raises a lot of very interesting questions,” says Simon Giszter, a professor of neurobiology and anatomy at Drexel University who was not part of the research team. Among those questions is whether this mechanism behaves as a global “kill switch,” or if the inhibitory neurons form modules that allow for more selective suppression of movement patterns.

Now that they have demonstrated the usefulness of optogenetics for this type of study, the MIT team hopes to explore the roles of other types of spinal cord neurons. They also plan to investigate how input from the brain influences these spinal circuits.

“There’s huge interest in trying to extend these studies and dissect these circuits because we tackled only the inhibitory system in a very global way,” Caggiano says. “Further studies will highlight the contribution of single populations of neurons in the spinal cord for the control of limbs and control of movement.”

Filed under optogenetics muscle movement spinal cord neural activity neurons neuroscience science

484 notes

Finding thoughts in speech
For the first time, neuroscientists have shown how different thoughts are reflected in neuronal activity during natural conversations. Johanna Derix, Olga Iljina, and the interdisciplinary team of Dr. Tonio Ball from the Cluster of Excellence BrainLinks-BrainTools at the University of Freiburg and the Epilepsy Center of the University Medical Center Freiburg (Freiburg, Germany) report on the link between speech, thoughts, and brain responses in a special issue of Frontiers in Human Neuroscience.
"Thoughts are difficult to investigate, as one cannot observe in a direct manner what the person is thinking about. Language, however, reflects the underlying mental processes, so we can perform linguistic analyses of the subjects’ speech and use such information as a "bridge" between the neuronal processes and the subject’s thoughts," explains neuroscientist Johanna Derix.
The novelty of the authors’ approach is that the participants were not instructed to think and talk about a given topic in an experimental setting. Instead, the researchers analysed everyday conversations and the underlying brain activity, which was recorded directly from the cortical surface. This study was possible owing to the help of epilepsy patients in whom recordings of neural activity had to be obtained over several days for the purpose of pre-neurosurgical diagnostics.
First, the borders between individual thoughts in continuous conversation had to be identified. Earlier psycholinguistic research indicates that a simple sentence is a suitable unit for containing a single thought, so the researchers opted for linguistic segmentation into simple sentences. The resulting “idea” units were classified into different categories, including, for example, whether or not a sentence expressed memory- or self-related content. The researchers then analysed content-specific neural responses and observed clearly distinguishable patterns of brain activity.
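The text-side half of that pipeline — segment speech into simple-sentence units, then tag each unit by content — can be sketched roughly as follows. The keyword rules and category names are placeholders, not the study's actual annotation scheme:

```python
import re

# Placeholder cue words standing in for a real annotation scheme.
MEMORY_CUES = {"remember", "remembered", "yesterday", "childhood"}
SELF_CUES = {"i", "my", "me", "myself"}

def idea_units(transcript):
    """Naive segmentation: treat each sentence-final . ! ? as a unit boundary."""
    return [s.strip() for s in re.split(r"[.!?]+", transcript) if s.strip()]

def classify(unit):
    """Tag one idea unit with content categories via simple keyword matching."""
    words = set(re.findall(r"[a-z']+", unit.lower()))
    tags = set()
    if words & MEMORY_CUES:
        tags.add("memory-related")
    if words & SELF_CUES:
        tags.add("self-related")
    return tags

for unit in idea_units("I remember the old house. The weather is nice."):
    print(unit, classify(unit))
```

In the study itself, units like these would then be aligned with the simultaneously recorded cortical activity to look for content-specific neural responses.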
Thus, the neuroscientists from Freiburg have demonstrated the feasibility of their approach: using speech to investigate how the human brain processes thoughts under real-life conditions.


Filed under speech production neural activity thinking prefrontal cortex communication autobiographical memory neuroscience science
