Neuroscience

Articles and news from the latest research reports.

Schizophrenia linked to abnormal brain waves
Neuroscientists discover neurological hyperactivity that produces disordered thinking
Schizophrenia patients usually suffer from a breakdown of organized thought, often accompanied by delusions or hallucinations. For the first time, MIT neuroscientists have observed the neural activity that appears to produce this disordered thinking.
The researchers found that mice lacking the brain protein calcineurin have hyperactive brain-wave oscillations in the hippocampus while resting, and are unable to mentally replay a route they have just run, as normal mice do.
Mutations in the gene for calcineurin have previously been found in some schizophrenia patients. Ten years ago, MIT researchers led by Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, created mice lacking the gene for calcineurin in the forebrain; these mice displayed several behavioral symptoms of schizophrenia, including impaired short-term memory, attention deficits, and abnormal social behavior.
In the new study, which appears in the Oct. 16 issue of the journal Neuron, Tonegawa and colleagues at the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory recorded the electrical activity of individual neurons in the hippocampus of these knockout mice as they ran along a track.
Previous studies have shown that in normal mice, “place cells” in the hippocampus, which are linked to specific locations along the track, fire in sequence when the mice take breaks from running the course. This mental replay also occurs when the mice are sleeping. These replays occur in association with very high frequency brain-wave oscillations known as ripple events.
In mice lacking calcineurin, the researchers found that brain activity was normal as the mice ran the course, but when they paused, their ripple events were much stronger and more frequent. Furthermore, the firing of the place cells was abnormally augmented and in no particular order, indicating that the mice were not replaying the route they had just run.
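Ripple events like the ones measured here are conventionally detected by band-pass filtering the local field potential around the ripple band and thresholding the signal envelope. A minimal sketch of that standard approach follows; it is not the authors' actual analysis pipeline, and the band limits, brick-wall filter, and threshold are all illustrative assumptions:

```python
import numpy as np

def ripple_envelope(lfp, fs, band=(150.0, 250.0)):
    """Band-limited analytic-signal envelope via an FFT 'brick-wall'
    band-pass: keep only the positive frequencies inside the ripple
    band (doubled), invert, and take the magnitude."""
    spectrum = np.fft.fft(lfp)
    freqs = np.fft.fftfreq(lfp.size, 1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])  # positive band only
    analytic = np.fft.ifft(np.where(keep, 2.0 * spectrum, 0.0))
    return np.abs(analytic)

def detect_ripples(lfp, fs, band=(150.0, 250.0), n_sd=3.0):
    """Flag candidate ripple samples: envelope more than n_sd standard
    deviations above its mean.  Band and threshold are illustrative,
    not the study's parameters."""
    env = ripple_envelope(lfp, fs, band)
    return env > env.mean() + n_sd * env.std()

# Synthetic check: low-amplitude noise with a 200 Hz burst at 0.48-0.52 s
rng = np.random.default_rng(0)
fs = 1250.0
t = np.arange(0, 1.0, 1.0 / fs)
lfp = 0.1 * rng.standard_normal(t.size)
burst = (t > 0.48) & (t < 0.52)
lfp[burst] += np.sin(2 * np.pi * 200.0 * t[burst])
mask = detect_ripples(lfp, fs)
```

The "stronger and more frequent" ripples reported in the knockout mice would show up in such an analysis as more supra-threshold events with larger envelope peaks.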
This pattern helps to explain some of the symptoms seen in schizophrenia, the researchers say.
“We think that in this mouse model, we may have some kind of indication that there’s a disorganized thinking process going on,” says Junghyup Suh, a research scientist at the Picower Institute and one of the paper’s lead authors. “During ripple events in normal mice we know there is a sequential replay event. This mutant mouse doesn’t seem to have that kind of replay of a previous experience.”
The paper’s other lead author is David Foster, a former MIT postdoc. Other authors are Heydar Davoudi and Matthew Wilson, the Sherman Fairchild Professor of Neuroscience at MIT and a member of the Picower Institute.
The researchers speculate that in normal mice, the role of calcineurin is to suppress the connections between neurons, known as synapses, in the hippocampus. In mice without calcineurin, a phenomenon known as long-term potentiation (LTP) becomes more prevalent, making synapses stronger. Also, the opposite effect, known as long-term depression (LTD), is suppressed.
“It looks like this abnormally high LTP has an impact on activity of these cells specifically during resting periods, or post exploration periods. That’s a very interesting specificity,” Tonegawa says. “We don’t know why it’s so specific.”
The researchers believe the abnormal hyperactivity they found in the hippocampus may represent a disruption of the brain’s “default mode network” — a communication network that connects the hippocampus, prefrontal cortex (where most thought and planning occurs), and other parts of the cortex.
This network is more active when a person (or mouse) is resting between goal-oriented tasks. When the brain is focusing on a specific goal or activity, the default mode network gets turned down. In patients with schizophrenia, however, this network is hyperactive before and during tasks that require the brain to focus, and the patients do not perform well on those tasks.
Further studies of these mice could help reveal more about the role of the default mode network in schizophrenia, Tonegawa says.

A Blueprint for Restoring Touch with a Prosthetic Hand
New research at the University of Chicago is laying the groundwork for touch-sensitive prosthetic limbs that one day could convey real-time sensory information to amputees via a direct interface with the brain.
The research, published early online in the Proceedings of the National Academy of Sciences, marks an important step toward new technology that, if implemented successfully, would increase the dexterity and clinical viability of robotic prosthetic limbs.
“To restore sensory motor function of an arm, you not only have to replace the motor signals that the brain sends to the arm to move it around, but you also have to replace the sensory signals that the arm sends back to the brain,” said the study’s senior author, Sliman Bensmaia, PhD, assistant professor in the Department of Organismal Biology and Anatomy at the University of Chicago. “We think the key is to invoke what we know about how the brain of the intact organism processes sensory information, and then try to reproduce these patterns of neural activity through stimulation of the brain.”
Bensmaia’s research is part of Revolutionizing Prosthetics, a multi-year Defense Advanced Research Projects Agency (DARPA) project that seeks to create a modular, artificial upper limb that will restore natural motor control and sensation in amputees. Managed by the Johns Hopkins University Applied Physics Laboratory, the project has brought together an interdisciplinary team of experts from academic institutions, government agencies and private companies.
Bensmaia and his colleagues at the University of Chicago are working specifically on the sensory aspects of these limbs. In a series of experiments with monkeys, whose sensory systems closely resemble those of humans, they identified patterns of neural activity that occur during natural object manipulation and then successfully induced these patterns through artificial means.
The first set of experiments focused on contact location, or sensing where the skin has been touched. The animals were trained to identify several patterns of physical contact with their fingers. Researchers then connected electrodes to areas of the brain corresponding to each finger and replaced physical touches with electrical stimuli delivered to the appropriate areas of the brain. The result: The animals responded the same way to artificial stimulation as they did to physical contact.
Next the researchers focused on the sensation of pressure. In this case, they developed an algorithm to generate the appropriate amount of electrical current to elicit a sensation of pressure. Again, the animals’ response was the same whether the stimuli were felt through their fingers or through artificial means.
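The article does not spell out the pressure algorithm, but the general idea — a monotonic transfer function from measured contact force to stimulation amplitude, clamped to a safe range — can be sketched as follows. The force range, current range, and linear form here are all hypothetical; the study's actual encoder is not described in this article:

```python
def force_to_current(force_n, f_min=0.05, f_max=2.0,
                     i_min=20.0, i_max=100.0):
    """Map a fingertip contact force (newtons) to a stimulation current
    amplitude (microamps).  Hypothetical linear transfer function with
    clamping to a fixed safe range."""
    f = min(max(force_n, f_min), f_max)   # clamp into the sensed range
    frac = (f - f_min) / (f_max - f_min)  # 0 at threshold, 1 at maximum
    return i_min + frac * (i_max - i_min)
```

A real encoder would be calibrated per electrode against the animal's behavioral reports; the point is only that pressure is conveyed by modulating the amount of current delivered.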
Finally, Bensmaia and his colleagues studied the sensation of contact events. When the hand first touches or releases an object, it produces a burst of activity in the brain. Again, the researchers established that these bursts of brain activity can be mimicked through electrical stimulation.
The result of these experiments is a set of instructions that can be incorporated into a robotic prosthetic arm to provide sensory feedback to the brain through a neural interface. Bensmaia believes such feedback will bring these devices closer to being tested in human clinical trials.
“The algorithms to decipher motor signals have come quite a long way, where you can now control arms with seven degrees of freedom. It’s very sophisticated. But I think there’s a strong argument to be made that they will not be clinically viable until the sensory feedback is incorporated,” Bensmaia said. “When it is, the functionality of these limbs will increase substantially.”

Enigmatic Neurons Help Flies Get Oriented
Neurons deep in the fly’s brain tune in to some of the same basic visual features that neurons in bigger animals such as humans pick out in their surroundings. The new research is an important milestone toward understanding how the fly brain extracts relevant information about a visual scene to guide behavior.
As a tiny fruit fly navigates through its environment, it relies on visual landmarks to orient itself. Now, researchers at the Howard Hughes Medical Institute’s Janelia Farm Research Campus have identified neurons deep in the fly’s brain that tune in to some of the same basic visual features that neurons in bigger animals such as humans pick out in their surroundings. The new research is an important milestone toward understanding how the fly brain extracts relevant information about a visual scene to guide behavior.
In Vivek Jayaraman’s lab at Janelia, researchers are studying fly neural circuits with the goal of understanding fundamental principles of information processing. “Our hope is that over time we will get a clear picture of the neural transformations and algorithms involved in creating actions from sensory and motor information,” Vivek says. In a study published October 9, 2013, in the journal Nature, Vivek and postdoctoral researcher Johannes Seelig report on visual representations in a region of the fly brain thought to be important for visual learning.
Researchers have gathered compelling evidence that fruit flies recognize and remember visual features in their environment. Flies can use that information to seek out safe spaces or to avoid uncomfortable ones. Genetic studies have indicated that a region deep in the fly brain called the central complex is critical for these behaviors.
The central complex is found in the brains of insects and some crustaceans. “It’s not purely involved in visual learning, and is quite likely to be broadly important for sensory-motor integration in all these critters,” Vivek says, noting that in butterflies and locusts, the central complex may facilitate the use of polarized light for navigation during migration. Also, studies in cockroaches have found that it is important for turning in response to antennal touch. But in flies, no one had yet examined the activity of the neurons in the central complex to characterize their role. “It really was quite a mystery what was going on in this part of the fly brain,” Seelig says, adding that this study is only one step on a long road.
Technical limitations had prevented researchers from measuring neuronal activity in the fly’s central complex, where neurons are far smaller than they are in larger insects. Available techniques required flies to be immobilized, so scientists were limited to studying parts of the nervous system that detected sensory information, rather than those that processed that information or converted it into motor activity. But in 2010, Seelig and colleagues in Vivek’s lab at Janelia developed a method that enabled them to peer into the interior of a fly’s brain with a two-photon microscope, while the insect maintained the freedom to walk and move its wings. The microscope can detect genetically encoded proteins that light up when a nerve cell fires, due to the surge of calcium ions that accompanies a nerve impulse. “Once we had these tools, we really wanted to apply them to this central brain area,” Seelig says.
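The "lighting up" the microscope detects is typically quantified as ΔF/F: the fluorescence change relative to a baseline level F0. A generic sketch of that computation is below; the percentile baseline is one common convention, not necessarily the one used in this study:

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=10.0):
    """Convert a raw fluorescence trace to dF/F, the usual readout of a
    genetically encoded calcium indicator.  F0 is estimated as a low
    percentile of the trace; labs differ on this choice."""
    trace = np.asarray(trace, dtype=float)
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

# A flat baseline of 10 with one transient to 20 yields a 100% dF/F peak
dff = delta_f_over_f([10.0, 10.0, 10.0, 20.0, 10.0])
```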
Using genetically modified strains of flies, Vivek and Seelig focused their experiments on specific classes of neurons and collected more comprehensive data about the activity of those populations than had been done in other species. They zeroed in on a class of neurons known as ring neurons, whose dendrites — the branching structures that connect to neighboring cells — are densely concentrated in specific spots within a region neighboring the central complex.
To test the ring neurons’ response to visual stimuli, Seelig placed the flies into a small virtual reality arena in which the flies could be presented with simple patterns of light. By monitoring the genetically encoded calcium indicators in the cells, Seelig could visualize nerve activity as each fly was exposed to different stimuli.
The researchers found that each neuron responded to visual stimuli in specific regions of the fly’s field of view. “Each cell seemed to have its receptive field in a slightly different area of that space,” Vivek explains. Further, they found that the orientation of the patterns that they projected onto the walls of the arena influenced the ring cells’ response: for example, vertical bars elicited a stronger response than horizontal bars for most cells.
Flies have an innate tendency to walk or fly toward vertically-oriented stimuli, but Vivek and Seelig were nonetheless surprised by the ring neurons’ strong bias towards detecting such patterns. Further, Seelig says, this preference for specific orientations parallels what others have found in larger animals. Neurons in the primary visual cortex of mammalian brains known as simple cells function similarly—identifying basic visual patterns and being tuned to their orientation. “A wide range of visual animals seem to use the same basic feature set when they break down the visual scene,” Vivek says, explaining that in humans, such simple features are combined by later brain regions into increasingly complex ones to eventually produce representations for faces.
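The orientation preference described above is commonly summarized with an orientation selectivity index (OSI), a standard measure from the vision literature — not a statistic reported in this paper, just the usual way such tuning is quantified:

```python
import numpy as np

def orientation_selectivity(angles_deg, responses):
    """Compute a simple orientation selectivity index from mean responses
    at a set of bar orientations:
        OSI = (R_pref - R_orth) / (R_pref + R_orth)
    where R_pref is the response at the best orientation and R_orth the
    response 90 degrees away.  0 means no preference; 1 means the cell
    responds only at its preferred orientation."""
    angles = np.asarray(angles_deg) % 180
    resp = np.asarray(responses, dtype=float)
    i_pref = int(np.argmax(resp))
    orth_angle = (angles[i_pref] + 90) % 180
    i_orth = int(np.argmin(np.abs(angles - orth_angle)))
    return (resp[i_pref] - resp[i_orth]) / (resp[i_pref] + resp[i_orth])

# A hypothetical cell that, like most ring neurons here, prefers
# vertical bars (0 degrees) over horizontal ones (90 degrees)
osi = orientation_selectivity([0, 45, 90, 135], [30.0, 18.0, 10.0, 17.0])
```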
He says it is not clear whether fruit flies reassemble the features in their visual field in the same way, or whether basic representations are instead converted directly into guidance for actions. “It’s an open question how complex a shape a fly needs to recognize and respond to,” he says.
The scientists also found that the ring neurons responded similarly to the visual environment regardless of whether the flies were stationary or walking. Flying diminished the response somewhat, but overall, Seelig says, visual patterns influenced the neurons’ activity far more than the insects’ behavior. “These particular neurons seem to filter out visual features, then send that information to other parts of the central complex that may transform that information into a behavioral signal. So this may be one of the major entry points for visual information to the region,” says Seelig.
Determining what happens next to the information received by ring neurons is an important question for Vivek and Seelig, who say they will expand their studies by testing the activity of other neurons in the central complex. “By marching through these networks, we hope to begin to understand how sensory information is integrated to make motor decisions,” Vivek explains.

Stanford scientists build a ‘brain stethoscope’ to turn seizures into music
When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures. 
Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The ensemble was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.
He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting them into electronic performances.
Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.
He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.
Upon a first listen, the duo realized they had done more than create an interesting piece of music.
“My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears,” Chafe said. “It was like turning a radio dial from a static-filled station to a clear one.”
If they could achieve the same result with real-time brain activity data, they might be able to develop a tool to allow caregivers for people with epilepsy to quickly listen to the patient’s brain waves to hear whether an undetected seizure might be occurring.
Parvizi and Chafe dubbed the device a “brain stethoscope.”
The sound of a seizure
The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.
In the moments leading up to the seizure event, though, each of the singers begins to improvise. The notes become progressively louder and more scattered, as the full seizure event occurs (the ictal state). The way Chafe has orchestrated his singers, one can hear the electrical storm originate on one side of the brain and eventually cross over into the other hemisphere, creating a sort of sing-off between the two sides of the brain.
After about 30 seconds of full-on chaos, the singers begin to calm, trailing off into their post-ictal rhythm. Occasionally, one or two will pipe up erratically, but on the whole, the choir sounds extremely fatigued.
It’s the perfect representation of the three phases of a seizure event, Parvizi said.
Part art exhibit, part experiment
Caring for a person with seizures can be very difficult, as not all seizure activity manifests itself with behavioral cues. It’s often impossible to know whether a person with epilepsy is acting confused because they are having a seizure, or if they are experiencing the type of confusion that is a marker of the post-ictal seizure phase.
To that end, Parvizi and Chafe hope to apply their work to develop a device that listens for the telltale brain patterns of an ongoing seizure or a post-ictal fatigued brain state.
"Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place," Parvizi said.
The device can also offer biofeedback to non-epileptic patients who want to hear the music their own brain waves create.
The effort to build this device is funded by Stanford’s Bio-X Interdisciplinary Initiatives Program (Bio-X IIP), which provides money for  interdisciplinary projects that have potential to improve human health in innovative ways. Bio-X seed grants have funded 141 research collaborations connecting hundreds of faculty since 2000. The proof-of-concept projects have produced hundreds of publications, dozens of patents, and more than a tenfold return on research funds to Stanford.
From a clinical perspective, the work is still very experimental.
"We’ve really just stuck our finger in there," Chafe said. "We know that the music is fascinating and that we can hear important dynamics, but there are still wonderful revelations to be made."
Next year, Chafe and Parvizi plan to unveil a version of the system at Stanford’s Cantor Arts Center. Visitors will don a headset that will transmit an EEG of their brain activity to their handheld device, which will convert it into music in real time.
"This is what I like about Stanford," Parvizi said. "It nurtures collaboration between fields that are seemingly light-years apart  – we’re neurology and music professors! – and our work together will hopefully make a positive impact on the world we live in."

Stanford scientists build a ‘brain stethoscope’ to turn seizures into music

When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures.

Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The musical troupe was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.

He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting those changing levels into electronic performances.

Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.

He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.

Upon a first listen, the duo realized they had done more than create an interesting piece of music. [Listen to the audio here]

"My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears," Chafe said. "It was like turning a radio dial from a static-filled station to a clear one."

If they could achieve the same result with real-time brain activity data, they might be able to develop a tool to allow caregivers for people with epilepsy to quickly listen to the patient’s brain waves to hear whether an undetected seizure might be occurring.

Parvizi and Chafe dubbed the device a “brain stethoscope.”

The sound of a seizure

The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
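The mapping described here — electrode activity modulating the pitch of a vocal tone — can be sketched in a few lines. The base frequency, scaling constants, and amplitude values below are illustrative stand-ins, not details of Chafe's actual system.

```python
# Toy sonification: map each EEG channel's amplitude to a pitch offset
# around a base vocal tone. All constants are illustrative.

BASE_FREQ_HZ = 220.0      # rough vocal register for the "singer"
SEMITONE = 2 ** (1 / 12)  # equal-temperament pitch step

def amplitude_to_pitch(amplitude_uv, max_uv=100.0, pitch_range=12):
    """Scale an electrode amplitude (microvolts) to a frequency in Hz.

    Higher activity raises the singer's pitch, up to pitch_range
    semitones (one octave here) above the base tone.
    """
    level = max(0.0, min(amplitude_uv / max_uv, 1.0))  # clamp to [0, 1]
    return BASE_FREQ_HZ * SEMITONE ** (level * pitch_range)

# One frame of hypothetical amplitudes from three selected electrodes:
frame = [12.0, 55.0, 98.0]
voices = [amplitude_to_pitch(a) for a in frame]
```

Quiet channels stay near the base tone; a channel saturating at `max_uv` sings a full octave higher, which is one simple way a listener could hear a seizure's amplitude surge as rising pitch.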

Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.

In the moments leading up to the seizure event, though, each of the singers begins to improvise. The notes become progressively louder and more scattered as the full seizure event (the ictal state) occurs. As Chafe has orchestrated his singers, one can hear the electrical storm originate on one side of the brain and eventually cross over into the other hemisphere, creating a sort of sing-off between the two sides of the brain.

After about 30 seconds of full-on chaos, the singers begin to calm, trailing off into their post-ictal rhythm. Occasionally, one or two will pipe up erratically, but on the whole, the choir sounds extremely fatigued.

It’s the perfect representation of the three phases of a seizure event, Parvizi said.

Part art exhibit, part experiment

Caring for a person with seizures can be very difficult, as not all seizure activity manifests itself with behavioral cues. It’s often impossible to know whether a person with epilepsy is acting confused because they are having a seizure, or if they are experiencing the type of confusion that is a marker of the post-ictal seizure phase.

To that end, Parvizi and Chafe hope to apply their work to develop a device that listens for the telltale brain patterns of an ongoing seizure or a post-ictal fatigued brain state.

"Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place," Parvizi said.

The device can also offer biofeedback to non-epileptic patients who want to hear the music their own brain waves create.

The effort to build this device is funded by Stanford’s Bio-X Interdisciplinary Initiatives Program (Bio-X IIP), which provides money for interdisciplinary projects that have the potential to improve human health in innovative ways. Bio-X seed grants have funded 141 research collaborations connecting hundreds of faculty since 2000. The proof-of-concept projects have produced hundreds of publications, dozens of patents, and more than a tenfold return on research funds to Stanford.

From a clinical perspective, the work is still very experimental.

"We’ve really just stuck our finger in there," Chafe said. "We know that the music is fascinating and that we can hear important dynamics, but there are still wonderful revelations to be made."

Next year, Chafe and Parvizi plan to unveil a version of the system at Stanford’s Cantor Arts Center. Visitors will don a headset that will transmit an EEG of their brain activity to their handheld device, which will convert it into music in real time.

"This is what I like about Stanford," Parvizi said. "It nurtures collaboration between fields that are seemingly light-years apart – we’re neurology and music professors! – and our work together will hopefully make a positive impact on the world we live in."

Filed under brainwaves EEG neural activity seizures music brain stethoscope biofeedback neuroscience science

122 notes

Nanoscale neuronal activity measured for the first time

A new technique that allows scientists to measure the electrical activity in the communication junctions of the nervous system has been developed by a researcher at Queen Mary University of London.

The junctions in the central nervous system that allow information to flow between neurons, known as synapses, are around 100 times smaller than the width of a human hair (one micrometer or less), and as such are difficult to target, let alone measure.


By applying high-resolution scanning probe microscopy, which allows three-dimensional visualisation of the structures, the team were able to measure and record the flow of current in small synaptic terminals for the first time.

“We replaced the conventional low-resolution optical system with a high-resolution microscope based on a nanopipette,” said Dr Pavel Novak, a bioengineering specialist from Queen Mary’s School of Engineering and Materials Science.

“The nanopipette hovers above the surface of the sample and scans the structure to reveal its three-dimensional topography. The same nanopipette then attaches to the surface at selected locations on the structure to record electrical activity. By repeating the same procedure for different locations of the neuronal network we can obtain a three-dimensional map of its electrical properties and activity.”
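The procedure Novak describes is a two-pass loop: scan the topography first, then attach and record at selected sites. The sketch below is a schematic of that control flow only; the function names and the fake dome-shaped surface are hypothetical stand-ins for the instrument's actual drivers.

```python
# Schematic of the hover/scan/record loop described above. All hardware
# calls (measure_height, record_current) are hypothetical stand-ins.

def measure_height(x, y):
    """Stand-in for the nanopipette hover measurement at (x, y)."""
    return 1.0 - 0.01 * (x ** 2 + y ** 2)  # fake dome-shaped terminal

def record_current(x, y):
    """Stand-in for attaching the pipette and recording current (pA)."""
    return 5.0 * measure_height(x, y)

def map_synaptic_terminal(grid, hotspots):
    """First pass: 3-D topography; second pass: currents at chosen sites."""
    topography = {(x, y): measure_height(x, y) for x, y in grid}
    activity = {(x, y): record_current(x, y) for x, y in hotspots}
    return topography, activity

grid = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]
topo, currents = map_synaptic_terminal(grid, hotspots=[(0, 0), (1, 1)])
```

Repeating the same two-pass loop over different locations of the neuronal network is what yields the combined three-dimensional map of structure and electrical activity the article describes.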

The research, published (Wednesday 18 September) in Neuron, opens a new window into the neuronal activity at nanometre scale, and may contribute to the wider effort of understanding the function of the brain represented by the Brain Activity Map Project (BRAIN initiative), which aims to map the function of each individual neuron in the human brain.

(Source: qmul.ac.uk)

Filed under neural activity BRAIN initiative nervous system CNS synapses ion channels neuroscience science

57 notes

Capturing brain activity with sculpted light
Researchers in Vienna develop new imaging technique to study the function of entire nervous systems

Scientists at the Campus Vienna Biocenter (Austria) have found a way to overcome some of the limitations of light microscopy. Applying the new technique, they can record the activity of a worm’s brain with high temporal and spatial resolution, ultimately linking brain anatomy to brain function. The journal Nature Methods publishes the details in its current issue.
A major aim of today’s neuroscience is to understand how an organism’s nervous system processes sensory input and generates behavior. To achieve this goal, scientists must obtain detailed maps of how the nerve cells are wired up in the brain, as well as information on how these networks interact in real time.
The organism many neuroscientists turn to in order to study brain function is a tiny, transparent worm found in rotting soil. The simple nematode C. elegans is equipped with just 302 neurons that are connected by roughly 8000 synapses. It is the only animal for which a complete nervous system has been anatomically mapped.
Researchers have so far focused on studying the activity of single neurons and small networks in the worm, but have not been able to establish a functional map of the entire nervous system. This is mainly due to limitations in the imaging techniques they employ: the activity of single cells can be resolved with high precision, but simultaneously looking at the function of all neurons that comprise entire brains has been a major challenge. Thus, there was always a trade-off between spatial or temporal accuracy and the size of brain regions that could be studied.
Scientists at Vienna’s Research Institute of Molecular Pathology (IMP), the Max Perutz Laboratories (MFPL), and the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna have now closed this gap, developing a high-speed imaging technique with single-neuron resolution that bypasses these limitations. In a paper published online in Nature Methods, the teams of Alipasha Vaziri and Manuel Zimmer describe the technique, which is based on their ability to “sculpt” the three-dimensional distribution of light in the sample. With this new kind of microscopy, they are able to record the activity of 70% of the nerve cells in a worm’s head with high spatial and temporal resolution.
“Previously, we would have to scan the focused light by the microscope in all three dimensions”, says quantum physicist Robert Prevedel. “That takes far too long to record the activity of all neurons at the same time. The trick we invented tinkers with the light waves in a way that allows us to generate “discs” of light in the sample. Therefore, we only have to scan in one dimension to get the information we need. We end up with three-dimensional videos that show the simultaneous activities of a large number of neurons and how they change over time.” Robert Prevedel is a Senior Postdoc in the lab of Alipasha Vaziri, who is an IMP-MFPL Group Leader and is heading the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna, where the new technique was developed.
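The speed-up Prevedel describes comes from the dimensionality of the scan: a focused point must visit every voxel of the volume, while a sculpted "disc" of light captures a whole plane per step, leaving only one axis to scan. A back-of-the-envelope comparison, with purely illustrative numbers:

```python
# Illustrative comparison of scan steps per imaged volume for
# point-scanning versus plane ("disc") illumination. The pixel and
# plane counts are made up for scale, not taken from the paper.

def steps_point_scan(nx, ny, nz):
    """A focused point must dwell at every voxel in the volume."""
    return nx * ny * nz

def steps_plane_scan(nz):
    """A sculpted disc of light images a whole x-y plane at once,
    so only the z axis needs to be scanned."""
    return nz

nx = ny = 256  # pixels per plane (illustrative)
nz = 20        # depth planes covering the worm's head (illustrative)
speedup = steps_point_scan(nx, ny, nz) / steps_plane_scan(nz)
```

Under these assumed numbers the plane scan needs tens of steps where the point scan needs over a million, which is why scanning in only one dimension makes whole-brain recording at video rates plausible.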
However, the new microscopic method is only half the story. Visualising the neurons requires tagging them with a fluorescent protein that lights up when it binds to calcium, signaling the nerve cells’ activity. “The neurons in a worm’s head are so densely packed that we could not distinguish them on our first images”, explains neurobiologist Tina Schrödel, co-first author of the study. “Our solution was to insert the calcium sensor into the nuclei rather than the entire cells, thereby sharpening the image so we could identify single neurons.” Tina Schrödel is a Doctoral Student in the lab of the IMP Group Leader Manuel Zimmer.
The new technique, which came about through a close collaboration of physicists and neurobiologists, has great potential beyond studies in worms, according to the researchers. It will open the way for experiments that were not possible before. One of the questions to be addressed is how the brain processes sensory information to “plan” specific movements and then executes them. This ambitious project will require further refinement of both the microscopy and the computational methods in order to study freely moving animals. The team in Vienna aims to achieve this goal within the coming two years.


Filed under brain function nerve cells C. elegans nervous system neural activity neuroscience science

86 notes

Discovery helps to unlock brain’s speech-learning mechanism

USC scientists have discovered a population of neurons in the brains of juvenile songbirds that are necessary for allowing the birds to recognize the vocal sounds they are learning to imitate.


These neurons encode a memory of learned vocal sounds and form a crucial (and hitherto only theorized) part of the neural system that allows songbirds to hear, imitate and learn their species’ songs — just as human infants acquire speech sounds.

The discovery will allow scientists to uncover the exact neural mechanisms that allow songbirds to hear their own self-produced songs, compare them to the memory of the song that they are trying to imitate and then adjust their vocalizations accordingly.

Because this brain-behavior system is thought to be a model for how human infants learn to speak, understanding it could prove crucial to future understanding and treatment of language disorders in children. In both songbirds and humans, feedback of self-produced vocalizations is compared to memorized vocal sounds and progressively refined to achieve a correct imitation.

“Every neurodevelopmental disorder you can think of — including Tourette syndrome, autism and Rett syndrome — entails in some way a breakdown in auditory processing and vocal communication,” said Sarah Bottjer, senior author of an article on the research that appears in the Journal of Neuroscience on Sept. 4. “Understanding mechanisms of vocal learning at a cellular level is a huge step toward being able to someday address the biological issues behind the behavioral issues.”

Bottjer, professor of neurobiology at the USC Dornsife College of Letters, Arts and Sciences, collaborated with lead author Jennifer Achiro, a graduate student at USC, to record the activity of individual neurons in songbirds’ brains using electrodes.

In the basal ganglia — a complex system of neurons in the brain responsible for, among other things, procedural learning — Bottjer and Achiro were able to isolate two different types of neurons in young songbirds: ones that were activated only when the birds heard themselves singing and others that were activated only when the birds heard the songs of adult birds that they were trying to imitate.

The two sets of neurons allow the songbirds to recognize both their current behavior and a goal behavior that they would like to achieve.

“The process of learning speech requires the brain to compare feedback of current vocal behavior to a memory of target vocal sounds,” Achiro said. “The discovery of these two distinct populations of neurons means that this brain region contains separate neural representation of current and goal behaviors. Now, for the first time, we can test how these two neural representations are compared so that correct matches between the two are somehow rewarded.”
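One simple way to picture the comparison Achiro describes is as a similarity score between the feedback of current song and the stored goal template, with a "reward" signal when the match clears a threshold. The sketch below is purely conceptual — the vectors, similarity measure, and threshold are invented for illustration and are not a model from the paper.

```python
# Conceptual sketch of a feedback-vs-template comparison. The vectors
# and the threshold are invented; this is not the study's actual model.

def cosine_similarity(a, b):
    """Similarity between two activity vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def match_reward(feedback, template, threshold=0.9):
    """Emit a reward when the current song resembles the goal song."""
    return cosine_similarity(feedback, template) >= threshold

tutor_song = [0.9, 0.1, 0.8, 0.3]        # "goal" representation
early_attempt = [0.2, 0.9, 0.1, 0.7]     # poor imitation
late_attempt = [0.85, 0.15, 0.75, 0.35]  # close imitation
```

In this toy picture, the early attempt earns no reward while the late attempt does — the kind of progressive refinement toward a correct imitation the researchers describe.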

The next step for scientists will be to learn how the brain rewards correct matches between feedback of current vocal behavior and the stored goal memory as songbirds bring their song progressively closer to their goal, Bottjer said.

(Source: news.usc.edu)

Filed under songbirds neural activity basal ganglia vocal learning speech neuroscience science

65 notes

First to measure the concerted activity of a neuronal circuit

Neurobiologists from the Friedrich Miescher Institute for Biomedical Research have been the first to measure the concerted activity of a neuronal circuit in the retina as it extracts information about a moving object. With their novel and powerful approach they can now not only visualize networks of neurons but also measure their function. These insights are urgently needed for a better understanding of the processes in the brain in health and disease.


For many decades, electrophysiology and genetics have been the main tools in the toolbox of approaches used to study individual neurons in the central nervous system and thereby understand perception and behavior. In the last five years, however, neurobiology has been riding a wave of technological advances that have brought unprecedented insights: optogenetics and genetically encoded activity sensors have allowed scientists to control and measure the activity of clearly defined neurons, and the application of rabies viruses has enabled the visualization of networks of interconnected nerve cells. What was still missing was the link between the neural circuit and the monitoring of its activity.

Scientists from the Friedrich Miescher Institute for Biomedical Research have now been the first to measure the concerted activity of a neuronal circuit in the retina as it extracts information about the movement of an object.

In a world defined through eyesight, it is crucial to be able to discern whether something moves towards us, moves away or moves next to us. It comes as no surprise then that in the retina several parallel neuronal circuits are reserved for the extraction of information about movement and that most of them are dedicated to the analysis of the direction of motion.

As they report online in Neuron, Keisuke Yonehara and Karl Farrow, two Postdoctoral Fellows in Botond Roska’s team at the FMI, have now been able to monitor the activity of all circuit elements in a motion sensitive retinal circuit at once, and pinpoint the site, at a subcellular level, where the information about the direction of the movement becomes encoded. To achieve this, they used genetically altered rabies viruses expressing calcium sensors developed by the laboratory of Klaus Conzelmann in Munich. The special property of rabies viruses is that they move across connected neurons and therefore are able to deliver the sensors to all circuit elements within a defined neuronal circuit. Simultaneous two-photon imaging allowed them then to monitor activity in every part of the neuronal circuit at once, even in subcellular compartments, such as axons, synapses and dendrites.

"We are extremely thrilled that with this new method, which combines the power of genetically altered rabies viruses with very powerful two-photon microscopy, we are now able to link circuit architecture with activity and ultimately function," comments Yonehara. "We have illustrated the power of the method for a better understanding of the perception of movement and are convinced that the method will allow us to reach a better understanding of many processes in the retina and in other parts of the brain."

(Source: medicalxpress.com)

Filed under optogenetics neural activity retina retinal circuit nerve cells neuroscience science

378 notes

Harvard creates brain-to-brain interface, allows humans to control other animals with thoughts alone
Researchers at Harvard University have created the first noninvasive brain-to-brain interface (BBI) between a human… and a rat. Simply by thinking the appropriate thought, the BBI allows the human to control the rat’s tail. This is one of the most important steps towards BBIs that allow for telepathic links between two or more humans — which is a good thing in the case of friends and family, but terrifying if you stop to think about the nefarious possibilities of a fascist dictatorship with mind control tech.
In recent years there have been huge advances in the field of brain-computer interfaces, where your thoughts are detected and “understood” by a sensor attached to a computer, but relatively little work has been done in the opposite direction (computer-brain interfaces). This is because it’s one thing for a computer to work out what a human is thinking (by asking or observing their actions), but another thing entirely to inject new thoughts into a human brain. To put it bluntly, we have almost no idea of how thoughts are encoded by neurons in the brain. For now, the best we can do is create a computer-brain interface that stimulates a region of the brain that’s known to create a certain reaction — such as the specific part of the motor cortex that’s in charge of your fingers. We don’t have the power to move your fingers in a specific way — that would require knowing the brain’s encoding scheme — but we can make them jerk around.
Which brings us neatly onto Harvard’s human-to-rat brain-to-brain interface. The human wears a run-of-the-mill EEG-based BCI, while the rat is equipped with a focused ultrasound (FUS) computer-brain interface (CBI). FUS is a relatively new technology that allows the researchers to excite a very specific region of neurons in the rat’s brain using an ultrasound signal. The main advantage of FUS is that, unlike most brain-stimulation techniques, such as deep brain stimulation (DBS), it isn’t invasive. For now the FUS equipment is fairly bulky, but future versions might be small enough for use in everyday human CBIs.
With the EEG equipped, the BCI detects whenever the human looks at a specific pattern on a computer screen. The BCI then fires off a command to the rat’s CBI, which causes ultrasound to be beamed into the region of the rat’s motor cortex that deals with tail movement. As you can see in the video above, this causes the rat’s tail to move. The researchers report that the human BCI has an accuracy of 94%, and that it generally takes around 1.5 seconds for the entire process — from the human deciding to look at the screen, through to the movement of the rat’s tail. In theory, the human could trigger a rodent tail-wag by simply thinking about it, rather than having to look at a specific pattern — but presumably, for the sake of this experiment, the researchers wanted to focus on the FUS CBI, rather than the BCI.
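The pipeline in the paragraph above — a pattern detected on the EEG side, a command relayed, ultrasound fired at the tail region of the motor cortex — is at heart a simple event loop. The sketch below is a schematic of that flow only: the threshold detector, the power values, and the ultrasound trigger are hypothetical stand-ins, not the real system's components.

```python
# Schematic of the BCI -> CBI relay described above. The detector,
# power samples, and ultrasound trigger are hypothetical stand-ins.

def bci_detects_pattern(power_at_strobe_freq, threshold=0.5):
    """Hypothetical detector: looking at the flickering on-screen
    pattern raises EEG power at the strobe frequency above a threshold."""
    return power_at_strobe_freq > threshold

def fire_fus_at_tail_cortex(log):
    """Stand-in for triggering focused ultrasound on the rat's side."""
    log.append("tail moved")

events = []
for power in [0.2, 0.8, 0.9]:  # made-up EEG power samples over time
    if bci_detects_pattern(power):
        fire_fus_at_tail_cortex(events)
```

A real system would of course add the reported detection error (about 6% of trials) and the roughly 1.5-second end-to-end latency on top of this bare relay logic.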
Moving forward, the researchers now need to work on the transmitting of more complex ideas, such as hunger or sexual arousal, from human to rat. At some point, they’ll also have to put the FUS CBI on a human, to see if thoughts can be transferred in the opposite direction. Finally, we’ll need to combine an EEG and FUS into a single unit, to allow for bidirectional sharing of thoughts and ideas. Human-to-human telepathy is the most obvious use, but what if the same bidirectional technology also allows us to really communicate with animals, such as dogs? There would be huge ethical concerns, of course, especially if a dictatorial tyrant uses the tech to control our thoughts — but the same can be said of almost every futuristic, transhumanist technology.

Harvard creates brain-to-brain interface, allows humans to control other animals with thoughts alone

Researchers at Harvard University have created the first noninvasive brain-to-brain interface (BBI) between a human… and a rat. Simply by thinking the appropriate thought, the BBI allows the human to control the rat’s tail. This is one of the most important steps towards BBIs that allow for telepathic links between two or more humans — which is a good thing in the case of friends and family, but terrifying if you stop to think about the nefarious possibilities of a fascist dictatorship with mind control tech.

In recent years there have been huge advances in the field of brain-computer interfaces, where your thoughts are detected and “understood” by a sensor attached to a computer, but relatively little work has been done in the opposite direction (computer-brain interfaces). This is because it’s one thing for a computer to work out what a human is thinking (by asking or observing their actions), but another thing entirely to inject new thoughts into a human brain. To put it bluntly, we have almost no idea of how thoughts are encoded by neurons in the brain. For now, the best we can do is create a computer-brain interface that stimulates a region of the brain that’s known to create a certain reaction — such as the specific part of the motor cortex that’s in charge of your fingers. We don’t have the power to move your fingers in a specific way — that would require knowing the brain’s encoding scheme — but we can make them jerk around.

Which brings us neatly onto Harvard’s human-rat brain-to-brain interface. The human wears a run-of-the-mill EEG-based BCI, while the rat is equipped with a focused ultrasound (FUS) computer-brain interface (CBI). FUS is a relatively new technology that allows the researchers to excite a very specific region of neurons in the rat’s brain using an ultrasound signal. The main advantage of FUS is that, unlike most brain-stimulation techniques, such as DBS, it isn’t invasive. For now it looks like the FUS equipment is fairly bulky, but future versions might be small enough for use in everyday human CBIs.

With the EEG equipped, the BCI detects whenever the human looks at a specific pattern on a computer screen. The BCI then fires off a command to the rat’s CBI, which causes ultrasound to be beamed into the region of the rat’s motor cortex that deals with tail movement. As you can see in the video above, this causes the rat’s tail to move. The researchers report that the human BCI has an accuracy of 94%, and that the entire process generally takes around 1.5 seconds — from the human deciding to look at the screen, through to the movement of the rat’s tail. In theory, the human could trigger a rodent tail-wag by simply thinking about it, rather than having to look at a specific pattern — but presumably, for the sake of this experiment, the researchers wanted to focus on the FUS CBI, rather than the BCI.
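The pipeline just described — classify the EEG, and if the human is attending to the pattern, trigger the ultrasound — can be sketched in a few lines. This is purely illustrative: the function names, the power-based detector, and the threshold are hypothetical stand-ins, since the actual classifier and hardware interface aren't detailed here.

```python
# Hypothetical sketch of the BBI control loop. The detector and the
# threshold value are assumptions for illustration, not the paper's method.

def detect_ssvep_power(eeg_window):
    """Stand-in for the EEG classifier: mean power of the signal in
    the current window (a real system would isolate the frequency of
    the flickering on-screen pattern)."""
    return sum(sample ** 2 for sample in eeg_window) / len(eeg_window)

def fire_fus_pulse():
    """Stand-in for commanding the focused-ultrasound transducer aimed
    at the tail region of the rat's motor cortex."""
    return "tail_twitch"

def bbi_step(eeg_window, threshold=0.5):
    """One pass of the pipeline: if the human appears to be attending
    to the pattern, trigger the rat-side CBI."""
    if detect_ssvep_power(eeg_window) > threshold:
        return fire_fus_pulse()
    return None

# A window dominated by a strong oscillation crosses the threshold...
assert bbi_step([0.9, -0.9] * 64) == "tail_twitch"
# ...while near-flat baseline EEG does not.
assert bbi_step([0.05, -0.05] * 64) is None
```

The reported ~1.5-second end-to-end latency would be dominated by the EEG windowing and classification step in a loop like this, not by the ultrasound pulse itself.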

Moving forward, the researchers now need to work on transmitting more complex ideas, such as hunger or sexual arousal, from human to rat. At some point, they’ll also have to put the FUS CBI on a human, to see if thoughts can be transferred in the opposite direction. Finally, we’ll need to combine an EEG and FUS into a single unit, to allow for bidirectional sharing of thoughts and ideas. Human-to-human telepathy is the most obvious use, but what if the same bidirectional technology also allows us to really communicate with animals, such as dogs? There would be huge ethical concerns, of course, especially if a dictatorial tyrant used the tech to control our thoughts — but the same can be said of almost every futuristic, transhumanist technology.

Filed under brain-to-brain interface transcranial focused ultrasound neural activity BCI neuroscience science

76 notes

A faster vessel for charting the brain

Princeton University researchers have created “souped up” versions of the calcium-sensitive proteins that for the past decade or so have given scientists an unparalleled view and understanding of brain-cell communication.

Reported July 18 in the journal Nature Communications, the enhanced proteins developed at Princeton respond more quickly to changes in neuron activity, and can be customized to react to different, faster rates of neuron activity. Together, these characteristics would give scientists a more precise and comprehensive view of neuron activity.

The researchers sought to improve the function of proteins known as green fluorescent protein/calmodulin protein (GCaMP) sensors, an amalgam of various natural proteins that are a popular form of sensor proteins known as genetically encoded calcium indicators, or GECIs. Once introduced into the brain via the bloodstream, GCaMPs react to the calcium ions that accompany cell activity by glowing fluorescent green. Scientists use this fluorescence to trace the path of neural signals throughout the brain as they happen.
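As a rough sketch of the calcium-to-fluorescence relationship (not the authors' model), indicator response is commonly approximated by a Hill equation: fluorescence rises steeply around the sensor's dissociation constant and saturates at high calcium. The Kd and Hill coefficient below are assumed illustrative values, not measurements from the Princeton study.

```python
# Illustrative Hill-equation model of calcium-indicator fluorescence.
# kd_uM and hill_n are assumed values for illustration only.

def gcamp_fluorescence(ca_uM, kd_uM=0.3, hill_n=2.5, f_max=1.0):
    """Fractional fluorescence as a function of free calcium (in uM)."""
    return f_max * ca_uM**hill_n / (kd_uM**hill_n + ca_uM**hill_n)

low = gcamp_fluorescence(0.05)   # resting calcium: dim
mid = gcamp_fluorescence(0.3)    # at Kd: half-maximal by definition
high = gcamp_fluorescence(10.0)  # strong activity: near saturation

assert low < 0.1
assert abs(mid - 0.5) < 1e-9
assert high > 0.95
```

The saturating shape of this curve is also why, as described below, a rush of calcium during intense activity quickly "fills up" the sensor.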

GCaMPs and other GECIs have been invaluable to neuroscience, said corresponding author Samuel Wang, a Princeton associate professor of molecular biology and the Princeton Neuroscience Institute. Scientists have used the sensors to observe brain signals in real time, and to delve into previously obscure neural networks such as those in the cerebellum. GECIs are necessary for the BRAIN Initiative that President Barack Obama announced in April, Wang said. The estimated $3 billion project to map the activity of every neuron in the human brain cannot be done with traditional methods, such as probes that attach to the surface of the brain. “There is no possible way to complete that project with electrodes, so you have to do it with other tools — GECIs are those tools,” he said.

Despite their value, however, the proteins are still limited when it comes to keeping up with the fast-paced, high-voltage ways of brain cells, and various research groups have attempted to address these limitations over the years, Wang said.

“GCaMPs have made significant contributions to neuroscience so far, but there have been some limits and researchers are running up against those limits,” Wang said.

One shortcoming is that GCaMPs are about one-tenth of a second slower than neurons, which can fire hundreds of times per second, Wang said. The proteins activate after neural signals begin, and mark the end of a signal when brain cells have (by neuronal standards) long since moved on to something else, Wang said. A second current limitation is that GCaMPs can bind only four calcium ions at a time. Higher rates of cell activity cannot be fully explored because GCaMPs fill up quickly on the accompanying rush of calcium.

The Princeton GCaMPs respond more quickly to changes in calcium so that changes in neural activity are seen more immediately, Wang said. By making the sensors a bit more sensitive and fragile — the proteins bond more quickly with calcium and come apart more readily to stop glowing when calcium is removed — the researchers whittled down the roughly 20 millisecond response time of existing GCaMPs to about 10 milliseconds, Wang said.
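As a rough illustration of why halving the response time matters, one can model the sensor's rise after a step increase in calcium as a single exponential — a simplifying assumption, with the time constants taken as the approximate 20 ms and 10 ms figures quoted above.

```python
import math

# Simplified single-exponential model of a sensor's response to a
# calcium step. The time constants are the approximate figures quoted
# in the article; the exponential form itself is an assumption.

def fraction_responded(t_ms, tau_ms):
    """Fraction of the sensor's full response reached t_ms after a step."""
    return 1.0 - math.exp(-t_ms / tau_ms)

old = fraction_responded(10.0, tau_ms=20.0)  # pre-existing GCaMP
new = fraction_responded(10.0, tau_ms=10.0)  # faster Princeton variant

# Ten milliseconds after a spike, the faster sensor has covered far
# more of its dynamic range, so brief events are less smeared in time.
assert new > old
```

In this toy model the faster variant reaches about 63% of its full response within 10 ms, versus roughly 39% for the slower one — the practical difference between resolving two closely spaced spikes and blurring them together.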

The researchers also tweaked certain GCaMPs to be sensitive to different ranges of calcium ion concentration, meaning that high rates of neural activity can be better explored. “Each probe is sensitive to one range or another, but when we put them together they make a nice choir,” Wang said.
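The "choir" idea can be sketched with two hypothetical probes whose response curves are steep over different calcium bands, so together they cover a wider dynamic range than either alone. The Kd values below are assumed for illustration, not taken from the paper.

```python
# Sketch of complementary probes tuned to different calcium ranges.
# Kd values are illustrative assumptions, not measured parameters.

def hill(ca, kd, n=2.0):
    """Fractional response of a calcium probe (Hill equation)."""
    return ca**n / (kd**n + ca**n)

def informative(ca, kd, lo=0.1, hi=0.9):
    """A probe is most informative where its curve is steep, i.e.
    between ~10% and ~90% of saturation."""
    return lo < hill(ca, kd) < hi

high_affinity_kd = 0.2  # resolves weak activity
low_affinity_kd = 3.0   # resolves strong activity

# A small calcium transient lands in the high-affinity probe's range...
assert informative(0.3, high_affinity_kd) and not informative(0.3, low_affinity_kd)
# ...while a large one lands in the low-affinity probe's range.
assert informative(3.0, low_affinity_kd) and not informative(3.0, high_affinity_kd)
```

Each probe alone saturates or barely responds outside its band; read together, the pair reports across both regimes.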

The researchers’ work also revealed the location of a “bottleneck” in GCaMPs that occurs when calcium concentration is high, which poses a third limitation of the existing sensors, Wang said. “Now that we know where that bottleneck is, we think we can design the next generation of proteins to get around it,” Wang said. “We think if we open up that bottleneck, we can get a probe that responds to neuronal signals in one millisecond.”

The faster protein that the Princeton researchers developed could pair with work in other laboratories to improve other areas of GCaMP function, Wang said. For instance, a research group out of the Howard Hughes Medical Institute reported in Nature July 17 that it developed a GCaMP with a brighter fluorescence. Such improvements on existing sensors gradually open up more of the brain to exploration and understanding, said Wang, adding that the Princeton researchers will soon introduce their sensor into fly and mammalian brains.

“At some level, what we’ve done is like taking apart an engine, lubing up the parts and putting it back together. We took what was the best version of the protein at the time and made changes to the letter code of the protein,” Wang said. “We want to watch the whole symphony of thousands of neurons do their thing, and we think this variant of GCaMPs will help us do that better than anyone else has.”

(Source: blogs.princeton.edu)

Filed under neural activity proteins GCaMP calcium ions neuroscience science
