Neuroscience

Articles and news from the latest research reports.

97 notes

Discovery could eventually help diagnose and treat chronic pain
More than 100 million Americans suffer from chronic pain, but treating and studying it is complex and presents many challenges. Scientists have long searched for a way to measure pain objectively, and a new study from Brigham and Women’s Hospital advances that effort. The study appears in the January 2013 print edition of the journal Pain.

"While we need to be cautious in the interpretation of our results, this has the potential to be an exciting discovery for anyone who suffers from chronic pain," said Marco Loggia, PhD, the lead author of the study and a researcher in the Pain Management Center at BWH and the Department of Radiology at Massachusetts General Hospital. "We showed that specific brain patterns appear to track the severity of pain reported by patients, and can predict who is more likely to experience a worsening of chronic back pain while performing maneuvers designed to induce pain. If further research shows this metric is reliable, this is a step toward developing an objective scale for measuring pain in humans."

Specifically, researchers studied 16 adults with chronic back pain and 16 adults without pain, using a brain imaging technique called arterial spin labeling to examine patterns of brain connectivity (that is, how different brain regions interact, or “talk to each other”). They found that when a patient moved in a way that increased their back pain, a network of brain regions called the default mode network exhibited changes in its connections. Regions within the network (such as the medial prefrontal cortex) became less connected with the rest of the network, whereas regions outside it (such as the insula) became more connected with it. Some of these observations have been noted in previous studies of fibromyalgia patients, suggesting these changes in brain connectivity might reflect a general feature of chronic pain common to different patient populations.

"This is the first study using arterial spin labeling to show that common networking properties of the brain are affected by chronic pain," said study author Ajay Wasan, MD, MSc, Director of the Section of Clinical Pain Research at BWH. "This novel research supports the use of arterial spin labeling as a tool to evaluate how the brain encodes and is affected by clinical pain, and the use of resting default mode network connectivity as a potential neuroimaging biomarker for chronic pain perception."

Filed under pain chronic pain brain imaging arterial spin brain connectivity neuroscience science

44 notes

Research offers new targets for stroke treatments
New research from the University of Georgia identifies the mechanisms responsible for regenerating blood vessels in the brain.

Looking for ways to improve outcomes for stroke patients, researchers led by Susan Fagan, assistant dean for clinical programs at the UGA College of Pharmacy, used candesartan, a commonly prescribed blood pressure medication, to identify specific growth factors in the brain responsible for recovery after a stroke.

The results were published online Dec. 4 in the Journal of Pharmacology and Experimental Therapeutics.

Although candesartan has been shown to protect the brain after a stroke, its use is generally avoided because lowering a person’s blood pressure quickly after a stroke can cause problems, such as depriving the brain of much-needed oxygen, during the critical period that follows.

"The really unique thing we found is that candesartan can increase the secretion of brain-derived neurotrophic factor, and the effect is separate from the blood pressure-lowering effect," said study coauthor Ahmed Alhusban, a doctoral candidate in the College of Pharmacy. "This will support a new area for treatments of stroke and other brain injury."

Alhusban and Fagan worked with Anna Kozak, a research scientist in the college, and Adviye Ergul, a professor and director of the physiology graduate program at Georgia Health Sciences University. They are the first to show that the positive effects of candesartan on brain blood vessel growth are caused by brain-derived neurotrophic factor, or BDNF.

The research shows that when candesartan blocks the angiotensin II type 1 receptor, which lowers blood pressure, it stimulates the AT2 receptor and increases the secretion of BDNF, which encourages brain repair through the growth of new blood vessels.

"BDNF is a key player in learning and memory," said Fagan, the Albert W. Jowdy Professor. "A reduction of BDNF in the brain has been associated with Alzheimer’s disease and depression, so increasing this growth factor with a common medication is exciting."

AT2 is a brain receptor responsible for angiogenesis, or the growth of new blood vessels from pre-existing vessels. Angiogenesis is a normal and vital process in human growth and development, as well as in healing.

(Image: iStock)

Filed under brain blood vessels stroke brain injury candesartan blood pressure medicine science

123 notes

Evolution: It’s all in how you splice it
MIT biologists find that alternative splicing of RNA rewires signaling in different tissues and may often contribute to species differences.

When genes were first discovered, the canonical view was that each gene encodes a unique protein. However, biologists later found that segments of genes can be combined in different ways, giving rise to many different proteins.

This phenomenon, known as alternative RNA splicing, often alters the outputs of signaling networks in different tissues and may contribute disproportionately to differences between species, according to a new study from MIT biologists.

After analyzing vast amounts of genetic data, the researchers found that the same genes are expressed in the same tissue types, such as liver or heart, across mammalian species. However, alternative splicing patterns — which determine the segments of those genes included or excluded — vary from species to species.

“The core things that make a heart a heart are mostly determined by a heart-specific gene expression signature. But the core things that make a mouse a mouse may disproportionately derive from splicing patterns that differ from those of rats or other mammals,” says Chris Burge, an MIT professor of biology and biological engineering, and senior author of a paper on the findings in the Dec. 20 online edition of Science.

Lead author of the paper is MIT biology graduate student Jason Merkin. Other authors are Caitlin Russell, a former technician in Burge’s lab, and Ping Chen, a visiting graduate student at MIT.
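The combinatorial power of alternative splicing is easy to illustrate: if each of a gene's optional ("cassette") exons can be independently included or skipped, k cassette exons yield up to 2**k distinct transcripts. A toy sketch with invented exon names:

```python
from itertools import combinations

# Toy gene: constitutive exons are always included; each cassette exon can
# be kept or skipped independently, so k cassette exons give up to 2**k
# isoforms (ignoring other splicing modes such as intron retention).
ORDER = ["E1", "E2", "E3", "E4"]
constitutive = {"E1", "E4"}
cassette = ["E2", "E3"]

def isoforms():
    """Enumerate every exon combination, preserving genomic exon order."""
    result = []
    for k in range(len(cassette) + 1):
        for kept in combinations(cassette, k):
            exons = [e for e in ORDER if e in constitutive or e in kept]
            result.append("-".join(exons))
    return result

iso = isoforms()
print(iso)
```

With two cassette exons this toy gene produces four isoforms; real genes can have many more optional segments, which is why splicing can diversify a fixed gene repertoire so dramatically.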
Read more

Filed under evolution splicing RNA splicing gene expression genetics neuroscience science

149 notes

Researchers uncover major source of evolutionary differences among species
University of Toronto Faculty of Medicine researchers have uncovered a genetic basis for fundamental differences between humans and other vertebrates that could also help explain why humans are susceptible to diseases not found in other species.

Scientists have wondered why vertebrate species, which look and behave very differently from one another, nevertheless share very similar repertoires of genes. For example, despite obvious physical differences, humans and chimpanzees share a nearly identical set of genes.

The team sequenced and compared the composition of hundreds of thousands of genetic messages in equivalent organs, such as brain, heart and liver, from 10 different vertebrate species, ranging from human to frog. They found that alternative splicing — a process by which a single gene can give rise to multiple proteins — has dramatically changed the structure and complexity of genetic messages during vertebrate evolution.

The results suggest that differences in the ways genetic messages are spliced have played a major role in the evolution of fundamental characteristics of species. At the same time, the same process that makes species look different from one another could also account for differences in their disease susceptibility.

"The same genetic mechanisms responsible for a species’ identity could help scientists understand why humans are prone to certain diseases such as Alzheimer’s and particular types of cancer that are not found in other species," says Nuno Barbosa-Morais, the study’s lead author and a computational biologist in U of T Faculty of Medicine’s Donnelly Centre for Cellular and Biomolecular Research. "Our research may lead to the design of improved approaches to study and treat human diseases."

One of the team’s major findings is that the alternative splicing process is more complex in humans and other primates compared with species such as mouse, chicken and frog.

"Our observations provide new insight into the genetic basis of complexity of organs such as the human brain," says Benjamin Blencowe, Professor in U of T’s Banting and Best Department of Medical Research and the Department of Molecular Genetics, and the study’s senior author.

"The fact that alternative splicing is very different even between closely related vertebrate species could ultimately help explain how we are unique."

Filed under diseases evolution genes genetics splicing vertebrates neuroscience science

65 notes

ucsdhealthsciences:

Genomic “Hotspots” Offer Clues to Causes of Autism, Other Disorders

An international team, led by researchers from the University of California, San Diego School of Medicine, has discovered that “random” mutations in the genome are not quite so random after all. Their study, to be published in the journal Cell on December 21, shows that the DNA sequence in some regions of the human genome is quite volatile and can mutate ten times more frequently than the rest of the genome. Genes that are linked to autism and a variety of other disorders have a particularly strong tendency to mutate.

Clusters of mutations or “hotspots” are not unique to the autism genome but instead are an intrinsic characteristic of the human genome, according to principal investigator Jonathan Sebat, PhD, professor of psychiatry and cellular and molecular medicine, and chief of the Beyster Center for Molecular Genomics of Neuropsychiatric Diseases at UC San Diego.

“Our findings provide some insights into the underlying basis of autism—that, surprisingly, the genome is not shy about tinkering with its important genes,” said Sebat. “To the contrary, disease-causing genes tend to be hypermutable.”
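The hotspot idea (some regions mutating roughly ten times more often than the genomic baseline) can be illustrated with a toy simulation; the rates and site counts below are invented for illustration and are not from the study.

```python
import random

random.seed(42)

# Invented rates for illustration: hotspot sites mutate ten times more
# often than the genomic baseline, echoing the study's headline finding.
BASELINE_RATE = 1e-4
HOTSPOT_RATE = 10 * BASELINE_RATE
N_SITES = 100_000

def count_mutations(rate, n_sites):
    """Simulate independent per-site mutation events at the given rate."""
    return sum(random.random() < rate for _ in range(n_sites))

background = count_mutations(BASELINE_RATE, N_SITES)
hotspot = count_mutations(HOTSPOT_RATE, N_SITES)
print(background, hotspot)  # the hotspot count should be roughly 10x larger
```

The practical consequence mirrors the study's point: if disease-linked genes sit in hypermutable regions, they will accumulate new mutations far faster than a uniform-rate model would predict.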

Read more

66 notes

Method offers DNA blueprint of a single human cell
Humans, strawberries, honeybees, chickens and rats are among the many organisms to have had their DNA sequenced. But although sequencing a species is challenging, it is much harder to sequence the DNA of a single cell.

To get enough DNA for sequencing, thousands or even millions of cells are usually required. Working out which mutations occur in which cells is therefore almost impossible, and mutations present in only a few cells (such as early cancerous cells) are hidden altogether.

But a technique reported today in Science provides a way to copy DNA so that more than 90% of the genome of a single cell can be sequenced. The method also makes it easier to detect minor DNA sequence variations in single cells and so to find genetic differences between individual cells. Such differences can help to explain how cancer becomes more malignant, how reproductive cells emerge and even how individual neurons differ.

Sunney Xie, a chemical biologist at Harvard University in Cambridge, Massachusetts, and his colleagues have developed a technique, called multiple annealing and looping-based amplification cycles (MALBAC), that allows them to sequence 93% of the genome of a human cell. In MALBAC, DNA from a single cell is isolated, then short DNA molecules called primers are added. These are complementary to random parts of the DNA, which makes them stick to the strands and act as starting points for DNA replication.

The primers consist of two parts: a sticky eight-nucleotide portion that varies and binds to the DNA, plus a common sequence of 27 nucleotides. This common sequence stops the DNA from being copied too many times and massively cuts down amplification bias. It does this by incorporating itself into the newly copied strands so that they loop back on themselves, which prevents over-copying.
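The two-part primer design can be sketched in code. This is an illustrative toy: the 27-nucleotide common sequence below is a made-up placeholder (the article does not give the actual MALBAC sequence), and only the structure of the primer is shown, not the amplification chemistry.

```python
import random

random.seed(1)

# Placeholder 27-nt common sequence; invented for illustration, not the
# real MALBAC primer tail.
COMMON_27NT = "GTGAGTGATGGTTGAGGTAGTGTGGAG"

def make_primer():
    """A MALBAC-style primer: common 27-nt tail plus a random 8-nt end.

    The random eight nucleotides bind semi-randomly along the genome; the
    shared tail ends up on both ends of each copied fragment, letting the
    product loop back on itself so it is not copied again and again.
    """
    random_8nt = "".join(random.choice("ACGT") for _ in range(8))
    return COMMON_27NT + random_8nt

primers = [make_primer() for _ in range(5)]
for p in primers:
    print(p)
```

Because every primer shares the same tail but carries a different sticky end, a small pool of such primers can seed replication at many places across a single cell's genome at once.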
Easy Recipe

“MALBAC opens a door to many critical questions,” says Bing Ren, who studies gene regulation at the University of California, San Diego. For example, it can be used to examine how quickly mutations accumulate, and to find variations in gene-copy number and chromosomal abnormalities across a population of cells. It also helps to detect variants across more of the genome than other sequencing methods.

“I think people are going to start using it right away,” agrees James Eberwine, who works on single-cell genetics at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia. He adds that researchers may have to tweak conditions — such as the ratio of primers to genomic DNA — to get experiments to work.

But although MALBAC covers the genome more thoroughly than other techniques, it is not perfect. It still misses perhaps one-third of single-nucleotide variations. Also, the enzyme that copies the DNA is error prone, so the copying process itself can introduce variants that were not present in the cell.

Xie was able to weed out all false positives, but only by comparing individually sequenced genomes from three closely related cells. That will increase costs, and could prove impossible for certain tissue samples, says Nicholas Navin at the MD Anderson Cancer Center in Houston, Texas, who has developed his own techniques for single-cell sequencing.

Filed under DNA sequencing single-cell sequencing MALBAC genomes mutations genetics science

116 notes

Doing the math for how songbirds learn to sing

Scientists studying how songbirds stay on key have developed a statistical explanation for why some things are harder for the brain to learn than others.

“We’ve built the first mathematical model that uses a bird’s previous sensorimotor experience to predict its ability to learn,” says Emory biologist Samuel Sober. “We hope it will help us understand the math of learning in other species, including humans.”

Sober conducted the research with physiologist Michael Brainard of the University of California, San Francisco.

Their results, showing that adult birds correct small errors in their songs more rapidly and robustly than large errors, were published in the Proceedings of the National Academy of Sciences (PNAS).

Sober’s lab uses Bengalese finches as a model for researching the mechanisms of how the brain learns to correct vocal mistakes.

The researchers wanted to quantify the relationship between the size of a vocal error and the probability of the brain making a sensorimotor correction. The experiments were conducted on adult Bengalese finches outfitted with lightweight, miniature headphones.

As a bird sang into a microphone, the researchers used sound-processing equipment to trick the bird into thinking it was making vocal mistakes, by changing the bird’s pitch and altering the way the bird heard itself, in real-time.

“When we made small pitch shifts, the birds learned really well and corrected their errors rapidly,” Sober says. “As we made the pitch shifts bigger, the birds learned less well, until at a certain pitch, they stopped learning.”

The researchers used the data to develop a statistical model linking the size of a vocal error to whether a bird learns from it, including the cut-off point beyond which birds stop learning from sensorimotor mistakes. They are now developing additional experiments to test and refine the model.
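A simple stand-in for "small errors corrected well, large errors ignored" is to weight the corrective response by a factor that decays with error size, so the predicted correction collapses toward zero beyond some tolerance. This is an illustrative toy, not the model published in PNAS; the gain and tolerance parameters are invented.

```python
import math

def corrective_response(error_hz, gain=0.4, tolerance_hz=50.0):
    """Predicted pitch correction for a perceived error of a given size.

    The correction is proportional to the error for small errors but is
    down-weighted exponentially as the error grows, so very large imposed
    pitch shifts produce almost no learning. gain and tolerance_hz are
    made-up parameters for illustration.
    """
    weight = math.exp(-(error_hz / tolerance_hz) ** 2)
    return gain * error_hz * weight

for err in (10, 50, 150):
    print(err, round(corrective_response(err), 2))
```

In this toy model the fraction of the error that gets corrected shrinks as the imposed shift grows, reproducing the qualitative pattern the experiments found: rapid correction of small shifts, and essentially no learning once the shift is large enough.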

“We hope that our mathematical framework for how songbirds learn to sing could help in the development of human behavioral therapies for vocal rehabilitation, as well as increase our general understanding of how the brain learns,” Sober says.

Filed under vocal learning sensorimotor learning songbirds mathematical model neuroscience science

144 notes

Dragonflies have human-like ‘selective attention’
In a discovery that may prove important for cognitive science, our understanding of nature and applications for robot vision, researchers at the University of Adelaide have found evidence that the dragonfly is capable of higher-level thought processes when hunting its prey.

The discovery, published online in the journal Current Biology, is the first evidence that an invertebrate animal has brain cells for selective attention, which has so far only been demonstrated in primates.

Dr Steven Wiederman and Associate Professor David O’Carroll from the University of Adelaide’s Centre for Neuroscience Research have been studying insect vision for many years.

Using a tiny glass probe with a tip just 60 nanometres wide, about 1,500 times thinner than a human hair, the researchers discovered neuron activity in the dragonfly’s brain that enables this selective attention.

They found that when presented with more than one visual target, the dragonfly brain cell ‘locks on’ to one target and behaves as if the other targets don’t exist.

"Selective attention is fundamental to humans’ ability to select and respond to one sensory stimulus in the presence of distractions," Dr Wiederman says.

Associate Professor O’Carroll says this brain activity makes the dragonfly a more efficient and effective predator.

"Recent studies reveal similar mechanisms at work in the primate brain, but you might expect it there. We weren’t expecting to find something so sophisticated in lowly insects from a group that’s been around for 325 million years," Associate Professor O’Carroll says.

"We believe our work will appeal to neuroscientists and engineers alike. For example, it could be used as a model system for robotic vision. Because the insect brain is simple and accessible, future work may allow us to fully understand the underlying network of neurons and copy it into intelligent robots," he says.

Filed under dragonflies selective attention insect vision brain cells neuron activity neuroscience science

164 notes


Will we ever… have cyborg brains?

For the first time in over 15 years, Cathy Hutchinson brought a coffee to her lips and smiled. Cathy had suffered from the paralysing effects of a stroke, but when neurosurgeons implanted tiny recording devices in her brain, she could use her thought patterns to guide a robot arm that delivered her hot drink. This week, it was reported that Jan Scheuermann, who is paralysed from the neck down, could grasp and move a variety of objects by controlling a robotic arm with her mind.

In both cases the implants convert brain signals into digital commands that a robotic device can follow. It’s a remarkable achievement, one that could transform the lives of people debilitated through illness.
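The core of that conversion is a decoder that maps recorded neural activity onto a command a robotic device can follow. A hypothetical sketch of just the mapping step (real systems use far richer, individually calibrated models):

```python
# Hypothetical sketch: translate firing rates recorded from a few
# electrode channels into a 2-D velocity command for a robotic arm.
# The weight matrix below is made up for illustration; in practice it
# would be fitted during a calibration session with the patient.

WEIGHTS = [
    [0.10, -0.05, 0.02],   # each channel's contribution to x-velocity
    [-0.03, 0.08, 0.04],   # each channel's contribution to y-velocity
]

def decode(firing_rates):
    """Turn a vector of firing rates (spikes/s) into a (vx, vy) command."""
    return tuple(
        sum(w * r for w, r in zip(row, firing_rates))
        for row in WEIGHTS
    )

vx, vy = decode([20.0, 10.0, 5.0])
print(vx, vy)  # a small velocity command that the arm controller executes
```

This is a linear decoder, the simplest possible choice; the point is only that the implant's job is a translation from neural activity to digital commands, not that this is how any particular clinical system works.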

Yet it’s still a far cry from the visions of man fused with machine, or cyborgs, that grace computer games or sci-fi. The dream is to create the type of brain augmentations we see in fiction that provide cyborgs with advantages or superhuman powers. But the ones being made in the lab only aim to restore lost functionality – whether it’s brain implants that restore limb control, or cochlear implants for hearing.

Creating implants that enhance cognitive capabilities – say, an improved-vision “gadget” that could be taken from a shelf and plugged into the brain – is understandably a much tougher task. But some research groups are beginning to make inroads.

For instance, neuroscientists Matti Mintz from Tel Aviv University and Paul Verschure from Universitat Pompeu Fabra in Barcelona, Spain, are trying to develop an implantable chip that restores the ability to learn new motor functions, rather than simply regaining limb control. Verschure’s team has developed a mathematical model that mimics the flow of signals in the cerebellum, the region of the brain that plays an important role in movement control. The researchers programmed this model onto a circuit and connected it with electrodes to a rat’s brain. When they tried to teach the rat a conditioned motor reflex – to blink its eye when it sensed an air puff – while its cerebellum was “switched off” by anaesthesia, the rat couldn’t respond. But when the team switched the chip on, it recorded the signal from the air puff, processed it, and sent electrical impulses to the rat’s motor neurons. The rat blinked, and the effect lasted even after it woke up.
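The chip's role in that experiment can be caricatured as learning a sensory-to-motor association through repeated pairing. A deliberately simplified sketch (hypothetical, not the team's cerebellar model):

```python
# Simplified caricature of the eyeblink-conditioning loop described above:
# repeated pairings strengthen a puff->blink association, and once the
# association is strong enough the device drives the motor output.
# (Hypothetical illustration, not the published cerebellar circuit model.)

class CerebellarChip:
    def __init__(self):
        self.association = 0.0  # learned strength of the puff->blink link

    def train(self, trials):
        # Each paired trial nudges the association toward its maximum.
        for _ in range(trials):
            self.association += 0.2 * (1.0 - self.association)

    def respond(self, air_puff):
        # Fire the motor command only once the association is strong enough.
        return air_puff and self.association > 0.5

chip = CerebellarChip()
print(chip.respond(True))   # False: no conditioning yet
chip.train(trials=10)
print(chip.respond(True))   # True: the conditioned blink fires
```

The real model has to process signal timing as well as strength, but the same logic applies: the circuit stands in for the anaesthetised cerebellum, turning the sensed air puff into a motor impulse.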


Filed under brain robotics prosthetics implants bionics neuroscience science

400 notes


Human hands have ‘evolved for fighting’

Compared with apes, humans have shorter palms and fingers and longer, stronger, more flexible thumbs. Experts have long assumed these features evolved to help our ancestors make and use tools. But new evidence from the US suggests it was not just dexterity that shaped the human hand, but also violence.

Hands largely evolved through natural selection to form a punching fist, it is claimed. "The role aggression has played in our evolution has not been adequately appreciated," said Professor David Carrier, from the University of Utah.

"There are people who do not like this idea but it is clear that compared with other mammals, great apes are a relatively aggressive group with lots of fighting and violence, and that includes us. We’re the poster children for violence."

The forces of natural selection that drove hands to become nimble-fingered also turned them into weapons, Prof Carrier believes.

"Individuals who could strike with a clenched fist could hit harder without injuring themselves, so they were better able to fight for mates and thus be more likely to reproduce," he said.

"If a fist posture does provide a performance advantage for punching, the proportions of our hands also may have evolved in response to selection for fighting ability, in addition to selection for dexterity."

Filed under evolution aggression natural selection science
