Neuroscience

Articles and news from the latest research reports.

Learn Dutch in your sleep
When you have learned words in another language, it may be worth listening to them again in your sleep. A study funded by the Swiss National Science Foundation (SNSF) has now shown that this method reinforces memory.
Reluctant students and sleepyheads take note: a study conducted at the universities of Zurich and Fribourg has shown that German-speaking students are better at remembering the meaning of newly learned Dutch words when they hear the words again in their sleep. “Our method is easy to use in daily life and can be adopted by anyone,” says study director and biopsychologist Björn Rasch. However, the results were obtained in strictly controlled laboratory conditions. It remains to be seen whether they can be successfully transferred to everyday situations.
Quiet playback
In their trial, which has been published in the journal “Cerebral Cortex”, Thomas Schreiner and Björn Rasch asked 60 volunteers to learn pairs of Dutch and German words at ten o’clock in the evening. Half of the volunteers then went to bed. While they slept, some of the Dutch words they had learned before going to bed were played back quietly enough not to awaken them. The remaining volunteers stayed awake to listen to the Dutch words on the playback.
The scientists awoke the sleeping volunteers at two in the morning, then tested everyone’s knowledge of the new words a little later. The group that had been asleep were better at remembering the German translations of the Dutch words they had heard in their sleep. The volunteers who had remained awake were unable to remember words they had heard on the playback any better than those they had not.
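The design described above, often called targeted memory reactivation, comes down to comparing recall rates for replayed (“cued”) versus non-replayed words within each group. A minimal sketch of that scoring, with invented word pairs and outcomes purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class WordPair:
    dutch: str
    german: str
    cued: bool        # was this word replayed during the night?
    recalled: bool    # was the German translation recalled at test?

def recall_rate(pairs, cued):
    """Fraction of words recalled, split by whether they were replayed."""
    subset = [p for p in pairs if p.cued == cued]
    return sum(p.recalled for p in subset) / len(subset)

# Toy data illustrating the reported pattern for the sleep group:
# replayed words are remembered more often than non-replayed ones.
pairs = [
    WordPair("huis", "Haus", cued=True, recalled=True),
    WordPair("brood", "Brot", cued=True, recalled=True),
    WordPair("appel", "Apfel", cued=True, recalled=False),
    WordPair("stoel", "Stuhl", cued=False, recalled=True),
    WordPair("fiets", "Fahrrad", cued=False, recalled=False),
    WordPair("boek", "Buch", cued=False, recalled=False),
]

print(recall_rate(pairs, cued=True))   # 2/3 of cued words recalled
print(recall_rate(pairs, cued=False))  # 1/3 of uncued words recalled
```

The key comparison is within the sleep group: only there should the cued rate exceed the uncued rate, while for the group that stayed awake the two rates should not differ.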
Reinforcement of spontaneous activation
Schreiner and Rasch believe that their results provide further evidence that sleep helps memory, probably because the sleeping brain spontaneously reactivates previously learned material. Playing this material back during sleep can reinforce the activation process and thus improve recall. For example, a person who plays a memory card game while exposed to the scent of roses, and is then re-exposed to the same scent while asleep, is subsequently better at remembering where a particular card lies in the stack, as Rasch showed in another study a few years ago.
Schreiner and Rasch have now observed the beneficial effect of sleep on learning foreign words. A certain amount of swotting is still needed, though. “You can only successfully activate words that you have learned before you go to sleep. Playing back words you don’t know while you’re asleep has no effect,” says Schreiner.

(Image caption: The light grey coil on the left is a conventional, commercially available TMS coil. The black coil on the right is the new, innovative version designed to fit a smaller non-human primate’s cranium and work with the neural monitoring device. Credit: Photo courtesy of Warren Grill.)
Watching Individual Neurons Respond to Magnetic Therapy
Engineers and neuroscientists at Duke University have developed a method to measure the response of an individual neuron to transcranial magnetic stimulation (TMS) of the brain. The advance will help researchers understand the underlying physiological effects of TMS — a procedure used to treat psychiatric disorders — and optimize its use as a therapeutic treatment.
TMS uses magnetic fields created by electric currents running through a wire coil to induce neural activity in the brain. With the flip of a switch, researchers can cause a hand to move or influence behavior. The technique has long been used alongside other therapies in the hope of improving outcomes for conditions including depression and substance abuse.
While studies have demonstrated the efficacy of TMS, the technique’s physiological mechanisms have long been lost in a “black box.” Researchers know what goes into the treatment and the results that come out, but do not understand what’s happening in between.
Part of the reason for this mystery lies in the difficulty of measuring neural responses during the procedure; the comparatively tiny activity of a single neuron is lost in the tidal wave of current being generated by TMS. But the new study demonstrates a way to remove the proverbial haystack.
The results were published online June 29 in Nature Neuroscience.
“Nobody really knows what TMS is doing inside the brain, and given that lack of information, it has been very hard to interpret the outcomes of studies or to make therapies more effective,” said Warren Grill, professor of biomedical engineering, electrical and computer engineering, and neurobiology at Duke. “We set out to try to understand what’s happening inside that black box by recording activity from single neurons during the delivery of TMS in a non-human primate. Conceptually, it was a very simple goal. But technically, it turned out to be very challenging.”
First, Grill and his colleagues in the Duke Institute for Brain Sciences (DIBS) engineered new hardware that could separate the TMS current from the neural response, which is thousands of times smaller. Once that was achieved, however, they discovered that their recording instrument was doing more than simply recording.
The TMS magnetic field was creating an electric current through the electrode measuring the neuron, raising the possibility that this current, instead of the TMS, was causing the neural response. The team had to characterize this current and make it small enough to ignore.
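One common way to handle a stimulation artifact of this kind, once it has been characterized, is to blank the contaminated samples around each pulse and interpolate across the gap. The sketch below illustrates that general idea only; the function name, window length, and data are invented and are not taken from the paper:

```python
# Sketch: suppress a stimulation artifact in a recorded trace by
# blanking the samples after each pulse and linearly interpolating
# across the gap. Values and window length are illustrative.

def blank_artifact(trace, pulse_indices, window):
    """Replace `window` samples after each pulse with a straight line."""
    out = list(trace)
    for i in pulse_indices:
        start, end = i, min(i + window, len(out) - 1)
        if end <= start:
            continue
        step = (out[end] - out[start]) / (end - start)
        for k in range(start + 1, end):
            out[k] = out[start] + step * (k - start)
    return out

trace = [0.0, 0.1, 5.0, 4.0, 3.0, 0.2, 0.1]   # large artifact at indices 2-4
cleaned = blank_artifact(trace, pulse_indices=[1], window=4)
print(cleaned)  # artifact samples replaced by a ramp from 0.1 to 0.2
```

Real systems combine this kind of software correction with hardware measures, such as the amplifier isolation described above, since the artifact can saturate the electronics before any samples are recorded.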
Finally, the researchers had to account for vibrations caused by the large current passing through the TMS device’s small coil of wire — a design problem in and of itself, because the typical TMS coil is too large for a non-human primate’s head. Because the coil is physically connected to the skull, the vibration was jostling the measurement electrode.
The researchers were able to compensate for each artifact, however, and see for the first time into the black box of TMS. They successfully recorded the action potentials of an individual neuron moments after TMS pulses and observed changes in its activity that significantly differed from activity following placebo treatments.
Grill worked with Angel Peterchev, assistant professor in psychiatry and behavioral science, biomedical engineering, and electrical and computer engineering, on the design of the coil. The team also included Michael Platt, director of DIBS and professor of neurobiology, and Mark Sommer, a professor of biomedical engineering.
They demonstrated that the technique could be recreated in different labs. “So, any modern lab working with non-human primates and electrophysiology can use this same approach in their studies,” said Grill.
The researchers hope that many others will take their method and use it to reveal the effects TMS has on neurons. Once a basic understanding is gained of how TMS interacts with neurons on an individual scale, its effects could be amplified and the therapeutic benefits of TMS increased.
“Studies with TMS have all been empirical,” said Grill. “You could look at the effects and change the coil, frequency, duration or many other variables. Now we can begin to understand the physiological effects of TMS and carefully craft protocols rather than relying on trial and error. I think that is where the real power of this research is going to come from.”

The secrets of children’s chatter: research shows boys and girls learn language differently
Experts believe language uses both a mental dictionary and a mental grammar. The mental ‘dictionary’ stores sounds, words and common phrases, while the mental ‘grammar’ handles the real-time composition of longer words and sentences: building the longer word ‘walked’ from the smaller one ‘walk’, for example.
However, most research into understanding how these processes work has been carried out with adults.
“Most researchers agree that the way we use language in our minds involves both storing and real-time composition,” said lead researcher Dr Cristina Dye, a specialist in child language development at Newcastle University. “But a lot of the specifics about how this happens are unclear, such as identifying exactly which parts of language are stored and which are composed.
“Most research on this topic has concentrated on adults and we wanted to see if studying children could help us learn more about these processes.”
A test based on 29 irregular verbs and 29 regular verbs was presented to the young participants. Only verbs that eight-year-olds would be expected to know were used.
They were presented with two sentences. One featured the verb in the context of the sentence, with the second sentence containing a blank to allow the children to produce the past-tense form. For example: Every day I walk to school. Just like every day, yesterday I ____ to school.
The children were asked to produce the missing word as quickly and as accurately as possible and their response times were recorded. The results were then analysed to discover which words were stored or created in real-time.
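Analyses of this kind typically compare mean response times over correct trials for regular versus irregular verbs. A toy sketch of that comparison, with invented trial records and timings purely for illustration:

```python
from statistics import mean

# Hypothetical trial records: (verb, verb_type, response_time_ms, correct)
trials = [
    ("walk",  "regular",   820, True),
    ("jump",  "regular",   790, True),
    ("play",  "regular",   860, True),
    ("go",    "irregular", 700, True),
    ("eat",   "irregular", 740, True),
    ("run",   "irregular", 980, False),  # incorrect trials are excluded
]

def mean_rt(trials, verb_type):
    """Mean response time over correct trials of one verb type."""
    rts = [rt for _, vt, rt, ok in trials if vt == verb_type and ok]
    return mean(rts)

print(mean_rt(trials, "regular"))    # mean RT for regular past tenses
print(mean_rt(trials, "irregular"))  # mean RT for irregular past tenses
```

In studies of this design, effects of word frequency on response time are taken as a signature of dictionary-style retrieval, whereas frequency-insensitive responses suggest real-time composition.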
Results showed girls were more likely to memorise words and phrases (drawing on their mental dictionary), while boys more often used mental grammar (i.e. assembled forms from smaller parts).
The findings could have implications in the way youngsters are taught in the classroom, believes Dr Dye, who is based in the Centre for Research in Linguistics and Language Sciences.
She said: “What we found as we carried out the study was that girls were far more likely to remember forms like ‘walked’ while boys relied much more on their mental grammar to compose ‘walked’ from ‘walk’ and ‘ed’. This fits in with previous research which has identified differences between the sexes when it comes to memorising facts and events, where girls also seem to have an advantage compared to boys.
“One interesting aside to this is that as girls often outperform boys at school, it could be that the curriculum is put together in a way which benefits the way girls learn. It may be worth further investigation to see if this is the case and if so, is there a way lessons could be changed so boys can get the most out of them too.”
Paper: Children’s Computation of Complex Linguistic Forms: A study of Frequency and Imageability Effects
(Image: Getty Images)

Noninvasive brain control
Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on light-sensitive proteins that can suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled.
MIT engineers have now developed the first light-sensitive molecule that enables neurons to be silenced noninvasively, using a light source outside the skull. This makes it possible to do long-term studies without an implanted light source. The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once.
This noninvasive approach could pave the way to using optogenetics in human patients to treat epilepsy and other neurological disorders, the researchers say, although much more testing and development is needed. Led by Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, the researchers described the protein in the June 29 issue of Nature Neuroscience.
Optogenetics, a technique developed over the past 15 years, has become a common laboratory tool for shutting off or stimulating specific types of neurons in the brain, allowing neuroscientists to learn much more about their functions.
The neurons to be studied must be genetically engineered to produce light-sensitive proteins known as opsins, which are channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells. Researchers then insert a light source, such as an optical fiber, into the brain to control the selected neurons.
Such implants can be difficult to insert, however, and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. In addition, it is difficult to perform long-term studies of chronic diseases with these implants.
Mining nature’s diversity
To find a better alternative, Boyden, graduate student Amy Chuong, and colleagues turned to the natural world. Many microbes and other organisms use opsins to detect light and react to their environment. Most of the natural opsins now used for optogenetics respond best to blue or green light.
Boyden’s team had previously identified two light-sensitive chloride ion pumps that respond to red light, which can penetrate deeper into living tissue. However, these molecules, found in the archaea Haloarcula marismortui and Haloarcula vallismortis, did not induce a strong enough photocurrent — an electric current in response to light — to be useful in controlling neuron activity.
Chuong set out to improve the photocurrent by looking for relatives of these proteins and testing their electrical activity. She then engineered one of these relatives by making many different mutants. The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent — enough to shut down neural activity.
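Conceptually, the screen amounts to filtering candidate variants for red-light sensitivity and then selecting the one with the largest photocurrent. A toy sketch of that selection logic, with invented variant names and values purely for illustration:

```python
# Sketch of the screening logic: among candidate opsin variants, keep
# those that still respond to red light and pick the strongest
# photocurrent. Records and numbers here are invented.

variants = [
    {"name": "variant_a", "red_sensitive": True,  "photocurrent_pA": 120},
    {"name": "variant_b", "red_sensitive": True,  "photocurrent_pA": 310},
    {"name": "variant_c", "red_sensitive": False, "photocurrent_pA": 450},
]

def best_red_variant(variants):
    """Strongest photocurrent among variants that retain red sensitivity."""
    red = [v for v in variants if v["red_sensitive"]]
    return max(red, key=lambda v: v["photocurrent_pA"])

print(best_red_variant(variants)["name"])  # variant_b: red-sensitive and strongest
```

Note that variant_c, despite the largest photocurrent, is rejected because it lost the red-light response that makes noninvasive delivery possible.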
“This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,” says Boyden, who is a member of MIT’s Media Lab and the McGovern Institute for Brain Research.
Using this opsin, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal’s head. The suppression occurred as deep as 3 millimeters in the brain, and was just as effective as that of existing silencers that rely on other colors of light delivered via conventional invasive illumination.
A key advantage to this opsin is that it could enable optogenetic studies of animals with larger brains, says Garret Stuber, an assistant professor of psychiatry and cell biology and physiology at the University of North Carolina at Chapel Hill.
“In animals with larger brains, people have had difficulty getting behavior effects with optogenetics, and one possible reason is that not enough of the tissue is being inhibited,” he says. “This could potentially alleviate that.”
Restoring vision
Working with researchers at the Friedrich Miescher Institute for Biomedical Research in Switzerland, the MIT team also tested Jaws’s ability to restore the light sensitivity of retinal cells called cones. In people with a disease called retinitis pigmentosa, cones slowly atrophy, eventually causing blindness.
Friedrich Miescher Institute scientists Botond Roska and Volker Busskamp have previously shown that some vision can be restored in mice by engineering those cone cells to express light-sensitive proteins. In the new paper, Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye’s natural opsins and offered a greater range of light sensitivity, making it potentially more useful for treating retinitis pigmentosa.
This type of noninvasive approach to optogenetics could also represent a step toward developing optogenetic treatments for diseases such as epilepsy, which could be controlled by shutting off misfiring neurons that cause seizures, Boyden says. “Since these molecules come from species other than humans, many studies must be done to evaluate their safety and efficacy in the context of treatment,” he says.
Boyden’s lab is working with many other research groups to further test the Jaws opsin for other applications. The team is also seeking new light-sensitive proteins and is working on high-throughput screening approaches that could speed up the development of such proteins.

Neuroscience: The man who saw time stand still

One day, a man saw time itself stop, and as David Robson discovers, unpicking what happened is revealing that we can all experience temporal trickery too. 
It started as a headache, but soon became much stranger. Simon Baker entered the bathroom to see if a warm shower could ease his pain. “I looked up at the shower head, and it was as if the water droplets had stopped in mid-air”, he says. “They came into hard focus rapidly, over the course of a few seconds”. Where you’d normally perceive the streams as more of a blur of movement, he could see each one hanging in front of him, distorted by the pressure of the air rushing past. The effect, he recalls, was very similar to the way the bullets travelled in the Matrix movies. “It was like a high-speed film, slowed down.”
The next day, Baker went to hospital, where doctors found that he had suffered an aneurysm. The experience was soon overshadowed by the more immediate threat to his health, but in a follow-up appointment, he happened to mention what happened to his neurologist, Fred Ovsiew at Northwestern University in Chicago, who was struck by the vivid descriptions. “He was a very bright guy, and very eloquent”, says Ovsiew, who recently wrote about Baker in the journal NeuroCase. (Baker’s identity was anonymised, which is typical for such studies, so this is not his real name.)

Read more

Filed under zeitraffer phenomenon akinetopsia motion perception psychology neuroscience science

140 notes

Monkeys also believe in winning streaks
Humans have a well-documented tendency to see winning and losing streaks in situations that, in fact, are random. But scientists disagree about whether the “hot-hand bias” is a cultural artifact picked up in childhood or a predisposition deeply ingrained in the structure of our cognitive architecture.
Now, in the first study of this systematic error in decision making in non-human primates, researchers find that monkeys also share our unfounded belief in winning and losing streaks. The results suggest that the penchant to see patterns that don’t actually exist may be inherited—an evolutionary adaptation that may have provided our ancestors a selective advantage when foraging for food in the wild, according to lead author Tommy Blanchard, a doctoral candidate in brain and cognitive sciences at the University of Rochester.
The cognitive bias may be difficult to override even in situations that are truly random. This inborn tendency to feel that we are on a roll or in a slump may help explain why gambling can be so alluring and why the stock market is so prone to wild swings, said coauthor Benjamin Hayden, assistant professor of brain and cognitive sciences at the University of Rochester.
Hayden, Blanchard, and Andreas Wilke, an assistant professor of psychology at Clarkson University, reported their findings in the July issue of the Journal of Experimental Psychology: Animal Learning and Cognition.
To measure whether monkeys actually believe in winning streaks, the researchers had to create a computerized game that was so captivating monkeys would want to play for hours. “Luckily, monkeys love to gamble,” said Blanchard. So the team devised a fast-paced task in which each monkey could choose right or left and receive a reward when they guessed correctly.
The researchers created three types of play, two with clear patterns (the correct answer tended to repeat on one side or to alternate from side to side) and a third in which the lucky pick was completely random. Where clear patterns existed, the three rhesus monkeys in the study quickly guessed the correct sequence. But in the random scenarios, the monkeys continued to make choices as if they expected a “streak”. In other words, even when rewards were random, the monkeys favored one side.
The monkeys showed the hot-hand bias consistently over weeks of play and an average of 1,244 trials per condition. “They had lots and lots of opportunities to get over this bias, to learn and change, and yet they continued to show the same tendency,” said Blanchard.
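The persistence described above can be illustrated with a toy simulation (this is not the study’s actual task code; the `stay_bias` parameter and the coin-flip reward schedule are illustrative assumptions). An agent that expects streaks repeats a rewarded side more often than chance, yet on a truly random schedule this buys it nothing:

```python
import random

def simulate(trials=1244, stay_bias=0.7, seed=1):
    """Toy 'hot-hand' agent on a truly random left/right guessing task.

    After a rewarded guess the agent repeats the same side with
    probability stay_bias (> 0.5), as if expecting a streak; after a
    loss it picks a side at random. Rewards are independent coin
    flips, so no strategy can score above 50% in expectation.
    """
    rng = random.Random(seed)
    sides = ("left", "right")
    choice = rng.choice(sides)
    wins = stays_after_win = 0
    for _ in range(trials):
        won = choice == rng.choice(sides)  # random reward schedule
        wins += won
        if won:
            stay = rng.random() < stay_bias
            stays_after_win += stay
            # either repeat the 'hot' side or switch to the other one
            choice = choice if stay else sides[choice == "left"]
        else:
            choice = rng.choice(sides)
    return wins / trials, stays_after_win / max(wins, 1)

accuracy, stay_rate = simulate()
```

Over 1,244 trials the agent's accuracy hovers around 50% while its stay rate after wins stays near `stay_bias` — a bias that never pays off and never goes away, mirroring the monkeys' behavior.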
So why do monkeys and humans share this false belief in a run of luck even when faced over and over with evidence that the results are random? The authors speculate that the distribution of food in the wild, which is not random, may be the culprit. “If you find a nice juicy beetle on the underside of a log, this is pretty good evidence that there might be a beetle in a similar location nearby, because beetles, like most food sources, tend to live near each other,” explained Hayden.
Evolution has also primed our brains to look for patterns, added Hayden. “We have this incredible drive to see patterns in the world, and we also have this incredible drive to learn. I think it’s very related to why we like music, and why we like to do crossword puzzles, Sudoku, and things like that. If there’s a pattern there, we’re on top of it. And if there may or may not be a pattern there, that’s even more interesting.”
Understanding the hot-hand bias could inform treatment for gambling addiction and provide insights for investors, said Hayden. “If a belief in winning streaks is hardwired, then we may want to look for more rigorous retraining for individuals who cannot control their gambling. And investors should keep in mind that humans have an inherited bias to believe that if a stock goes up one day, it will continue to go up.”
The results also could provide nuance to our understanding of free will, said Blanchard, who was drawn to the study of decision making during prior graduate training in philosophy. “Biases in our decision-making mechanisms, like this bias towards belief in winning and losing streaks, say something really deep about what sorts of creatures we are. We often like to think we make decisions based only on the information we’re conscious of. But we’re not always aware of why we make certain decisions or believe certain things.
“We’re a complex mix of biases and heuristics and statistical reasoning. When you put it all together, that’s how you get sophisticated behavior. We don’t know where a lot of these biases come from, but this study—and others like it—suggest many of them are due to cognitive mechanisms we share with our primate relatives,” said Blanchard.

Filed under hot-hand fallacy decision making primates gambling psychology neuroscience science

155 notes

Upfront and personal: Scientists model human reasoning in the brain’s prefrontal cortex
Located at the forward end of the brain’s frontal lobe, the mammalian prefrontal cortex (PFC) is the seat of many of our most distinctive cognitive abilities – collectively referred to as executive function – including planning, decision-making, and coordinating thoughts and actions with internal goals. Perhaps its most important attribute – one that is apparently unique to H. sapiens – is reasoning, which uses Bayesian (probabilistic) inference to mitigate uncertainty and inform adaptive behavior.

While the structural details of this remarkable process have historically remained elusive, scientists at the Institut National de la Santé et de la Recherche Médicale, Ecole Normale Supérieure and Université Pierre et Marie Curie, all in Paris, have recently employed computational modeling and neuroimaging to show that the human prefrontal cortex involves two interactive reasoning pathways that embody hypothesis testing for evaluating, accepting and rejecting behavioral strategies. More specifically, their model describes reason-guided behavior as an online algorithm combining Bayesian inference over multiple stored strategies with hypothesis testing that can update those strategies.

In addition – as proposed in a previous work – the scientists conclude that since the frontopolar cortex (FPC), located in the anterior-most portion of the frontal lobes, is human-specific and a key component in executive decision-making, the ability to make inferences about concurrent strategies and decide to switch directly to one of these alternative strategies is unique to humans as well.
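The two ingredients named above — Bayesian inference over stored strategies plus a hypothesis-testing rule for rejecting the current one — can be sketched schematically (this is a minimal illustration, not the authors' actual model; the likelihood values and the `SWITCH_THRESHOLD` criterion are made-up toy numbers):

```python
def update_posterior(prior, likelihoods):
    """One step of Bayes' rule over competing strategies."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three candidate strategies, each assigning a different probability
# to each observed outcome (hypothetical numbers, not from the paper).
posterior = [1 / 3, 1 / 3, 1 / 3]
current = 0                 # strategy currently guiding behavior
SWITCH_THRESHOLD = 0.2      # hypothetical rejection criterion

for outcome_likelihoods in [[0.2, 0.7, 0.5],
                            [0.1, 0.8, 0.4],
                            [0.2, 0.9, 0.3]]:
    posterior = update_posterior(posterior, outcome_likelihoods)
    if posterior[current] < SWITCH_THRESHOLD:
        # hypothesis testing: reject the current strategy and switch
        # directly to the most probable alternative
        current = max(range(len(posterior)), key=posterior.__getitem__)
```

After a few outcomes that the active strategy predicts poorly, its posterior falls below the criterion and control switches to the best-supported alternative — the behavior the paper attributes to the frontopolar pathway.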
Prof. Etienne Koechlin discussed the paper that he, Dr. Maël Donoso and Dr. Anne G. E. Collins published in Science.
Read more

Filed under prefrontal cortex executive function decision making reasoning neuroscience science

212 notes

Little or poor sleep may be associated with worse brain function when aging
Research published today in PLOS ONE by researchers at the University of Warwick indicates that sleep problems are associated with worse memory and executive function in older people.
The study, funded by the Economic and Social Research Council (ESRC), analysed sleep and cognitive (brain function) data from 3,968 men and 4,821 women who took part in the English Longitudinal Study of Ageing (ELSA). Respondents reported on the quality and quantity of their sleep over a one-month period.
The study showed that there is an association between both the quality and the duration of sleep and brain function, and that this association changes with age.
In adults aged 50 to 64 years, short sleep (<6hrs per night) and long sleep (>8hrs per night) were associated with lower brain function scores. By contrast, in older adults (65-89 years) lower brain function scores were only observed in long sleepers.
Dr Michelle A Miller says: “6-8 hours of sleep per night is particularly important for optimum brain function in younger adults. These results are consistent with our previous research, which showed that 6-8 hours of sleep per night was optimal for physical health, including the lowest risk of developing obesity, hypertension, diabetes, heart disease and stroke.”
Interestingly, in the younger pre-retirement aged adults, sleep quality did not have any significant association with brain function scores, whereas in the older adults (>65 years), there was a significant relationship between sleep quality and the observed scores.
“Sleep is important for good health and mental wellbeing”, says Professor Francesco Cappuccio. “Optimising sleep at an older age may help to delay the decline in brain function seen with age, or indeed may slow or prevent the rapid decline that leads to dementia.”
Dr Miller concludes that “if poor sleep is causative of future cognitive decline, non-pharmacological improvements in sleep may provide an alternative low-cost and more accessible Public Health intervention to delay or slow the rate of cognitive decline”.

Filed under brain function cognitive impairment memory sleep aging psychology neuroscience science

371 notes

Brain fills gaps to produce a likely picture
Researchers at Radboud University use visual illusions to demonstrate to what extent the brain interprets visual signals. They were surprised to discover that active interpretation occurs early on in signal processing. In other words, we see not only with our eyes, but with our brain, too. Current Biology is publishing these results in the July issue.
The results obtained by the Radboud University researchers are illustrated, for example, by the visual illusion on the left: we see a triangle that in fact is not there. The triangle is only suggested because of the way the ‘Pac-Man’ shapes are positioned; there appears to be a light-grey triangle on top of three black circles.
Seen in the fMRI
How does the brain do that? That was the question Peter Kok and Floris de Lange, from the Donders Institute at Radboud University in Nijmegen, asked themselves. Using fMRI, they discovered that the triangle – although non-existent – activates the primary visual cortex. This is the first area in the cortex to deal with a signal from the eyes.
The primary visual cortex is normally regarded as an area where eye signals are merely processed, but the results obtained by Kok and De Lange now refute that view.
Active interpretation
Recent theories assume that the brain does not simply process or filter external information, but actively interprets it. In the example described above, the brain decides it is more likely that a triangle would be on top of black circles than that three such circles, each with a bite taken out, would by coincidence point in a particular direction. After all, when we look around, we see triangles and circles more often than Pac-Man shapes.
Furthermore, objects very often lie on top of other things; just think of the books and piles of paper on your desk. The imaginary triangle is a feasible explanation for the bites taken out of the circles; the brain ‘understands’ they are ‘merely’ partly covered black circles.
The unexpected requires more processing
Kok and De Lange also noticed that whenever the Pac-Man shapes do not form a triangle, more brain activity is required. In the above image on the right, we see that the three Pac-Man shapes ‘underneath’ the triangle cause little brain activity (coloured blue), but the separate Pac-Man on the right causes more activity. This also fits in with the theory that perception is a question of interpretation: if something is easy to explain, less brain activity is needed to process that information, compared to when something is unexpected or difficult to account for – as in the adjacent diagram.
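This "less activity for well-explained input" idea is the core of predictive coding, and can be sketched in a few lines (a toy illustration, not the authors' analysis; the feature vectors and prediction values are made-up numbers):

```python
def prediction_error(observed, predicted):
    """Predictive-coding sketch: early visual activity is modelled as
    the residual mismatch between the input and the brain's
    top-down prediction of that input."""
    return sum(abs(o - p) for o, p in zip(observed, predicted))

# Toy feature vector for a 'Pac-Man' inducer (hypothetical numbers).
pacman = [1.0, 1.0, 1.0]

# Three inducers arranged as a triangle: the illusory-triangle
# hypothesis predicts them well, leaving only a small residual.
triangle_prediction = [0.9, 0.9, 0.9]

# A lone Pac-Man fits no simple occlusion story, so the default
# prediction is poor and the residual (activity) is large.
default_prediction = [0.3, 0.3, 0.3]

explained = prediction_error(pacman, triangle_prediction)   # small
unexplained = prediction_error(pacman, default_prediction)  # large
```

In this scheme, the well-predicted inducers 'underneath' the triangle produce little residual signal, while the unexplained lone Pac-Man produces a large one — matching the pattern of fMRI activity the study reports.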

Filed under visual illusions visual cortex brain activity neuroimaging shape perception neuroscience science

92 notes

Blocking key enzyme minimizes stroke injury

A drug that blocks the action of the enzyme Cdk5 could substantially reduce brain damage if administered shortly after a stroke, UT Southwestern Medical Center research suggests.

The findings, reported in the June 11 issue of the Journal of Neuroscience, show in rodent models that aberrant Cdk5 activity causes nerve cell death during stroke.

“If you inhibit Cdk5, then the vast majority of brain tissue stays alive without oxygen for up to one hour,” said Dr. James Bibb, Associate Professor of Psychiatry and Neurology and Neurotherapeutics at UT Southwestern and senior author of the study. “This result tells us that Cdk5 is a central player in nerve cell death.”

More importantly, development of a Cdk5 inhibitor as an acute neuroprotective therapy has the potential to reduce stroke injury.

“If we could block Cdk5 in patients who have just suffered a stroke, we may be able to reduce the number of patients in our hospitals who become disabled or die from stroke. Doing so would have a major impact on health care,” Dr. Bibb said.

While several pharmaceutical companies worked to develop Cdk5 inhibitors years ago, these efforts were largely abandoned since research indicated blocking Cdk5 long-term could have detrimental effects. At the time, many scientists thought aberrant Cdk5 activity played a major role in the development of Alzheimer’s disease and that Cdk5 inhibition might be beneficial as a treatment.

Based on Dr. Bibb’s research and that of others, Cdk5 has both good and bad effects. When working normally, Cdk5 adds phosphates to other proteins that are important to healthy brain function. On the flip side, researchers have found that aberrant Cdk5 activity contributes to nerve cell death following brain injury and can lead to cancer.

“Cdk5 regulates communication between nerve cells and is essential for proper brain function. Therefore, blocking Cdk5 long-term may not be beneficial,” Dr. Bibb said. “Until now, the connection between Cdk5 and stroke injury was unknown, as was the potential benefit of acute Cdk5 inhibition as a therapy.”

In this study, researchers administered a Cdk5 inhibitor directly into dissected brain slices after adult rodents suffered a stroke, in addition to measuring the post-stroke effects in Cdk5 knockout mice. 

“We are not yet at a point where this new treatment can be given for stroke. Nevertheless, this research brings us a step closer to developing the right kinds of drugs,” Dr. Bibb said. “We first need to know what mechanisms underlie the disease before targeted treatments can be developed that will be effective. As no Cdk5 blocker exists that works in a pill form, the next step will be to develop a systemic drug that could be used to confirm the study’s results and lead to a clinical trial at later stages.”

Currently, there is only one FDA-approved drug for acute treatment of stroke, the clot-busting drug tPA. Other treatment options include neurosurgical procedures to help minimize brain damage.

(Source: utsouthwestern.edu)

Filed under stroke nerve cells cdk5 brain function tPA cell death neuroscience science
