Neuroscience

Articles and news from the latest research reports.

Experimental Cancer Drug Reverses Schizophrenia in Adolescent Mice
Johns Hopkins researchers say that an experimental anticancer compound appears to have reversed behaviors associated with schizophrenia and restored some lost brain cell function in adolescent mice with a rodent version of the devastating mental illness.
The drug is one of a class of compounds known as PAK inhibitors, which have been shown in animal experiments to confer some protection from brain damage due to Fragile X syndrome, an inherited disease in humans marked by intellectual disability. There is also some evidence, experts say, suggesting PAK inhibitors could be used to treat Alzheimer’s disease. And because the PAK protein itself can drive cell growth and initiate cancer, PAK inhibitors have also been tested as cancer treatments.
In the new Johns Hopkins-led study, reported online March 31 in the Proceedings of the National Academy of Sciences, the researchers found that the compound, called FRAX486, appears to halt an out-of-control biological “pruning” process in the schizophrenic brain during which important neural connections are unnecessarily destroyed. Working with mice that mimic the pathological progression of schizophrenia and related disorders, the researchers were able to partially restore disabled neurons so they could connect to other nerve cells.
The Johns Hopkins team says the findings in teenage mice are an especially promising step in efforts to develop better therapies for schizophrenia in humans, because schizophrenia symptoms typically appear in late adolescence and early adulthood.
“By using this compound to block excess pruning in adolescent mice, we also normalized the behavior deficit,” says study leader Akira Sawa, M.D., Ph.D., a professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. “That we could intervene in adolescence and still make a difference in restoring brain function in these mice is intriguing.”
For the mouse experiments, Sawa and his colleagues chemically turned down the expression of a gene known as Disrupted-in-Schizophrenia 1 (DISC1), whose protein appears to regulate the fate of neurons in the cerebral cortex responsible for “higher-order” functions, like information processing.
In studies of rodent brain cells, the researchers found that a DISC1 deficit caused deterioration of vital parts of the neuron called spines, which help neurons communicate with one another.
Reduced amounts of DISC1 protein also impact the development of a protein called Kalirin-7 (KAL7), which is needed to regulate another protein called Rac1. Without enough DISC1, KAL7 can’t adequately control Rac1 production and the development of neuronal spines. Excess Rac1 apparently erases spines and leads to excess PAK in the mice.
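The cascade described above can be read as a simple chain of qualitative rules: low DISC1 means low KAL7, low KAL7 means excess Rac1, excess Rac1 means excess PAK, and excess PAK erases spines, while a PAK inhibitor breaks the last link. The toy model below only illustrates that logic; the function name and labels are invented for illustration, not taken from the study.

```python
def spine_outcome(disc1, pak_inhibitor=False):
    """Toy qualitative model of the DISC1 -> KAL7 -| Rac1 -> PAK cascade.

    Labels stand in for protein levels; this illustrates the described
    chain of effects, not a biological simulation.
    """
    kal7 = "normal" if disc1 == "normal" else "low"    # DISC1 supports KAL7
    rac1 = "normal" if kal7 == "normal" else "excess"  # KAL7 restrains Rac1
    pak = "normal" if rac1 == "normal" else "excess"   # excess Rac1 drives PAK
    if pak_inhibitor:
        pak = "normal"                                 # FRAX486 blocks PAK activity
    return "spines maintained" if pak == "normal" else "spines erased"

print(spine_outcome("low"))                      # spines erased
print(spine_outcome("low", pak_inhibitor=True))  # spines maintained
```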
By using FRAX486 to reduce the activity of PAK, the researchers were able to protect against the deterioration of the spines caused by too little DISC1, halting the process. This normalized the excess pruning and resulted in the restoration of missing spines. They were able to see this by peering into the brains of the mice with DISC1 mutations on the 35th and 60th day of their lives, the equivalent of adolescence and young adulthood.
Sawa, who is also director of the Johns Hopkins Schizophrenia Center, cautions that it has not yet been shown that PAK is elevated in the brains of people with schizophrenia. Thus, he says, it is important to validate these results by determining whether this haywire PAK cascade is also occurring in humans.
In the mice, the researchers also found that behavior improved when PAK inhibitors were used. The mice were tested for their reaction to noises: normally, an organism will react less to a strong, startling sound when it has first been primed by hearing a weaker one. In schizophrenia, the first noise has no impact on the reaction to the second one.
The mice in the study showed improvements in their reactions after being treated with the PAK inhibitor. The drug was given in small doses and appeared to be safe for the animals.
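The noise test described above is a version of the prepulse inhibition assay, typically scored as the percentage by which the weak leading sound dampens the startle response. A minimal sketch of that score; the startle amplitudes below are hypothetical, not measurements from the study.

```python
def percent_ppi(startle_alone, startle_after_prepulse):
    """Percent prepulse inhibition: how much a weak leading sound
    reduces the startle response to a loud one that follows it."""
    return 100.0 * (1.0 - startle_after_prepulse / startle_alone)

# Hypothetical startle amplitudes (arbitrary units)
normal_mouse = percent_ppi(startle_alone=120.0, startle_after_prepulse=48.0)
deficit_mouse = percent_ppi(startle_alone=120.0, startle_after_prepulse=110.0)

print(round(normal_mouse), round(deficit_mouse))  # 60 8
```

A higher score means stronger inhibition; the idea is that treated schizophrenia-model mice would move from something like the low score toward the high one.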
“Drugs aimed at treating a disease should be able to reverse an already existing defect as well as block future damage,” Sawa says. “This compound has the potential to do both.”
(Image: iStockphoto)


Computer Maps 21 Distinct Emotional Expressions—Even “Happily Disgusted”

Researchers at The Ohio State University have found a way for computers to recognize 21 distinct facial expressions—even expressions for complex or seemingly contradictory emotions such as “happily disgusted” or “sadly angry.”

(Image caption: Researchers at the Ohio State University have found a way for computers to recognize 21 distinct facial expressions — even expressions for complex or seemingly contradictory emotions. The study gives cognitive scientists more tools to study the origins of emotion in the brain. Here, a study participant makes three faces: happy (left), disgusted (center), and happily disgusted (right). Credit: Image courtesy of The Ohio State University.)

In the current issue of the Proceedings of the National Academy of Sciences, they report that they were able to more than triple the number of documented facial expressions that researchers can now use for cognitive analysis.

“We’ve gone beyond facial expressions for simple emotions like ‘happy’ or ‘sad.’ We found a strong consistency in how people move their facial muscles to express 21 categories of emotions,” said Aleix Martinez, a cognitive scientist and associate professor of electrical and computer engineering at Ohio State. “That is simply stunning. That tells us that these 21 emotions are expressed in the same way by nearly everyone, at least in our culture.”

The resulting computational model will help map emotion in the brain with greater precision than ever before, and perhaps even aid the diagnosis and treatment of mental conditions such as autism and post-traumatic stress disorder (PTSD).


Scientists discover a protein in nerves that determines which brain connections stay and which go
A newborn baby, for all its cooing cuddliness, is a data acquisition machine, absorbing information to finish honing the job of brain wiring that started before birth. This is true nowhere more so than the eyes, which start life peering at a blurry world and within months can make out a crisp, three-dimensional image of a mobile dangling overhead.
This process of refining the brain’s wiring involves cutting off some of the excess nerve connections we have at birth while strengthening connections we use all the time. Some estimates show that as many as half of the brain’s connections formed during development are clipped back as the final wiring takes shape.
Carla Shatz, the David Starr Jordan Director of Stanford Bio-X, and her team, including postdoctoral researcher Hanmi Lee and Bio-X Graduate Fellow Jaimie Adelson, recently found a protein that is essential for the brain to remove those excess connections. The team specifically showed a role for the protein in the developing visual system in mice, but the work appears to apply broadly across the developing brain. They published their findings online March 30 in the journal Nature.
Shatz said the discovery helps clear up something that has been a mystery to those who study brain development: How does the decision get made to eliminate some connections? It also settles a decade-long debate over whether the nervous system or the immune system is making those decisions. (Spoiler alert: It’s the nervous system.)
A single vision
"Vision is a challenging problem because you have two eyes and only one view of the world," said Shatz, who is the Sapp Family Provostial Professor and professor of biology and of neurobiology. "There’s a very beautiful set of wiring steps that makes sure the eyes are pointed at the same place and the two images get aligned."
Shatz said the rule of which connections the brain cuts back to create that single vision follows a simple mantra: “Fire together, wire together. Out of sync, lose your link.” Or rather, if early in life the left sides of both eyes see the same duck motif wallpaper, those neurons fire together and stay linked up. When the top of one eye and bottom of the other eye form a connection, the nerves fire out of sync, and the connection weakens and is eventually pruned back. Over time, the only connections that remain are between parts of the two eyes that are seeing the same thing.
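The mantra maps onto a simple Hebbian update rule: strengthen a connection whose two sides fire together more often than chance predicts, weaken one that fires out of sync, and prune it once its weight falls below a threshold. The sketch below is a toy illustration with invented spike trains and parameters, not the model used in the study.

```python
def update_weight(pre_spikes, post_spikes, w, lr=0.2, prune_below=0.05):
    """One Hebbian step: co-firing above chance strengthens the weight,
    co-firing below chance weakens it; a weight under threshold is
    pruned (returned as None)."""
    n = len(pre_spikes)
    co_rate = sum(a * b for a, b in zip(pre_spikes, post_spikes)) / n
    chance = (sum(pre_spikes) / n) * (sum(post_spikes) / n)
    w = w + lr * (co_rate - chance)
    return None if w < prune_below else w

# Invented spike trains (1 = fired): in-sync inputs vs. out-of-sync inputs
in_sync = ([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 1])
out_sync = ([1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1])

w_in, w_out = 0.1, 0.1
for _ in range(5):
    w_in = update_weight(*in_sync, w_in)
    if w_out is not None:
        w_out = update_weight(*out_sync, w_out)

print(round(w_in, 3), w_out)  # 0.322 None
```

The in-sync connection steadily strengthens, while the out-of-sync one decays and loses its link.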
The ability to detect which nerves fire out of sync and should therefore lose their link requires the protein Shatz’s team reported, which goes by the name of MHC Class I D, or D for short. This protein is one that is famous for its role in the immune system, but only in the past decade has Shatz’s team started building a case for D’s independent role in the brain.
Two camps, one protein
In 2000 Shatz first published work suggesting that a group of immune proteins called MHC in mice and HLA in people played a role in the developing nervous system. At the time, this caused a stir among immunologists, who were surprised to find their proteins showing up in the brain.
Lawrence Steinman, professor of neurology and neurological sciences and of pediatrics at Stanford School of Medicine, has followed Shatz’s work from the perspective of both a neurologist and immunologist. “One of the reasons that I think the research is so interesting is that it shows us that molecules thought to be the province of one group can be in another,” he said, adding, “It slowed the prevailing idea that people believed that some molecules were the domain of one camp.”
Shatz is in the privileged position of directing Stanford Bio-X, which includes faculty members and students from both immunology and the neurological sciences. She said being able to talk about her work and collaborate with this mix of colleagues has helped break down barriers in thinking about her unexpected findings.
After the initial discovery, Shatz went on to show that two of those MHC proteins – D and its sister protein K – seemed to be important in eliminating connections in the brain. Mice genetically engineered to lack both K and D had poorly functioning immune systems and also ended up with the visual system in a jumble, with unrelated parts of the two eyes forming connections. Without D and K the mice weren’t detecting which connections fired out of sync, so those connections didn’t lose their link.
After Shatz published that work, some immunologists argued that perhaps D and K were necessary for brain remodeling only because of their key function in the immune system. “They were saying that the immune system was telling the nervous system what to prune,” Shatz said.
It was a theory, but not one Shatz agreed with. Her feeling was that just because D and K were first found in the immune system didn’t mean they couldn’t have a unique role in the brain. “The nervous system has just as much right to these immune proteins as the immune system,” Shatz said. Her most recent work makes that point clear.
D on the brain
Shatz and her group worked with the mice that were lacking D and K everywhere, then used genetic engineering tricks to add D back, but only in the neurons. These mice still had poorly functioning immune systems, but had perfectly normal eye connections. In these mice, the nerves were able to determine which connections to cut and which to keep, even without the immune system.
Steinman said the work settles the issue of whether D is acting in the brain separate from its role in the immune system. “If Carla had studied MHC proteins before the immunologists, then we would consider them to be part of the nervous system. They clearly have major roles in both the nervous system and the immune system,” he said.
The group went on to show that the presence of D alters the composition of other proteins on the nerve cell surface that are in charge of receiving signals from other nerves. Her team thinks that it is this difference in how the nerve receives signals with or without D that makes the pruning process go awry.
Essentially, without D all nerve connections appear to be firing together and therefore they stay wired together.
Shatz says that in addition to explaining an important part of brain development, the work could also provide a new avenue for studying schizophrenia. Some studies have shown that people with mutations in the human genes related to D (called HLA genes) are more prone to the disease. Other studies have associated schizophrenia with improperly formed connections in the brain. Shatz suggests that this new role for D in the brain could mean that the pruning process has gone awry in schizophrenia. The group plans to explore this idea further, as well as to tease apart what D is doing to alter the composition of neurotransmitter receptors on the nerve cell surface.


Congenitally blind visualise numbers opposite way to sighted
For the first time, scientists have uncovered that people blind from birth visualise numbers the opposite way around to sighted people.
Through a recent study, researchers in the University of Bath’s Department of Psychology were surprised to find that the ‘mental number line’ for congenitally blind people ran in the opposite direction to that of sighted people, with larger numbers to the left and smaller numbers to the right.
Whereas a sighted person would count 1, 2, 3, 4, 5, the researchers have found that someone blind from birth mentally visualises their number line from right to left, effectively 5, 4, 3, 2, 1.
Senior Lecturer from the Department, Dr Michael Proulx explained: “Our unexpected results relate to the fact that people who were born visually impaired like to map the position of objects in relation to themselves.
“It is likely that this style of spatial representation extends to numbers too, and the right-handed participants mapped the number line from their dominant right hand.”
The study used a novel ‘random number generation’ procedure where volunteers were asked to say numbers while turning their head to the left or the right. This task is linked to how the brain visualises a mental number line.
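In this kind of task, the direction of a participant’s mental number line can be read off from whether larger numbers tend to be produced with leftward or rightward head turns. A toy version of that comparison; the trial data below are invented for illustration, not taken from the study.

```python
from statistics import mean

def number_line_direction(trials):
    """trials: (head_turn, number) pairs from a random-number-generation
    task. Returns which way the mental number line appears to run."""
    left = [n for turn, n in trials if turn == "left"]
    right = [n for turn, n in trials if turn == "right"]
    return "left-to-right" if mean(right) > mean(left) else "right-to-left"

# Invented trials: sighted-style vs. congenitally-blind-style responses
sighted_style = [("left", 2), ("right", 8), ("left", 3), ("right", 7)]
blind_style = [("left", 9), ("right", 1), ("left", 6), ("right", 4)]

print(number_line_direction(sighted_style))  # left-to-right
print(number_line_direction(blind_style))    # right-to-left
```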
As part of the study, an international team from Bath, Sabanci University (Turkey) and Taisho University (Japan) compared the responses of congenitally blind people with those of the adventitiously blind – people who were born with vision but later lost it – and of sighted, but blindfolded, volunteers.
Previous studies have shown that people in Western cultures, where writing runs from left to right, possess a similar mental number line, with small numbers on the left and larger numbers on the right. But in cultures where writing flows from right to left, for example in Arabic, people’s mental number lines run in the same right-to-left direction as their script. This is the first time scientists have shown that blind individuals in a Western culture also have a right-to-left number line.
Dr Proulx added: “Remembering and representing numbers is an important skill, and the foundation of mental maths. Visually impaired people are just as good, if not better, at mathematics than sighted people – the Georgian maths professor and Royal Society Fellow Nicholas Saunderson being one famous example.
“What makes this work exciting is that Saunderson may have been able to advance mathematics with an entirely different mental representation of numbers than that of sighted contemporaries like Isaac Newton.”


New respect for primary visual cortex
In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.
New respect for primary visual cortex

In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.

Many existing models of visual processing have dismissed V1 as a static filter, capable only of detecting objects’ edges and passively conveying this information to higher-order visual areas that do the hard work of learning, recognition, prediction, and cognition. But a new MIT study brings fresh respect for the lowly visual cortex: Building on growing evidence that V1 can do more than detect edges, neuroscientist Mark Bear and his postdoc Jeffrey Gavornik have shown that V1 is the site of a complex type of learning involving spatial-temporal sequences.

“We rely on spatial-temporal sequence learning for everything we do,” says Bear, the Picower Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and the senior author of the study, which appeared in the March 23 online edition of Nature Neuroscience. “It is how we predict what is coming next so that we can modify our behavior accordingly.”

Sequence learning — or a lack thereof — explains why driving on an unfamiliar road at night, with sparse visual information, is such a white-knuckle experience compared with driving more familiar routes that offer visual cues to predict the road ahead. It is also what allows baseball batters to hit balls traveling too fast to actually see: They do so using visual cues from the pitcher’s throw to predict the arc, trajectory, and timing based on past experience.

The value of V1

In the past decade, researchers have begun to chip away at the view of V1 as an immutable, passive brain region. Studies have shown, for example, that V1 can change in response to experience, a hallmark of plasticity. “Every new discovery allowed us to ask a new question that would have seemed outlandish before,” Bear says.

For the new study, the outlandish question was whether V1 could learn to recognize sequences. To find out, Gavornik designed experiments using gratings of black and white stripes in different orientations — the type of stimuli known to cause responses in V1 neurons. For a training sequence, he showed mice gratings in four different orientations — a combination labeled “ABCD” — in the same order 200 times a day for four days. Control mice saw randomly ordered sequences.

On the fifth day, Gavornik presented both the trained and the random sequences and measured the V1 neural responses. Among mice that had seen the learned sequence, ABCD, that sequence elicited a more powerful response than unfamiliar sequences, indicating that V1 had changed in response to experience.
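The training-and-probe logic of the experiment can be caricatured in a few lines of code. The toy model below is an illustrative sketch, not the authors' model, and all parameters are invented: it strengthens a weight for each stimulus-to-stimulus transition it sees, so after training the familiar order ABCD drives a larger summed response than an unfamiliar order built from the same elements.

```python
def train(sequence, days=4, reps_per_day=200, lr=0.001):
    """Toy Hebbian rule: nudge each seen transition's weight toward 1."""
    w = {}  # weight per ordered transition, e.g. ('A', 'B')
    for _ in range(days * reps_per_day):
        for pair in zip(sequence, sequence[1:]):
            w[pair] = w.get(pair, 0.0) + lr * (1.0 - w.get(pair, 0.0))
    return w

def response(w, sequence, baseline=1.0):
    """Summed 'population response': a baseline per stimulus plus learned boosts."""
    return baseline * len(sequence) + sum(w.get(p, 0.0) for p in zip(sequence, sequence[1:]))

weights = train("ABCD")
familiar = response(weights, "ABCD")   # trained order
novel = response(weights, "DCBA")      # same elements, unfamiliar order
```

Here the trained order ends up with `familiar > novel`, mirroring the stronger V1 response to the familiar sequence; an untrained order earns only its baseline, since none of its transitions were ever strengthened.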

Bear then altered the timing of the sequences and found that V1 also detected very precise temporal alterations. That makes sense, he notes: In real life, sequencing and timing are always coupled, so the brain must have a mechanism to respond to this pairing.

Implications for human disease

The most “mind-blowing” results of the study, Bear says, came from experiments testing the neural response when the second visual stimulus, “B,” was replaced with a gray screen following the first stimulus, “A.”

“The primary visual cortex responded as if B were there,” Bear says. “The recordings did not report on what the animal was seeing, but on what the animal was expecting to see.”

“V1 had formed a memory that B follows A, and it used that memory to predict what would happen next, after A,” Gavornik adds. “It is as if the mouse were [acting] based on previously learned visual cues.”

But did the experience-dependent plasticity evident in V1 actually arise there, or did it reflect feedback from a higher brain region that underwent a change? To find out, Gavornik injected a blocker of receptors for acetylcholine, a neurotransmitter also known to be important for memory formation. This treatment prevented learning in the targeted V1 region, indicating that the plasticity arises locally in V1.

“A disruption in acetylcholine signaling is one of the first things to go wrong in Alzheimer’s disease, and among the few approved treatments for this disease are drugs that promote the action of acetylcholine,” Bear says. “Our study raises the possibility of using visual sequence learning as a sensitive assay for earlier diagnosis of Alzheimer’s, when therapeutic interventions have a better chance of slowing the disease.”

Spatial-temporal sequence learning is also impaired in schizophrenia and dyslexia, but the origins of this impairment remain a mystery. “When we discover what is going on at a neural and molecular level, maybe we can understand better what happens in human disorders and look for new therapeutic approaches,” Gavornik says.

On a broader scale, the involvement of V1 in higher-level cognitive functions might have intrigued the renowned Spanish neuroscientist (and future Nobel laureate) Santiago Ramón y Cajal, who in 1899 speculated that despite significant heterogeneity, different regions of cortex still follow general principles. “Our study supports Cajal’s theory,” Bear says, “because we show that basic cortical computations may be fundamentally similar in higher and lower regions, even if they are used to serve different functions.”

Filed under primary visual cortex sequence learning learning V1 plasticity neurons neuroscience science

108 notes

Huntington’s disease: Study discovers potassium boost improves walking in mouse model
Tweaking a specific cell type’s ability to absorb potassium in the brain improved walking and prolonged survival in a mouse model of Huntington’s disease, reports a UCLA study published March 30 in the online edition of Nature Neuroscience. The discovery could point to new drug targets for treating the devastating disease, which strikes one in every 20,000 Americans.
Huntington’s disease is passed from parent to child through a mutation in the huntingtin gene. By killing brain cells called neurons, the progressive disorder gradually deprives patients of their ability to walk, speak, swallow, breathe and think clearly. No cure exists, and patients with aggressive cases can die in as little as 10 years.
The laboratories of Baljit Khakh, a professor of physiology and neurobiology, and Michael Sofroniew, a professor of neurobiology, teamed up at the David Geffen School of Medicine at UCLA to unravel the role played in Huntington’s by astrocytes—large, star-shaped cells found in the brain and spinal cord.
Read more

Filed under huntington's disease astrocytes huntingtin neurons animal model gene mutation neuroscience science

97 notes

Detecting Unidentified Changes
Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs through observers monitoring the global properties of the scene.
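The distinction between detecting and localising a change can be sketched as code: compare two images only through global summary statistics, here the mean and variance of luminance as stand-ins for the abstract's global properties. The statistics and threshold are invented for illustration; crossing the threshold reports that something changed while carrying no information about where.

```python
def global_stats(image):
    """Mean and variance of luminance for an image given as a list of rows."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

def change_detected(before, after, threshold=1.0):
    """Detection from global statistics alone: no identity or location recovered."""
    mean_b, var_b = global_stats(before)
    mean_a, var_a = global_stats(after)
    return abs(mean_b - mean_a) + abs(var_b - var_a) > threshold

scene   = [[10, 10, 10],
           [10, 10, 10]]
changed = [[10, 10, 10],
           [10, 10, 40]]  # one element brightened; location unknown to the detector
detected = change_detected(scene, changed)
```

The detector fires for the changed scene and stays silent for an identical one, yet nothing in its computation could say which pixel changed, which is the dissociation the paper argues for.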
Full Article
Filed under attention blindness visual awareness eye movements visual perception psychology neuroscience science

657 notes

Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane
Ever notice how Harry Potter’s T-shirt changes from a crewneck to a henley shirt in the “Order of the Phoenix,” or how in “Pretty Woman,” Julia Roberts’ croissant inexplicably morphs into a pancake? Don’t worry if you missed those continuity bloopers. Vision scientists at UC Berkeley and MIT have discovered an upside to the brain mechanism that can blind us to subtle visual changes in the movies and in the real world.
They’ve discovered a “continuity field” in which we visually merge together similar objects seen within a 15-second time frame, hence the previously mentioned jump from crewneck to henley goes largely unnoticed. Unlike in the movies, objects in the real world don’t spontaneously change from, say, a croissant to a pancake in a matter of seconds, so the continuity field is stabilizing what we see over time.
“The continuity field smoothes what would otherwise be a jittery perception of object features over time,” said David Whitney, associate professor of psychology at UC Berkeley and senior author of the study published today (March 30) in the journal Nature Neuroscience.
“Essentially, it pulls together physically but not radically different objects to appear more similar to each other,” Whitney added. “This is surprising because it means the visual system sacrifices accuracy for the sake of the continuous, stable perception of objects.”  
Conversely, without a continuity field, we may be hypersensitive to every visual fluctuation triggered by shadows, movement and myriad other factors. For example, faces and objects would appear to morph from moment to moment in an effect similar to being on hallucinogenic drugs, researchers said.
“The brain has learned that the real world usually doesn’t change suddenly, and it applies that knowledge to make our visual experience more consistent from one moment to the next,” said Jason Fischer, a postdoctoral fellow at MIT and lead author of the study, which he conducted while he was a Ph.D. student in Whitney’s Lab at UC Berkeley.
To establish the existence of a continuity field, the researchers had study participants view a series of bars, or gratings, on a computer screen. The gratings appeared at random angles once every five seconds.
Participants were instructed to adjust the angle of a white bar so that it matched the angle of each grating they just viewed. They repeated this task with hundreds of gratings positioned at different angles. The researchers found that instead of precisely matching the orientation of the grating, participants averaged out the angle of the three most recently viewed gratings.
“Even though the sequence of images was random, participants’ perception of any given image was biased strongly toward the past several images that came before it,” said Fischer, who calls this phenomenon “perceptual serial dependence.”
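The reported averaging translates directly into code. The toy observer below reports, for each grating, the simple mean of the three most recently viewed orientations; the window of three comes from the study, while equal weighting and the use of a plain rather than circular mean are simplifying assumptions.

```python
def perceived_orientations(angles, window=3):
    """Serial dependence: each report averages the last `window` gratings seen."""
    reports = []
    for i in range(len(angles)):
        recent = angles[max(0, i - window + 1): i + 1]
        reports.append(sum(recent) / len(recent))
    return reports

gratings = [10, 40, 10, 70]                 # true orientations, in degrees
reports = perceived_orientations(gratings)  # what the toy observer reports
```

The final report is (40 + 10 + 70) / 3 = 40 degrees rather than the true 70: the estimate is pulled toward the recent past, which is the bias the participants showed.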
In another experiment, researchers set the gratings far apart on the computer screen, and found that the participants did not merge together the angles when the objects were far apart. This suggests that the objects must be close together for the continuity effect to work.
For a comedic example of how we might see things if there were no continuity field, watch the commercial for MIO squirt juice.

Filed under visual perception continuity field visual system perceptual serial dependence neuroscience science

280 notes

The circadian clock is like an orchestra with many conductors
You’ve switched to the night shift and your weight skyrockets, or you wake at 7 a.m. on weekdays but sleep until noon on weekends—a social jet lag that can fog your Saturday and Sunday.
Life runs on rhythms driven by circadian clocks, and disruption of these cycles is associated with serious physical and emotional problems, says Orie Shafer, a University of Michigan assistant professor of molecular, cellular and developmental biology.
Now, new findings from Shafer and U-M doctoral student Zepeng Yao challenge the prevailing wisdom about how our body clocks are organized, and suggest that interactions among neurons that govern circadian rhythms are more complex than originally thought.
Yao and Shafer looked at the circadian clock neuron network in fruit flies, which is functionally similar to that of mammals but, at only 150 clock neurons, much simpler. Previously, scientists thought that a master group of eight clock neurons acted as a pacemaker for the remaining 142 clock neurons—think of a conductor leading an orchestra—thus imposing the rhythm of the fruit fly circadian clock. The same principle is thought to apply in mammals.
Interactions among clock neurons determine the strength and speed of circadian rhythms, Yao says. So, when the researchers genetically changed the clock speeds of only the eight master pacemakers, they could examine how well the conductor alone governed the orchestra. They found that, without environmental cues, the orchestra didn’t follow the conductor as closely as previously thought.
Some of the fruit flies completely lost their sense of time, and others simultaneously demonstrated two different sleep cycles, one following the group of eight neurons and the other following some other set of neurons.
“The finding shows that instead of the entire orchestra following a single conductor, part of the orchestra is following a different conductor or not listening at all,” Shafer said.
The findings suggest that instead of a group of master pacemaker neurons, the clock network consists of many independent clocks, each of which drives rhythms in activity. Shafer and Yao suspect that a similar organization will be found in mammals, as well.
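The conductor-and-orchestra picture can be made concrete with a toy network of phase oscillators; the periods, coupling strength, and simple update rule below are all invented for illustration. When coupling to the "conductor" is strong, the followers' phases stay bunched together; remove the coupling and each cell free-runs at its own intrinsic period, so the rhythms drift apart, much as the flies' split sleep cycles suggest.

```python
def run_clocks(periods, coupling, steps=1000, dt=0.1):
    """Advance phase oscillators; oscillator 0 is the 'conductor' pulling the rest."""
    phases = [0.0] * len(periods)
    for _ in range(steps):
        lead = phases[0]
        phases = [
            phase + dt / period + (coupling * (lead - phase) * dt if i else 0.0)
            for i, (phase, period) in enumerate(zip(phases, periods))
        ]
    return phases

periods = [24.0, 25.0, 26.0]                  # intrinsic periods, in 'hours'
locked = run_clocks(periods, coupling=0.5)    # conductor heard by all
split = run_clocks(periods, coupling=0.0)     # conductor ignored
spread_locked = max(locked) - min(locked)
spread_split = max(split) - min(split)
```

With coupling, the spread of phases stays small; without it, the spread grows steadily as each oscillator keeps its own time, an analogue of parts of the orchestra no longer listening to the conductor.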
“A better understanding of the circadian clock mechanisms will be critical for attempts to alleviate the adverse effects associated with circadian disorders,” Yao said.
Disrupting the circadian clock through shift work is associated with diabetes, obesity, stress, heart disease, mood disorders and cancer, among other disorders, Yao says. The International Agency for Research on Cancer has classified shift work that disrupts circadian rhythms as probably carcinogenic to humans.
Filed under circadian rhythms fruit flies clock neurons sleep cycle psychology neuroscience science
