Neuroscience

Articles and news from the latest research reports.

Children’s complex thinking skills begin forming before they go to school
New research from the University of Chicago and the University of North Carolina at Chapel Hill indicates that children begin to show signs of higher-level thinking skills as early as age 4½. Researchers have previously attributed the development of higher-order thinking to knowledge acquisition and better schooling, but the new longitudinal study shows that other skills, not always connected with knowledge, play a role in children’s ability to reason analytically.
The findings, reported in January in the journal Psychological Science, show for the first time that children’s executive function plays a role in the development of complex analytical thinking. Executive function encompasses skills such as planning, monitoring, task switching, and controlling attention. High executive function skills at school entry are related to higher-than-average reasoning skills in adolescence.
Growing research suggests that executive function may be trainable through several pathways, including preschool curricula, exercise, and impulse-control training. Parents and teachers may be able to encourage the development of executive function by having youngsters help plan activities; learn to stop, think, and then act; or engage in pretend play, said the study’s lead author, Lindsey Richland, assistant professor of comparative human development at the University of Chicago.
Although complex reasoning is important to a child’s education, “little is known about the cognitive mechanisms underlying children’s development of the capacity to engage in complex forms of reasoning,” Richland said.
The new research is reported in the paper “Early Executive Function Predicts Reasoning Development” and follows the development of complex reasoning in children from before the time they go to school until they are 15. Richland’s co-author is Margaret Burchinal, senior scientist at the Frank Porter Graham Child Development Institute at the University of North Carolina at Chapel Hill.
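The longitudinal relationship the study reports can be sketched with a simple least-squares fit. All numbers below are invented for illustration; they are not the study's data.

```python
# Illustrative sketch of predicting adolescent reasoning scores from
# early executive-function (EF) scores with ordinary least squares.
# The scores are hypothetical, invented purely for this example.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# Hypothetical scores: EF at school entry (x) vs. reasoning at 15 (y)
ef = [2.0, 3.0, 4.0, 5.0, 6.0]
reasoning = [48, 55, 61, 66, 74]

slope, intercept = fit_line(ef, reasoning)
print(slope > 0)  # True: higher early EF predicts higher later reasoning
```

A positive slope is the toy analogue of the reported association; the actual study of course controlled for knowledge and schooling in ways this sketch does not.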
(Image: Shutterstock)

Filed under children thinking analytical thinking executive function psychology neuroscience science

Learning and Memory May Play a Central Role in Synesthesia
People with color-grapheme synesthesia experience color when viewing written letters or numerals, usually with a particular color evoked by each grapheme (e.g., the letter ‘A’ evokes the color red). In a new study, researchers Nathan Witthoft and Jonathan Winawer of Stanford University present data from 11 color-grapheme synesthetes who had startlingly similar color-letter pairings that were traceable to childhood toys containing magnetic colored letters.
Their findings are published in Psychological Science, a journal of the Association for Psychological Science.
Matching data from the 11 participants showed reliably consistent letter-color matches, both within and between testing sessions (data collected online at http://www.synesthete.org/). Participants’ matches were consistent even after a delay of up to seven years since their first session.
Participants also performed a timed task, in which they were presented with colored letters for 1 second each and required to indicate whether the color was consistent with their synesthetic association. Their data show that they were able to perform the task rapidly and accurately.
Together, these data suggest that the participants’ color-letter associations are specific, automatic, and relatively constant over time, thereby meeting the criteria for true synesthesia.
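The consistency criterion described above can be sketched as a distance measure between a participant's color choices across sessions. The data below are invented; real studies use perceptual color spaces rather than raw RGB.

```python
# Hypothetical sketch of a letter-color consistency measure: compare a
# synesthete's color matches for the same letters across two sessions.
# All matches here are invented for illustration.

def rgb_distance(c1, c2):
    """Euclidean distance between two RGB colors (0-255 per channel)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def mean_match_distance(session1, session2):
    """Average color distance for the same letter across two sessions."""
    shared = set(session1) & set(session2)
    return sum(rgb_distance(session1[k], session2[k]) for k in shared) / len(shared)

# Invented matches: a consistent participant picks nearly the same
# color for each letter, even years apart.
s1 = {"A": (250, 10, 10), "B": (10, 10, 240), "C": (250, 240, 20)}
s2 = {"A": (245, 15, 12), "B": (12, 8, 235), "C": (248, 238, 25)}

print(mean_match_distance(s1, s2))  # small value => consistent matches
```

A small mean distance relative to a shuffled baseline is the kind of evidence used to establish within-subject consistency.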
The degree of similarity in the letter-color pairings across participants, along with the regular repeating pattern in the colors found in each individual’s letter-color pairings, indicates that the pairings were learned from the magnetic colored letters that the participants had been exposed to in childhood.
According to the researchers, these are the first and only data to show learned synesthesia of this kind in more than a single individual.
They point out that this does not mean that exposure to the colored letter magnets was sufficient to induce synesthesia in the participants, though it may have increased the chances. After all, many people who do not have synesthesia played with the same colored letter magnets as kids.
Based on their findings, Witthoft and Winawer conclude that a complete explanation of synesthesia must incorporate a central role for learning and memory.
(Image: Shutterstock)

Filed under synesthesia synesthetic association memory learning psychology science

Can You Smell Yourself?
You might not be able to pick your fingerprint out of an inky lineup, but your brain knows what you smell like. For the first time, scientists have shown that people recognize their own scent based on their particular combination of major histocompatibility complex (MHC) proteins, molecules similar to those used by animals to choose their mates. The discovery suggests that humans can also exploit the molecules to differentiate between people.
"This is definitely new and exciting," says Frank Zufall, a neurobiologist at Saarland University’s School of Medicine in Homburg, Germany, who was not involved in the work. "This type of experiment had never been done on humans before."
MHC peptides are found on the surface of almost all cells in the human body, helping inform the immune system that the cells are ours. Because a given combination of MHC peptides—called an MHC type—is unique to a person, it can help the body recognize invading pathogens and foreign cells. Over the past two decades, scientists have discovered that the molecules also foster communication between animals, including mice and fish. Stickleback fish, for example, choose mates with MHC types different from their own. Then, in 1995, researchers conducted the now famous “sweaty T-shirt study,” which concluded that women prefer the smell of men who have different MHC genes than their own. But no studies had shown a clear-cut physiological response to MHC proteins.
In the new work, Thomas Boehm, a biologist at the Max Planck Institute of Immunobiology and Epigenetics in Freiburg, Germany, and colleagues first tested whether women can recognize lab-made MHC proteins resembling their own. After showering, 22 women applied two different solutions to their armpits and decided which odor they liked better. The experiment was repeated two to six times for each participant. Women preferred to wear a synthetic scent containing their own MHC proteins, but only if they were nonsmokers and didn’t have a cold. The study did not determine which scents women preferred on other people, but past studies on perfume have shown that individuals prefer different smells on themselves than on others.
The researchers wanted to know whether the preferences were truly rooted in the brain’s response to the proteins. So next, they used functional magnetic resonance imaging to measure changes in the brains of 19 different women when they smelled the various solutions, in aerosol form puffed toward their noses. “Sure enough, there again was a clear difference between the response to self and non-self peptides,” Boehm says. “There was a particular region of the brain that was only activated by peptides resembling a person’s own MHC molecules.” The brain had a similar response to all non-self MHC combinations, suggesting that any preference for how other people smell is a preference for non-self, not for particular MHC types.
(Image: Getty)

Filed under brain proteins smell major histocompatibility complex human cells immune system science

Oxygen Chamber Can Boost Brain Repair
Stroke, traumatic injury, and metabolic disorder are major causes of brain damage and permanent disabilities, including motor dysfunction, psychological disorders, memory loss, and more. Current therapy and rehab programs aim to help patients heal, but they often have limited success.
Now Dr. Shai Efrati of Tel Aviv University’s Sackler Faculty of Medicine has found a way to restore a significant amount of neurological function in brain tissue thought to be chronically damaged — even years after the initial injury. Theorizing that high levels of oxygen could reinvigorate dormant neurons, Dr. Efrati and his fellow researchers, including Prof. Eshel Ben-Jacob of TAU’s School of Physics and Astronomy and the Sagol School of Neuroscience, recruited post-stroke patients for hyperbaric oxygen therapy (HBOT) — sessions in high-pressure chambers containing oxygen-rich air — which increases oxygen levels in the body tenfold.
Analysis of brain imaging showed significantly increased neuronal activity after a two-month period of HBOT treatment compared to control periods of non-treatment, reported Dr. Efrati in PLoS ONE. Patients experienced improvements such as a reversal of paralysis, increased sensation, and renewed use of language. These changes can make a world of difference in daily life, helping patients recover their independence and complete tasks such as bathing, cooking, climbing stairs, or reading a book.

Filed under brain brain injury brain tissue oxygen hyperbaric oxygen therapy neuroscience science

Right target, but missing the bull’s-eye for Alzheimer’s

Alzheimer’s disease is the most common cause of late-life dementia. The disorder is thought to be caused by a protein known as amyloid-beta, or Abeta, which clumps together in the brain, forming plaques that are thought to destroy neurons. This destruction starts early, too, and can presage clinical signs of the disease by up to 20 years.

For decades now, researchers have been trying, with limited success, to develop drugs that prevent this clumping. Such drugs require a “target” — a structure they can bind to, thereby preventing the toxic actions of Abeta.

Now, a new study out of UCLA suggests that while researchers may have the right target in Abeta, they may be missing the bull’s-eye. Reporting in the Jan. 23 issue of the Journal of Molecular Biology, UCLA neurology professor David Teplow and colleagues focused on a particular segment of a toxic form of Abeta and discovered a unique hairpin-like structure that facilitates clumping.

"Every 68 seconds, someone in this country is diagnosed with Alzheimer’s," said Teplow, the study’s senior author and principal investigator of the NIH-sponsored Alzheimer’s Disease Research Center at UCLA. "Alzheimer’s disease is the only one of the top 10 causes of death in America that cannot be prevented, cured or even slowed down once it begins. Most of the drugs that have been developed have either failed or only provide modest improvement of the symptoms. So finding a better pathway for these potential therapeutics is critical."

The Abeta protein is composed of a sequence of amino acids, much like “a pearl necklace composed of 20 different combinations of different colors of pearl,” Teplow said. One form of Abeta, Abeta40, has 40 amino acids, while a second form, Abeta42, has two extra amino acids at one end.

Abeta42 has long been thought to be the toxic form of Abeta, but until now, no one had understood how the simple addition of two amino acids made it so much more toxic than Abeta40.

In his lab, Teplow and his colleagues used computer simulations in which they looked at the structure of the Abeta proteins in a virtual world. The researchers first created a virtual Abeta peptide that only contained the last 12 amino acids of the entire 42–amino-acid-long Abeta42 protein. Then, said Teplow, “we just let the molecule move around in a virtual world, letting the laws of physics determine how each atom of the peptide was attracted to or repulsed by other atoms.”

By taking thousands of snapshots of the various molecular structures the peptides created, the researchers determined which structures formed more frequently than others. From those, they then physically created mutant Abeta peptides using chemical synthesis.
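The snapshot-counting idea described above can be illustrated with a toy model: discretize each simulated snapshot into a coarse conformation label and tally which labels recur most often. Everything below is invented for illustration; the actual study used all-atom simulations of the real peptide.

```python
# Toy sketch of counting conformations across simulation snapshots.
# The angle threshold labels and the sampling distribution are assumptions
# made purely for this example, not values from the study.
import random
from collections import Counter

random.seed(42)

def classify_snapshot(turn_angle_deg):
    """Coarse conformation label from one hypothetical backbone turn angle."""
    if turn_angle_deg < 60:
        return "extended"
    elif turn_angle_deg < 120:
        return "hairpin-turn"
    return "helical"

# Invented ensemble: most snapshots cluster near a turn-like geometry.
snapshots = [random.gauss(90, 30) for _ in range(5000)]
counts = Counter(classify_snapshot(a) for a in snapshots)

most_common, n = counts.most_common(1)[0]
print(most_common)  # the conformation sampled most often
```

Structures that dominate such a tally are the candidates one would then synthesize and test chemically, as the researchers did with their mutant peptides.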

"We studied these mutant peptides and found that the structure that made Abeta42 Abeta42 was a hairpin-like turn at the very end of the peptide of the whole Abeta protein," Teplow said.

The hairpin turn structure was not previously known in the detail revealed by the researchers, “so we feel our experiments were novel,” he said. “Our lab is the first to show that it is this specific turn that accounts for the special ability of Abeta42 to aggregate into clumps that we think kills neurons. Abeta40, the Abeta protein with two less amino acids at the end of the protein, did not do the same thing.”

The work of the Teplow laboratory may present the most relevant target yet for the development of drugs to fight Alzheimer’s disease, the researchers said.

(Source: uclahealth.org)

Filed under alzheimer's disease proteins drug development amyloid-beta science

Pavlov’s Rats? Rodents Trained to Link Rewards to Visual Cues
In experiments on rats outfitted with tiny goggles, scientists say they have learned that the brain’s initial vision processing center not only relays visual stimuli, but also can “learn” time intervals and create specifically timed expectations of future rewards. The research, by a team at the Johns Hopkins University School of Medicine and the Massachusetts Institute of Technology, sheds new light on learning and memory-making, the investigators say, and could help explain why people with Alzheimer’s disease have trouble remembering recent events. 
Results of the study, in the journal Neuron, suggest that connections within nerve cell networks in the vision-processing center can be strengthened by the neurochemical acetylcholine (ACh), which the brain is thought to secrete after a reward is received. Only nerve cell networks recently stimulated by a flash of light delivered through the goggles are affected by ACh, which in turn allows those nerve networks to associate the visual cue with the reward. Because brain structures are highly conserved in mammals, the findings likely have parallels in humans, they say.
“We’ve discovered that nerve cells in this part of the brain, the primary visual cortex, seem to be able to develop molecular memories, helping us understand how animals learn to predict rewarding outcomes,” says Marshall Hussain Shuler, Ph.D., assistant professor of neuroscience at the Institute for Basic Biomedical Sciences at the Johns Hopkins University School of Medicine. 
To maximize survival, an animal’s brain has to remember what cues precede a positive or negative event, allowing the animal to alter its behavior to increase rewards and decrease mishaps. In the Hopkins-MIT study, the researchers sought clarity about how the brain links visual information to more complex information about time and reward.
The prevailing theory, Hussain Shuler says, assumed that this connection was made in areas devoted to “high-level” processing, like the frontal cortex, which is known to be important for learning and memory. The primary visual cortex seemed simply to receive information from the eyes and “re-piece” the visual world together before presenting it to decision-making parts of the brain.
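The proposed mechanism, in which a reward-driven ACh signal strengthens only recently stimulated connections, resembles what computational neuroscience calls an eligibility trace. The sketch below is our illustration of that general idea, not the paper's model; the decay rate and ACh gain are arbitrary assumptions.

```python
# Minimal eligibility-trace sketch (illustrative, not the study's model):
# a visual cue marks a synapse as "eligible" via a decaying trace, and a
# later reward-triggered ACh signal strengthens synapses in proportion
# to their remaining trace. Constants are assumed values.

DECAY = 0.5  # fraction of the trace surviving each time step (assumed)

class Synapse:
    def __init__(self):
        self.weight = 1.0
        self.trace = 0.0

    def stimulate(self):
        self.trace = 1.0          # cue marks the synapse as eligible

    def step(self):
        self.trace *= DECAY       # eligibility fades over time

    def reward(self, ach=0.2):
        self.weight += ach * self.trace  # ACh strengthens eligible synapses

recently_cued, idle = Synapse(), Synapse()
recently_cued.stimulate()         # only this synapse saw the light flash
for s in (recently_cued, idle):
    s.step()
    s.reward()

print(recently_cued.weight > idle.weight)  # True: only the cued synapse grows
```

Because the trace decays, the same mechanism can also encode the interval between cue and reward, which is the timing aspect the study emphasizes.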

Filed under brain nerve cells primary visual cortex memory acetylcholine neuroscience science

New Brain Circuit Sheds Light on Development of Voluntary Movements
All parents know the infant milestones: turning over, learning to crawl, standing, and taking that first unassisted step. Achieving each accomplishment presumably requires the formation of new connections among subsets of the billions of nerve cells in the infant’s brain. But how, when and where those connections form has been a mystery.
Now researchers at Duke Medicine have begun to find answers. In a study reported Jan. 23, 2013, in the scientific journal Neuron, the research team describes the entire network of brain cells that are connected to specific motor neurons controlling whisker muscles in newborn mice.
A better understanding of such motor control circuits could help inform how human brains develop, potentially leading to new ways of restoring movement in people who suffer paralysis from brain injuries, or to the development of better prosthetics for limb replacement.
“Whiskers to mice are like fingers to humans, in that both are moving touch sensors,” said lead investigator Fan Wang, PhD, associate professor of cell biology and member of the Duke Institute for Brain Sciences. “Understanding how the mouse’s brain controls whisker movements may tell us about neural control of finger movements in people.”
Mice are active at night, so they rely heavily on whiskers to detect and discriminate objects in the dark, brushing their whiskers against objects in a rhythmic back-and-forth sweeping pattern referred to as “whisking.” But this whisking behavior does not appear until about two weeks after birth, when young mice start to explore the world outside their nest.
To learn how motor control of whiskers takes place, Wang and postdoctoral fellow Jun Takatoh used a new technique that takes advantage of the rabies virus’s ability to spread through connected nerve cells. The researchers engineered a disabled form of the virus (the same form used to vaccinate pets) to express a fluorescent protein, and traced its path through the network of brain cells directly connected to the motor neurons controlling whisker movement.
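The tracing logic can be sketched as a search over a directed connectivity graph: starting from the motor neurons and walking edges backward, the way a retrograde tracer spreads. The wiring below is invented for illustration (and allows multi-step spread, whereas the fluorescent labeling described here marks direct inputs).

```python
# Hedged sketch of retrograde circuit tracing as reverse-graph search.
# The neuron names and connections are hypothetical examples.
from collections import deque

# edges: presynaptic neuron -> postsynaptic neuron (invented wiring)
edges = [
    ("motor_cortex", "LPGi"),
    ("LPGi", "whisker_motor_neuron"),
    ("brainstem_pool_1", "whisker_motor_neuron"),
    ("brainstem_pool_2", "brainstem_pool_1"),
    ("visual_cortex", "superior_colliculus"),  # unrelated circuit
]

def upstream_of(target, edges):
    """All neurons with a direct or indirect path onto `target`
    (breadth-first search on reversed edges, mimicking tracer spread)."""
    reverse = {}
    for pre, post in edges:
        reverse.setdefault(post, []).append(pre)
    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for pre in reverse.get(node, []):
            if pre not in seen:
                seen.add(pre)
                queue.append(pre)
    return seen

labeled = upstream_of("whisker_motor_neuron", edges)
print(sorted(labeled))
```

Only neurons with a path onto the motor neurons end up "labeled"; unrelated circuits, like the invented visual pathway above, stay dark.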
“The precision of this mapping method allowed us to ask a key question, namely are parts of the whisker motor control circuitry not yet connected in newborn mice, and are such missing links added later to enable whisking?” Wang said.
By taking a series of pictures in the fluorescently labeled brains during the first two weeks after birth, the research team chronicled the developing circuits before and after mice start whisking.
“When we traced the circuit it was stunning in the sense that we didn’t realize there are so many pools of neurons located throughout the brainstem that are connected to whisker motor neurons,” said Wang. “It’s remarkable that a single motor neuron receives so many inputs, and somehow is able to integrate them.”
At the same time whisking movements emerge, motor neurons receive a new set of inputs from a region of the brainstem called the LPGi. A single LPGi neuron is connected to motor neurons on both sides of the face, putting them in perfect position to synchronize the movements of left and right whiskers.
To learn more about the new circuit formed between LPGi and motor neurons, Wang and Takatoh drew on the expertise of Duke colleague Richard Mooney, PhD, professor of neurobiology, and his student Anders Nelson. Together, the researchers were able to record the labeled neurons and found the LPGi neurons communicate with motor neurons using glutamate, the main neurotransmitter that stimulates the brain. They further discovered that LPGi neurons receive direct inputs from the motor cortex.
“This makes sense because exploratory whisking is a voluntary movement under control of the motor cortex,” Wang said. “Excitatory input is needed for initiating such movements, and LPGi may be critical for relaying signals from the motor cortex to whisker motor neurons.”
The researchers will next explore the connectivity by using genetic, viral and optical tools to see what happens when certain components of the circuits are activated or silenced during various motor tasks.

New Brain Circuit Sheds Light on Development of Voluntary Movements

All parents know the infant milestones: turning over, learning to crawl, standing, and taking that first unassisted step. Achieving each accomplishment presumably requires the formation of new connections among subsets of the billions of nerve cells in the infant’s brain. But how, when and where those connections form has been a mystery.

Now researchers at Duke Medicine have begun to find answers. In a study reported Jan. 23, 2013, in the scientific journal Neuron, the research team describes the entire network of brain cells that are connected to specific motor neurons controlling whisker muscles in newborn mice.

A better understanding of such motor control circuits could help inform how human brains develop, potentially leading to new ways of restoring movement in people who suffer paralysis from brain injuries, or to the development of better prosthetics for limb replacement.

“Whiskers to mice are like fingers to humans, in that both are moving touch sensors,” said lead investigator Fan Wang, PhD, associate professor of cell biology and member of the Duke Institute for Brain Sciences. “Understanding how the mouse’s brain controls whisker movements may tell us about neural control of finger movements in people.”

Mice are active at night, so they rely heavily on whiskers to detect and discriminate objects in the dark, brushing their whiskers against objects in a rhythmic back-and-forth sweeping pattern referred to as “whisking”. But this whisking behavior does not appear until about two weeks after birth, when young mice start to explore the world outside their nest.

To learn how motor control of whiskers takes place, Wang and postdoctoral fellow Jun Takatoh used a new technique that takes advantage of the rabies virus’s ability to spread through connected nerve cells. They engineered a disabled form of the virus, like that used to vaccinate pets, to express a fluorescent protein, allowing them to trace its path through the network of brain cells directly connected to the motor neurons controlling whisker movement.

“The precision of this mapping method allowed us to ask a key question, namely are parts of the whisker motor control circuitry not yet connected in newborn mice, and are such missing links added later to enable whisking?” Wang said.

By taking a series of pictures in the fluorescently labeled brains during the first two weeks after birth, the research team chronicled the developing circuits before and after mice start whisking.

“When we traced the circuit it was stunning in the sense that we didn’t realize there are so many pools of neurons located throughout the brainstem that are connected to whisker motor neurons,” said Wang. “It’s remarkable that a single motor neuron receives so many inputs, and somehow is able to integrate them.”

At the same time that whisking movements emerge, motor neurons begin to receive a new set of inputs from a region of the brainstem called the LPGi. A single LPGi neuron connects to motor neurons on both sides of the face, putting it in an ideal position to synchronize the movements of the left and right whiskers.

To learn more about the new circuit formed between LPGi and motor neurons, Wang and Takatoh drew on the expertise of Duke colleague Richard Mooney, PhD, professor of neurobiology, and his student Anders Nelson. Together, the researchers were able to record the labeled neurons and found the LPGi neurons communicate with motor neurons using glutamate, the main neurotransmitter that stimulates the brain. They further discovered that LPGi neurons receive direct inputs from the motor cortex.

“This makes sense because exploratory whisking is a voluntary movement under control of the motor cortex,” Wang said. “Excitatory input is needed for initiating such movements, and LPGi may be critical for relaying signals from the motor cortex to whisker motor neurons.”

The researchers will next explore the connectivity by using genetic, viral and optical tools to see what happens when certain components of the circuits are activated or silenced during various motor tasks.

Filed under nerve cells brain cells motor neurons whiskers neuroscience science

94 notes

Socially Isolated Rats are More Vulnerable to Addiction

Rats that are socially isolated during a critical period of adolescence are more vulnerable to addiction to amphetamine and alcohol, found researchers at The University of Texas at Austin. Amphetamine addiction is also harder to extinguish in the socially isolated rats.

These effects, which are described this week in the journal Neuron, persist even after the rats are reintroduced into the community of other rats.

“Basically the animals become more manipulatable,” said Hitoshi Morikawa, associate professor of neurobiology in the College of Natural Sciences. “They’re more sensitive to reward, and once conditioned the conditioning takes longer to extinguish. We’ve been able to observe this at both the behavioral and neuronal level.”

Morikawa said the negative effects of social isolation during adolescence have been well documented when it comes to traits such as anxiety, aggression, cognitive rigidity and spatial learning. What wasn’t clear until now is how social isolation affects the specific kind of behavior and brain activity that has to do with addiction.

“Isolated animals have a more aggressive profile,” said Leslie Whitaker, a former doctoral student in Morikawa’s lab and now a researcher at the National Institute on Drug Abuse. “They are more anxious. Put them in an open field and they freeze more. We also know that those areas of the brain that are more involved in conscious memory are impaired. But the kind of memory involved in addiction isn’t conscious memory. It’s an unconscious preference for the place in which you got the reward. You keep coming back to it without even knowing why. That kind of memory is enhanced by the isolation.”

Filed under social isolation addiction brain activity neuron adolescence neuroscience science

233 notes

Astrocytes Identified as Target for New Depression Therapy

Neuroscience researchers from Tufts University have found that our star-shaped brain cells, called astrocytes, may be responsible for the rapid improvement in mood in depressed patients after acute sleep deprivation. This in vivo study, published in the current issue of Translational Psychiatry, identified how astrocytes regulate a neurotransmitter involved in sleep. The researchers report that the findings may help lead to the development of effective and fast-acting drugs to treat depression, particularly in psychiatric emergencies.

Drugs are widely used to treat depression, but often take weeks to work effectively. Sleep deprivation, however, has been shown to be effective immediately in approximately 60% of patients with major depressive disorders. Although widely recognized as helpful, it is not always ideal because it can be uncomfortable for patients, and the effects are not long-lasting.

During the 1970s, research verified the effectiveness of acute sleep deprivation for treating depression, particularly deprivation of rapid eye movement sleep, but the underlying brain mechanisms were not known.

Most of what we understand of the brain has come from research on neurons, but neurons have partners in another, largely ignored type of cell called glia. Although glia were historically thought of as support cells for neurons, the Phil Haydon group at Tufts University School of Medicine has shown in animal models that one type of glia, called astrocytes, affects behavior.

Haydon’s team had established previously that astrocytes regulate responses to sleep deprivation by releasing neurotransmitters that regulate neurons. This regulation of neuronal activity affects the sleep-wake cycle. Specifically, astrocytes act on adenosine receptors on neurons. Adenosine is a chemical known to have sleep-inducing effects.

During our waking hours, adenosine accumulates and increases the urge to sleep, known as sleep pressure. Chemicals, such as caffeine, are adenosine receptor antagonists and promote wakefulness. In contrast, an adenosine receptor agonist creates sleepiness.

“In this study, we administered three doses of an adenosine receptor agonist to mice over the course of a night that caused the equivalent of sleep deprivation. The mice slept as normal, but the sleep did not reduce adenosine levels sufficiently, mimicking the effects of sleep deprivation. After only 12 hours, we observed that mice had decreased depressive-like symptoms and increased levels of adenosine in the brain, and these results were sustained for 48 hours,” said first author Dustin Hines, Ph.D., a post-doctoral fellow in the department of neuroscience at Tufts University School of Medicine (TUSM).

“By manipulating astrocytes we were able to mimic the effects of sleep deprivation on depressive-like symptoms, causing a rapid and sustained improvement in behavior,” continued Hines.

“Further understanding of astrocytic signaling and the role of adenosine is important for research and development of anti-depressant drugs. Potentially, new drugs that target this mechanism may provide rapid relief for psychiatric emergencies, as well as long-term alleviation of chronic depressive symptoms,” said Naomi Rosenberg, Ph.D., dean of the Sackler School of Graduate Biomedical Sciences and vice dean for research at Tufts University School of Medicine. “The team’s next step is to further understand the other receptors in this system and see if they, too, can be affected.”

(Image: Paul De Koninck)

Filed under brain cells neuronal activity sleep deprivation depression astrocytes neuroscience science

112 notes

Study of how eye cells become damaged could help prevent blindness

Light-sensing cells in the eye rely on their outer segment to convert light into neural signals that allow us to see. But because of its unique cylindrical shape, the outer segment is prone to breakage, which can cause blindness in humans. A study published by Cell Press on January 22nd in the Biophysical Journal provides new insight into the mechanical properties that cause the outer segment to snap under pressure. The new experimental and theoretical findings help to explain the origin of severe eye diseases and could lead to new ways of preventing blindness.

"To our knowledge, this is the first theory that explains how the structural rigidity of the outer segment can make it prone to damage," says senior study author Aphrodite Ahmadi of the State University of New York Cortland. "Our theory represents a significant advance in our understanding of retinal degenerative diseases."

The outer segment of photoreceptors consists of discs packed with a light-sensitive protein called rhodopsin. Discs made at nighttime are different from those produced during the day, generating a banding pattern that was first observed in frogs but is common across species. Mutations that affect photoreceptors often destabilize the outer segment and may damage its discs, leading to cell death, retinal degeneration, and blindness in humans. But until now, it was unclear which structural properties of the outer segment determine its susceptibility to damage.

To address this question, Ahmadi and her team examined tadpole photoreceptors under the microscope while subjecting them to fluid forces. They found that high-density bands packed with a high concentration of rhodopsin were very rigid, which made them more susceptible to breakage than low-density bands consisting of less rhodopsin. Their model confirmed their experimental results and revealed factors that determine the critical force needed to break the outer segment.
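
The intuition behind such a model can be illustrated with a standard beam-bending calculation (a simplified sketch, not the authors' actual model): if the outer segment is idealized as a uniform cylindrical cantilever of radius r and length L, a transverse force F at its tip produces a maximum bending stress at the base of σ = F·L·r / I, where I = πr⁴/4 is the cylinder's second moment of area. The segment snaps when σ exceeds its breaking stress, so the critical force is F = σ_break·πr³ / (4L). All numerical values below are made up for illustration, not measured photoreceptor properties.

```python
import math

def critical_tip_force(radius_m, length_m, breaking_stress_pa):
    """Tip force at which a uniform cylindrical cantilever snaps,
    from Euler-Bernoulli beam bending.

    Max bending stress at the base: sigma = M * r / I, with bending
    moment M = F * L and second moment of area I = pi * r**4 / 4.
    Setting sigma to the breaking stress and solving for F gives:
        F_crit = breaking_stress * pi * r**3 / (4 * L)
    """
    return breaking_stress_pa * math.pi * radius_m**3 / (4 * length_m)

# Hypothetical dimensions loosely in the range of a rod outer segment
# (~1 micrometer radius, ~25 micrometers long) with an invented
# breaking stress of 1 kPa:
f = critical_tip_force(radius_m=1e-6, length_m=25e-6, breaking_stress_pa=1e3)
print(f"critical tip force ~ {f:.2e} N")
```

Note the strong r³ dependence: small changes in geometry or stiffness shift the breaking threshold considerably, consistent with the finding that rigid, densely packed bands fail before more compliant ones.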

The findings support the idea that mutations causing rhodopsin to aggregate can destabilize the outer segment, eventually causing blindness. “Further refinement of the model could lead to novel ways to stabilize the outer segment and could delay the onset of blindness,” says Ahmadi.

Filed under retinal degeneration blindness photoreceptors eye cells neuroscience science
