Posts tagged neuroscience

Fronto-limbic brain activity during sleep is believed to support the consolidation of emotional memories in healthy adults. Attention deficit-hyperactivity disorder (ADHD) is accompanied by emotional deficits presumably caused by dysfunctional interplay of fronto-limbic circuits. This study aimed to examine the role of sleep in the consolidation of emotional memory in ADHD in the context of healthy development. Sixteen children with ADHD, 16 healthy children, and 20 healthy adults participated in this study. Participants completed an emotional picture recognition paradigm in sleep and wake control conditions. Each condition had an immediate (baseline) and a delayed (target) retrieval session. The emotional memory bias was baseline-corrected, and groups were compared in terms of sleep-dependent memory consolidation (sleep vs. wake). We observed an increased sleep-dependent emotional memory bias in healthy children compared to children with ADHD and healthy adults. Frontal oscillatory EEG activity (slow oscillations, theta) during sleep correlated negatively with emotional memory performance in children with ADHD. When data from healthy children and adults were combined, the correlation coefficients were positive and differed from those in children with ADHD. Since children displayed higher frontal EEG activity than adults, these data indicate a decline in sleep-related consolidation of emotional memory over healthy development. In addition, deficits in sleep-related selection between emotional and non-emotional memories in ADHD may exacerbate the daytime emotional problems often reported in the disorder.
In one of the first successful attempts at genetically engineering mosquitoes, HHMI researchers have altered the way the insects respond to odors, including the smell of humans and the insect repellant DEET. The research not only demonstrates that mosquitoes can be genetically altered using the latest research techniques, but paves the way to understanding why the insect is so attracted to humans, and how to block that attraction.

“The time has come now to do genetics in these important disease-vector insects. I think our new work is a great example that you can do it,” says Leslie Vosshall, an HHMI investigator at The Rockefeller University who led the new research, published May 29, 2013 in the journal Nature.
In 2007, scientists announced the completion of the full genome sequence of Aedes aegypti, the mosquito that transmits dengue and yellow fever. A year later, when Vosshall became an HHMI investigator, she shifted the focus of her lab from Drosophila flies to mosquitoes with the specific goal of genetically engineering the insects. Studying mosquitoes appealed to her because of their importance as disease carriers, as well as their unique attraction to humans.
Vosshall’s first target: a gene called orco, which her lab had deleted in genetically engineered flies 10 years earlier. “We knew this gene was important for flies to be able to respond to the odors they respond to,” says Vosshall. “And we had some hints that mosquitoes interact with smells in their environment, so it was a good bet that something would interact with orco in mosquitoes.”
Vosshall’s team turned to a genetic engineering tool called zinc-finger nucleases to specifically mutate the orco gene in Aedes aegypti. They injected the targeted zinc-finger nucleases into mosquito embryos, waited for them to mature, identified mutant individuals, and generated mutant strains that allowed them to study the role of orco in mosquito biology. The engineered mosquitoes showed diminished activity in neurons linked to odor-sensing. Then, behavioral tests revealed more changes.
When given a choice between a human and any other animal, normal Aedes aegypti will reliably buzz toward the human. But the mosquitoes with orco mutations showed reduced preference for the smell of humans over guinea pigs, even in the presence of carbon dioxide, which is thought to help mosquitoes respond to human scent. “By disrupting a single gene, we can fundamentally confuse the mosquito from its task of seeking humans,” says Vosshall. But they don’t yet know whether the confusion stems from an inability to sense a “bad” smell coming from the guinea pig, a “good” smell from the human, or both.
Next, the team tested whether the mosquitoes with orco mutations responded differently to DEET. When exposed to two human arms—one slathered in a solution containing 10 percent DEET, the active ingredient in many bug repellants, and the other untreated—the mosquitoes flew equally toward both arms, suggesting they couldn’t smell the DEET. But once they landed on the arms, they quickly flew away from the DEET-covered one. “This tells us that there are two totally different mechanisms that mosquitoes are using to sense DEET,” explains Vosshall. “One is what’s happening in the air, and the other only comes into action when the mosquito is touching the skin.” Such dual mechanisms had been discussed but had never been shown before.
Vosshall and her collaborators next want to study in more detail how the orco protein interacts with the mosquitoes’ odorant receptors to allow the insects to sense smells. “We want to know what it is about these mosquitoes that makes them so specialized for humans,” she says. “And if we can also provide insights into how existing repellants are working, then we can start having some ideas about what a next-generation repellant would look like.”
(Source: hhmi.org)

Early brain responses to words predict developmental outcomes in children with autism
The pattern of brain responses to words in 2-year-old children with autism spectrum disorder predicted the youngsters’ linguistic, cognitive and adaptive skills at ages 4 and 6, according to a new study.
The findings, published May 29 in PLOS ONE, are among the first to demonstrate that a brain marker can predict future abilities in children with autism.
“We’ve shown that the brain’s indicator of word learning in 2-year-olds already diagnosed with autism predicts their eventual skills on a broad set of cognitive and linguistic abilities and adaptive behaviors,” said lead author Patricia Kuhl, co-director of the University of Washington’s Institute for Learning & Brain Sciences.
“This is true four years after the initial test, and regardless of the type of autism treatment the children received,” she said.
In the study, 2-year-olds – 24 with autism and 20 without – listened to a mix of familiar and unfamiliar words while wearing an elastic cap that held sensors in place. The sensors measured brain responses to hearing words, known as event-related potentials.
The research team then divided the children with autism into two groups based on the severity of their social impairments and took a closer look at the brain responses. Youngsters with less severe symptoms had brain responses that were similar to the typically developing children, in that both groups exhibited a strong response to known words in a language area located in the temporal parietal region on the left side of the brain.
This suggests that the brains of children with less severe symptoms can process words in ways that are similar to children without the disorder.
In contrast, children with more severe social impairments showed brain responses more broadly over the right hemisphere, which is not seen in typically developing children of any age.
“We think this measure signals that the 2-year-old’s brain has reorganized itself to process words. This reorganization depends on the child’s ability to learn from social experiences,” Kuhl said. She cautioned that identifying a neural marker that predicts future autism diagnoses with assurance is still a ways off.
The researchers also tested the children’s language skills, cognitive abilities, and social and emotional development, beginning at age 2, then again at ages 4 and 6.
The children with autism received intensive treatment and, as a group, they improved on the behavioral tests over time. But the outcome for individual children varied widely and the more their brain responses to words at age 2 were like those of typically developing children, the more improvement in skills they showed by age 6.
In other studies, Kuhl has found that social interactions accelerate language learning in babies. Infants use social cues, such as tracking adults’ eye movements to learn the names of things, and must be interested in people to learn in this way. Paying attention to people is a way for babies to sort through all that is happening around them and serves as a gate to know what is important.
But with autism, social impairments impede children’s interest in, and ability to pick up on, social cues. They find themselves paying attention to many other things, especially objects as opposed to people.
“Social learning is what most humans are about,” Kuhl said. “If your brain can learn from other people in a social context you have the capability to learn just about anything.”
She hopes that the new findings will lead to brain measures that can be used much earlier in development – at 12 months or younger – to help identify children at risk for autism.
“This line of work may lead to new interventions applied early in development, when the brain shows its highest level of neural plasticity,” Kuhl said.

Neuroscientists Discover New Phase of Synaptic Development
Breakthrough Could Lead to Better Understanding of Learning and Memory
Students preparing for final exams might want to wait before pulling an all-night cram session — at least as far as their neurons are concerned. Carnegie Mellon University neuroscientists have discovered a new intermediate phase in neuronal development during which repeated exposure to a stimulus shrinks synapses. The findings are published in the May 8 issue of the Journal of Neuroscience.
It’s well known that synapses in the brain, the connections between neurons and other cells that allow for the transmission of information, grow when they’re exposed to a stimulus. New research from the lab of Carnegie Mellon Associate Professor of Biological Sciences Alison L. Barth has shown that in the short term, synapses get even stronger than previously thought, but then quickly go through a transitional phase where they weaken.
"When you think of learning, you think that it’s cumulative. We thought that synapses started small and then got bigger and bigger. This isn’t the case," said Barth, who also is a member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition. "Based on our data, it seems like synapses that have recently been strengthened are peculiarly vulnerable — more stimulation can actually wipe out the effects of learning.
"Psychologists know that for long-lasting memory, spaced training — like studying for your classes after every lecture, all semester long — is superior to cramming all night before the exam," Barth said. "This study shows why. Right after plasticity, synapses are almost fragile — more training during this labile phase is actually counterproductive."
Previous research from Barth’s lab established the biochemical mechanisms responsible for the strengthening of synapses in the neocortex, the part of the brain responsible for thought and language, but only measured the synapses after 24 hours. In the current study, post-doctoral student Jing A. Wen investigated how the synapses developed throughout the first 24 hours of exposure to a stimulus using a specialized transgenic mouse model created by Barth. The mouse senses its surroundings using only one whisker, a sensory imbalance that increases plasticity in the brain. Since each whisker is linked to a specific area of the cortex, researchers can easily track neuronal changes.
Wen found that during this first day of learning, synapses go through three distinct phases. In the initiation phase, synaptic plasticity is spurred on by NMDA receptors. Over the next 12 hours or so, the synapses get stronger and stronger. As the stimulus is repeated, the NMDA receptors change their function and start to weaken the synapses in what the researchers have called the labile phase. After a few hours of weakening, another receptor, mGluR5, initiates a stabilization phase during which the synapses maintain their residual strength.
Furthermore, the researchers found that they could maintain the super-activated state present at the beginning of the labile phase by stopping the stimulus altogether or by injecting a glutamate receptor antagonist drug at an optimal time point. The findings are analogous to those seen in many psychological studies that use spaced training to improve memory.
"While synaptic changes can be long lasting, we’ve found that in this initial period there are a number of different things we could play with," Barth said. "The discovery of this labile phase suggests there are ways to control learning through the manipulation of the biochemical pathways that maintain memory."

From trauma to tau - Researchers tie brain injury to toxic form of protein
University of Texas Medical Branch at Galveston researchers have uncovered what may be a key molecular mechanism behind the lasting damage done by traumatic brain injury.
The discovery centers on a particular form of a protein that neuroscientists call tau, which has also been associated with Alzheimer’s disease and other neurodegenerative conditions. Under ordinary conditions, tau is essential to neuron health, but in Alzheimer’s the protein aggregates into two abnormal forms: so-called “neurofibrillary tangles,” and collections of two, three, four, or more tau units known as “oligomers.”
Neurofibrillary tangles are not believed to be harmful, but tau oligomers are toxic to nerve cells. They are also thought to have an additional damaging property — when they come into contact with healthy tau proteins, they cause them to also clump together into oligomers, and so spread toxic tau oligomers to other parts of the brain.
Now, in experiments with laboratory rats, using novel antibodies developed at UTMB, scientists have found that traumatic brain injuries also generate tau oligomers. The destructive protein assemblages formed within four hours after injury and persisted for at least two weeks — long enough to suggest that they might contribute to lasting brain damage.
Significantly, the rats used in the experiments were normal, unlike the genetically modified animals used in most tau research. The findings are thus likely to be more relevant to human traumatic brain injuries.
“Although people have given some attention to the formation of neurofibrillary tangles after traumatic brain injury, we were the first to look at tau oligomers, because we have an antibody that allows us to separate them out and see how much of the total tau is the toxic species,” said Bridget Hawkins, lead author of a paper on the research now online in the Journal of Biological Chemistry. “We saw that it’s a substantial amount — enough to play an important role in the effects of traumatic brain injury.”
Those effects can include memory deficits, which have been recently shown by UTMB researchers to be induced by tau oligomers. Other long-term ramifications of TBI include seizures, and disruptions in the sleep-wake cycle. The UTMB scientists hypothesize that these problems could be avoided if physicians had a way to stop the process of tau oligomerization.
One possibility is a treatment based on the antibodies used to label tau oligomers in this project, which were created as part of an effort to develop a vaccine against different neurodegenerative disorders.
“We have antibodies that can specifically target these tau oligomers without interfering with the function of healthy tau,” said UTMB associate professor Rakez Kayed, the senior author on the paper. “This is a new approach — we’re starting by targeting them in animals — but we hope to eventually humanize these antibodies for clinical trials.”

New chemical approach to beat Alzheimer’s disease
Scientists at the University of Liverpool and Callaghan Innovation in New Zealand have developed a new chemical approach to help harness the natural ability of complex sugars to treat Alzheimer’s disease.
The team used a new chemical method to produce a library of sugars, called heparan sulphates, which are known to control the formation of the proteins in the brain that cause memory loss.
Chemically produced in the lab
Heparan sulphates are found in nearly every cell of the body, and are similar to the natural blood-thinning drug, heparin. Now scientists have discovered how to produce them chemically in the lab, and found that some of these sugars can inhibit an enzyme that creates small proteins in the brain.
These proteins, called amyloid, disrupt the normal function of cells leading to the progressive memory loss that is characteristic of Alzheimer’s disease.
Professor Jerry Turnbull, from the University’s Institute of Integrative Biology, said: “We are targeting an enzyme, called BACE, which is responsible for creating the amyloid protein. The amyloid builds up in the brain in Alzheimer’s disease and causes damage. BACE has proved to be a difficult enzyme to block despite lots of efforts by drug companies.”
“We are using a new approach, harnessing the natural ability of sugars, based on the blood-thinning drug heparin, to block the action of BACE.”
Dr Peter Tyler, from Callaghan Innovation, added: “We have developed new chemical methods that have allowed us to make the largest set of these sugars produced to date. These new compounds will now be tested to identify those with the best activity and fewest possible side effects, as these have potential for development into a drug treatment that targets the underlying cause of this disease.”
Current treatments only help symptoms
There are more than 800,000 people in the UK, and 50,000 in New Zealand living with dementia. Over half of these have Alzheimer’s disease, the most common cause of dementia. The cost of these diseases to the UK economy stands at £23 billion, more than the cost of cancer and heart disease combined. Current treatments for dementia can help with symptoms, but there are no drugs available that can slow or stop the underlying disease.

In some neurodegenerative diseases, and specifically in a devastating inherited condition called spinocerebellar ataxia 1 (SCA1), the answer may not be an “all-or-nothing,” said a collaboration of researchers from Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital and the University of Minnesota in a report that appears online in the journal Nature. The problem might be solved with just a little less.
"If you can only decrease the levels of ataxin-1 (the protein involved in SCA1) by 20 percent, you can reduce many symptoms of the disease," said Dr. Huda Zoghbi, professor of molecular and human genetics and pediatrics at BCM and director of the Neurological Research Institute. She is also a Howard Hughes Medical Institute Investigator.
Her long-time colleague Dr. Harry Orr, director of the University of Minnesota Institute for Translational Neuroscience, echoed that sentiment: “Perhaps, if you decrease the levels of the protein, you will decrease the severity of the disease.” In this report, the laboratories of Zoghbi, Dr. Juan Botas, also of BCM and the Neurological Research Institute, Dr. Thomas Westbrook, assistant professor of molecular and human genetics at BCM, and Orr identified a molecular pathway in the cell (RAS/MAPK/MSK1) with components that can be modulated slightly to reduce the levels of defective ataxin-1, the protein that causes disease in patients with the disorder.
Spinocerebellar ataxia 1 occurs when the ataxin-1 gene is mutated, with three letters of the DNA alphabet repeating many, many times. The abnormal protein that results cannot fold correctly and piles up in the cell, eventually killing it. As with many neurodegenerative disorders, the process can take over a decade. A person usually does not develop symptoms of this form of ataxia until he or she is 30 years old or older. The person develops gait problems, eventually loses the ability to speak and function and dies. Zoghbi and Orr teamed to find the gene associated with the disorder in 1993. Their work on the disease has spanned 20 years.
Totally eliminating the protein would not work. Mice that lack the gene have problems with learning and memory, indicating that ataxin-1 plays a role in those activities. Reducing the levels of ataxin-1 does not cure the disease, but it can significantly delay onset.
A Collaborative Innovation Award from the Howard Hughes Medical Institute enabled Zoghbi to put together the team that could screen for the genes or the gene pathway that could be manipulated to result in less ataxin-1.
"Harry and I had studied the disease and we had animal models. Botas, professor of molecular and human genetics at BCM, had a fruit fly model and Dr. Westbrook had a nice technology that enabled us to monitor ataxin-1 levels," said Zoghbi.
They began with a screen for genes that could affect the levels of ataxin-1 produced in the cell, said Dr. Ismail Al-Ramahi, a postdoctoral fellow in the lab of Botas. Dr. Jeehye Park, a post-doctoral fellow in Zoghbi’s laboratory, and Al-Ramahi are co-first authors of the report. Park and her colleagues carried out the screen in human cell lines and Al-Ramahi and his colleagues carried out the screen in fruit flies (Drosophila melanogaster).
The screen in human cells focused on enzymes called kinases because they are susceptible to the effects of drugs. Using a special technique called RNA silencing, they targeted each known human kinase. At the same time, Botas and Al-Ramahi screened kinase genes in fruit flies with a form of SCA1. When the two laboratories compared results, they found 10 genes in common that, when inhibited, could reduce the levels of ataxin-1 as well as the toxicity associated with it. The genes were part of the RAS/MAPK/MSK1 signaling cascade within the cell.
Then the researchers focused on one protein in this pathway called MSK1 and found that when its levels were decreased in mice that were laboratory models of SCA1, the levels of ataxin-1 dropped and the animals improved. That was the final experiment that proved that reducing levels of the protein could stave off the disease.
"We want to look for more pathways," said Zoghbi. If they find more pathways, they may be able to reduce toxicity. "If you have a pain and you take acetaminophen all the time, you have a risk of toxicity. Similarly, if you took a nonsteroidal anti-inflammatory all the time, you would have another toxicity. If you alternate between them, there is less toxicity. If we hit only one pathway with a big inhibition, we risk some toxicity. If we find two or three pathways and hit each only a little, the rest of the body should not be hurt. Each little hit should help us reduce ataxin-1 by a respectable amount."
"I think what is novel about this paper is the integration of the screen in cells that was done in Huda’s lab and the screen in fruit flies done in our lab to look for targets for genes about which we knew nothing ahead of time," said Botas.
While the finding in spinocerebellar ataxia 1 is exciting, its potential application in other diseases is even more provocative.
"Now that we know that it works with ataxin-1, we can revisit many proteins whose levels drive neurodegeneration in sporadic and inherited diseases such as Alzheimer’s, Parkinson’s, Huntington’s and other neurological disorders," said Zoghbi. "This is a pilot study and the results from it are as important as a new pathway in neurodegenerative disease research."
"These are diseases that take a long time to develop," said Park. "Most Alzheimer’s occurs after the age of 85. If we could delay it until age 95, that would be very helpful."
"This is getting us really close, not only for SCA1, but I think it’s going to be a guidepost for work on a lot of other neurodegenerative diseases," said Orr. "It sets out a beautiful research strategy to get at that goal."
(Source: bcm.edu)

Picking Up a Second Language Is Predicted by Ability to Learn Patterns
Some people seem to pick up a second language with relative ease, while others have a much more difficult time. Now, a new study suggests that learning to understand and read a second language may be driven, at least in part, by our ability to pick up on statistical regularities.
The study is published in Psychological Science, a journal of the Association for Psychological Science.
Some research suggests that learning a second language draws on capacities that are language-specific, while other research suggests that it reflects a more general capacity for learning patterns. According to psychological scientist and lead researcher Ram Frost of Hebrew University, the data from the new study clearly point to the latter:
“These new results suggest that learning a second language is determined to a large extent by an individual ability that is not at all linguistic,” says Frost.
In the study, Frost and colleagues used three different tasks to measure how well American students in an overseas program picked up on the structure of words and sounds in Hebrew. The students were tested once in the first semester and again in the second semester.
The students also completed a task that measured their ability to pick up on statistical patterns in visual stimuli. The participants watched a stream of complex shapes that were presented one at a time. Unbeknownst to the participants, the 24 shapes were organized into 8 triplets — the order of the triplets was randomized, though the shapes within each triplet always appeared in the same sequence. After viewing the stream of shapes, the students were tested to see whether they implicitly picked up the statistical regularities of the shape sequences.
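The triplet structure is easy to simulate. The sketch below is a hypothetical Python illustration (not the authors' actual stimulus code; the shape labels and repeat count are invented): it builds such a stream and shows the statistical signature participants could exploit — transitions within a triplet are perfectly predictable, while transitions between triplets are not.

```python
import random
from collections import defaultdict

random.seed(0)

# 24 shapes grouped into 8 fixed triplets (labels are arbitrary
# stand-ins for the complex shapes used in the experiment)
shapes = [f"shape{i}" for i in range(24)]
triplets = [shapes[i:i + 3] for i in range(0, 24, 3)]

def make_stream(n_repeats=10):
    """Concatenate the triplets in random order, n_repeats times.
    Within a triplet the order never changes; across triplets it does."""
    stream = []
    for _ in range(n_repeats):
        for triplet in random.sample(triplets, len(triplets)):
            stream.extend(triplet)
    return stream

stream = make_stream()

# Transition probabilities reveal the hidden structure.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(stream, stream[1:]):
    counts[a][b] += 1

def transition_prob(a, b):
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

print(transition_prob("shape0", "shape1"))  # within a triplet: always 1.0
print(transition_prob("shape2", "shape3"))  # across triplets: far below 1.0
```

A learner sensitive to these regularities can segment the continuous stream into triplets without any explicit instruction, which is what the recognition test probes.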
The data revealed a strong association between statistical learning and language learning: Students who were high performers on the shapes task tended to pick up the most Hebrew over the two semesters.
“It’s surprising that a short 15-minute test involving the perception of visual shapes could predict to such a large extent which of the students who came to study Hebrew would finish the year with a better grasp of the language,” says Frost.
According to the researchers, establishing a link between second language acquisition and a general capacity for statistical learning may have broad implications.
“This finding points to the possibility that a unified and universal principle of statistical learning can quantitatively explain a wide range of cognitive processes across domains, whether they are linguistic or nonlinguistic,” they conclude.
Art appreciation is measurable
Is it your own innate taste or what you have been taught that decides if you like a work of art? Both, according to an Australian-Norwegian research team.
Have you experienced seeing a painting or a play that has left you with no feelings whatsoever, whilst a friend thought it was beautiful and meaningful? Experts have argued for years about the feasibility of researching art appreciation, and what should be taken into consideration.
Neuroscientists believe that biological processes that take place in the brain decide whether one likes a work of art or not. Historians and philosophers say that this is far too narrow a viewpoint. They believe that what you know about the artist’s intentions, when the work was created, and other external factors, also affect how you experience a work of art.
Building bridges
A new model that combines both the historical and the psychological approach has been developed.
Eye-opening experience
“We know from earlier research that a painting that is difficult – yet possible – to interpret is felt to be more meaningful than a painting that one looks at and understands immediately. The painter Eugène Delacroix made use of this fact to depict war; Joseph Mallord William Turner did the same in ‘Snow Storm’. When you have to struggle to understand, you can have an eye-opening experience, which the brain appreciates,” explains Reber.
He hopes that other scientists will use the Australian-Norwegian model.
“By measuring brain activity, interviewing test subjects about their thoughts and reactions, and charting their artistic knowledge, it’s possible to gain new and exciting insight into what makes people appreciate good works of art. The model can be used for visual art, music, theatre and literature,” says Reber.
This beer-pouring robot is programmed to anticipate human actions
A robot in Cornell’s Personal Robotics Lab has learned to foresee human action in order to step in and offer a helping hand, or more precisely, roll in and offer a helping claw.
Understanding when and where to pour a beer or knowing when to offer assistance opening a refrigerator door can be difficult for a robot because of the many variables it encounters while assessing the situation. Well, a team from Cornell has created a solution.
Gazing intently with a Microsoft Kinect 3-D camera and using a database of 3D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future – such as eating, drinking, cleaning, putting away – and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
"We extract the general principles of how people behave," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it." The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.
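The "vocabulary of small parts" idea can be caricatured with a first-order Markov model. The sketch below is a hypothetical Python illustration, not the Cornell system (which learns far richer spatio-temporal models from 3-D video): the sub-activity labels and training sequences are invented, and anticipation is reduced to a most-probable-next-step lookup that refines as new observations arrive.

```python
from collections import Counter, defaultdict

# Invented training data: each sequence is a "big activity" broken
# into the small sub-activity parts the robot's vocabulary would hold.
training_sequences = [
    ["reach", "pour", "drink", "place"],
    ["reach", "pour", "drink", "drink", "place"],
    ["reach", "open_fridge", "take", "close_fridge"],
]

# Count how often each sub-activity follows another.
transitions = defaultdict(Counter)
for seq in training_sequences:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

def anticipate(observed):
    """Predict the most probable next sub-activity given the
    observations so far; called again as the action continues."""
    last = observed[-1]
    if not transitions[last]:
        return None
    return transitions[last].most_common(1)[0][0]

print(anticipate(["reach", "pour"]))         # -> "drink"
print(anticipate(["reach", "open_fridge"]))  # -> "take"
```

Each new frame of observation simply extends the `observed` list and re-queries the model, mirroring how the robot "constantly updates and refines its predictions" as an action unfolds.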
Saxena will join Cornell graduate student Hema S. Koppula as they present their research at the International Conference of Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference June 24-28 in Berlin, Germany.
In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent correct for three seconds and 57 percent correct for 10 seconds.
"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."