Neuroscience

November 2013

Nov 11, 2013 · 268 notes
#oxytocin #oxytocin receptor gene #loneliness #adolescence #neuroscience #genetics #science
Scientists discover that ants, like humans, can change their priorities

All animals have to make decisions every day. Where will they live and what will they eat? How will they protect themselves? They often have to make these decisions as a group, too, turning what may seem like a simple choice into a far more nuanced process. So, how do animals know what’s best for their survival?


For the first time, Arizona State University researchers have discovered that animals, at least in the case of ants, can change their decision-making strategies based on experience. They can also use that experience to weigh different options.

The findings are featured today in the early online edition of the scientific journal Biology Letters, as well as in its Dec. 23 edition.

Co-authors Taka Sasaki and Stephen Pratt, both with ASU’s School of Life Sciences, have studied insect collectives, such as ants, for years. Sasaki, a postdoctoral research associate, specializes in adapting psychological theories and experiments originally designed for humans to the study of ants, hoping to understand how the collective decision-making process arises out of individually ignorant ants.

“The interesting thing is we can make decisions and ants can make decisions – but ants do it collectively,” said Sasaki. “So how different are we from ant colonies?”

To answer this question, Sasaki and Pratt gave a number of Temnothorax rugatulus ant colonies a series of choices between two nests with differing qualities. In one treatment, the entrances of the nests had varied sizes, and in the other, the exposure to light was manipulated. Since these ants prefer both a smaller entrance size and a lower level of light exposure, they had to prioritize.

“It’s kind of like a human buying a house,” said Pratt, an associate professor with the school. “There’s so many options to consider – the size, the number of rooms, the neighborhood, the price, if there’s a pool. The list goes on and on. And for the ants it’s similar, since they live in cavities that can be dark or light, big or small. With all of these things, just like with a human house, it’s very unlikely to find a home that has everything you want.”

Pratt continued to explain that because it is impossible to find the perfect habitat, ants make various tradeoffs for certain qualities, ordering them in a queue of most important aspects. But, when faced with a decision between two different homes, the ants displayed a previously unseen level of intelligence.

According to their data, the series of choices the ants faced caused them to reprioritize their preferences based on the type of decision they faced. Ants that had to choose a nest based on light level prioritized light level over entrance size in the final choice. On the other hand, ants that had to choose a nest based on entrance size ranked light level lower in the later experiment.
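The reprioritization described here can be pictured with a toy model (entirely illustrative, not the authors' analysis): give each nest attribute a weight, let experience with a discrimination task boost that attribute's weight, and have the colony pick the nest with the best weighted score.

```python
# Toy model of the reprioritization (an illustration; not the authors'
# analysis). Each nest attribute has a weight; experience discriminating on
# one attribute boosts its weight; the colony picks the best weighted score.

def choose_nest(nests, weights):
    """nests: name -> {attribute: quality in [0, 1], higher is better}."""
    def score(nest):
        return sum(weights[a] * q for a, q in nest.items())
    return max(nests, key=lambda name: score(nests[name]))

def train(weights, attribute, boost=0.5):
    """Prior choices that hinged on one attribute raise its priority."""
    updated = dict(weights)
    updated[attribute] += boost
    return updated

# Nest A is darker but has a wide entrance; nest B is brighter but snug.
nests = {
    "A": {"darkness": 0.9, "small_entrance": 0.2},
    "B": {"darkness": 0.3, "small_entrance": 0.9},
}
naive = {"darkness": 1.0, "small_entrance": 1.0}

light_trained = train(naive, "darkness")
entrance_trained = train(naive, "small_entrance")

print(choose_nest(nests, light_trained))     # "A": light level now dominates
print(choose_nest(nests, entrance_trained))  # "B": entrance size now dominates
```

The same two nests yield opposite choices depending only on which attribute past decisions hinged on, which is the pattern the experiments report.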

This means that, like people, ants take the past into account when weighing options while making a choice. The difference is that ants somehow manage to do this as a colony without any dissent. While this research builds on groundwork previously laid down by Sasaki and Pratt, the newest experiments have already raised more questions.

“You have hundreds of these ants, and somehow they have to reach a consensus,” Pratt said. “How do they do it without anyone in charge to tell them what to do?”

Pratt likened individual ants to individual neurons in the human brain. Both play a key role in the decision-making process, but no one understands how every neuron influences a decision.

Sasaki and Pratt hope to delve deeper into the realm of ant behavior so that one day, they can understand how individual ants influence the colony. Their greater goal is to apply what they discover to help society better understand how humanity can make collective decisions with the same ease ants display.

“This helps us learn how collective decision-making works and how it’s different from individual decision-making,” said Pratt. “And ants aren’t the only animals that make collective decisions – humans do, too. So maybe we can gain some general insight.”

Nov 11, 2013 · 138 notes
#ants #learning #decision making #collective decision making #neuroscience #psychology #science
Simple Dot Test May Help Gauge the Progression of Dopamine Loss in Parkinson’s Disease

A pilot study by a multi-disciplinary team of investigators at Georgetown University suggests that a simple dot test could help doctors gauge the extent of dopamine loss in individuals with Parkinson’s disease (PD). Their study is being presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience.

“It is very difficult now to assess the extent of dopamine loss — a hallmark of Parkinson’s disease — in people with the disease,” says lead author Katherine R. Gamble, a psychology PhD student working with two Georgetown psychologists, a psychiatrist and a neurologist. “Use of this test, called the Triplets Learning Task (TLT), may provide some help for physicians who treat people with Parkinson’s disease, but we still have much work to do to better understand its utility,” she adds.

Gamble works in the Cognitive Aging Laboratory, led by the study’s senior investigator, Darlene Howard, PhD, Davis Family Distinguished Professor in the department of psychology and member of the Georgetown Center for Brain Plasticity and Recovery.

The TLT tests implicit learning, a type of learning that occurs without awareness or intent, which relies on the caudate nucleus, an area of the brain affected by loss of dopamine.

The test is a sequential learning task that does not require complex motor skills, which tend to decline in people with PD. In the TLT, participants see four open circles, see two red dots appear, and are asked to respond when they see a green dot appear. Unbeknownst to them, the location of the first red dot predicts the location of the green target. Participants learn implicitly where the green target will appear, and they become faster and more accurate in their responses.
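The task's structure can be sketched in a few lines. This is an illustrative simulation, not the Georgetown implementation; the cue-to-target mapping and the 90% regularity are assumptions:

```python
import random

# Illustrative simulation of the Triplets Learning Task structure (not the
# Georgetown implementation): four locations, two red cues, then a green
# target. The hidden mapping and probabilities below are assumptions.
random.seed(0)
LOCATIONS = [0, 1, 2, 3]
RULE = {0: 2, 1: 3, 2: 0, 3: 1}  # first red dot -> green target (assumed)

def make_trial(p_regular=0.9):
    cue1 = random.choice(LOCATIONS)
    cue2 = random.choice(LOCATIONS)  # second red dot carries no information
    if random.random() < p_regular:
        target = RULE[cue1]                # predictable trial
    else:
        target = random.choice(LOCATIONS)  # occasional random trial
    return cue1, cue2, target

class ImplicitLearner:
    """Tracks first-cue/target co-occurrences; never sees the rule itself."""
    def __init__(self):
        self.counts = {c: {t: 1 for t in LOCATIONS} for c in LOCATIONS}
    def predict(self, cue1):
        return max(self.counts[cue1], key=self.counts[cue1].get)
    def observe(self, cue1, target):
        self.counts[cue1][target] += 1

learner = ImplicitLearner()
correct = 0
trials = 2000
for _ in range(trials):
    cue1, _, target = make_trial()
    correct += learner.predict(cue1) == target
    learner.observe(cue1, target)

accuracy = correct / trials
print(round(accuracy, 2))  # climbs toward the 0.9 regularity as learning accrues
```

Like the participants, the learner becomes more accurate purely from exposure to the statistics of the trials, without ever being told the rule.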

Previous studies have shown that the caudate region in the brain underlies implicit learning. In the study, participants with PD implicitly learned the dot pattern with training, but dopamine loss appeared to impair that learning relative to healthy older adults.

“Their performance began to decline toward the end of training, suggesting that people with Parkinson’s disease lack the neural resources in the caudate, such as dopamine, to complete the learning task,” says Gamble.

In this study of 27 people with PD, the research team is now testing how implicit learning may differ across PD stages and drug doses.

“This work is important in that it may be a non-invasive way to evaluate the level of dopamine deficiency in PD patients, and which may lead to future ways to improve clinical treatment of PD patients,” explains Steven E. Lo, MD, associate professor of neurology at Georgetown University Medical Center, and a co-author of the study.

They hope the TLT may one day be a tool to help determine levels of dopamine loss in PD.

Nov 11, 2013 · 57 notes
#parkinson's disease #dopamine #caudate nucleus #Neuroscience 2013 #neuroscience #science
Research gives new insight into how antidepressants work in the brain

Research from Oregon Health & Science University’s Vollum Institute, published in the current issue of Nature (1, 2), is giving scientists a never-before-seen view of how nerve cells communicate with each other. That new view can give scientists a better understanding of how antidepressants work in the human brain — and could lead to the development of better antidepressants with few or no side effects.

The article in today’s edition of Nature came from the lab of Eric Gouaux, Ph.D., a senior scientist at OHSU’s Vollum Institute and a Howard Hughes Medical Institute Investigator. The article describes research that gives a better view of the structural biology of a protein that controls communication between nerve cells. The view is obtained through special structural and biochemical methods Gouaux uses to investigate these neural proteins.

The Nature article focuses on the structure of the dopamine transporter, which helps regulate dopamine levels in the brain. Dopamine is an essential neurotransmitter for the human body’s central nervous system; abnormal levels of dopamine are present in a range of neurological disorders, including Parkinson’s disease, drug addiction, depression and schizophrenia. Along with dopamine, the neurotransmitters noradrenaline and serotonin are transported by related transporters, which can be studied with greater accuracy based on the dopamine transporter structure.

The Gouaux lab’s more detailed view of the dopamine transporter structure better reveals how antidepressants act on the transporters and thus do their work.

The more detailed view could help scientists and pharmaceutical companies develop drugs that do a much better job of targeting what they’re trying to target — and not create side effects caused by a broader blast at the brain proteins.

"By learning as much as possible about the structure of the transporter and its complexes with antidepressants, we have laid the foundation for the design of new molecules with better therapeutic profiles and, hopefully, with fewer deleterious side effects," said Gouaux.

Gouaux’s latest dopamine transporter research is also important because it was done using the molecule from fruit flies, a dopamine transporter that is much more similar to the human one than the bacterial models that previous studies had used.

The dopamine transporter article was one of two articles Gouaux had published in today’s edition of Nature. The other article dealt with a modified amino acid transporter that mimics the mammalian neurotransmitter transporter proteins targeted by antidepressants. It gives new insights into the pharmacology of four different classes of widely used antidepressants that act on certain transporter proteins, including transporters for dopamine, serotonin and noradrenaline. The second paper was in part validated by findings of the first paper, in how an antidepressant bound itself to a specific transporter.

"What we ended up finding with this research was complementary and mutually reinforcing with the other work — so that was really important," Gouaux said. "And it told us a great deal about how these transporters work and how they interact with the antidepressant molecules."

Nov 11, 2013 · 218 notes
#science #antidepressants #nerve cells #dopamine #neurotransmission #neuroscience
Stress makes snails forgetful

New research on pond snails has revealed that high levels of stress can block memory processes. Researchers from the University of Exeter and the University of Calgary trained snails and found that when they were exposed to multiple stressful events they were unable to remember what they had learned.


Previous research has shown that stress also affects the human ability to remember. This study, published in the journal PLOS ONE, found that experiencing multiple stressful events simultaneously has a cumulative detrimental effect on memory.

Dr Sarah Dalesman, a Leverhulme Trust Early Career Fellow, from Biosciences at the University of Exeter, formerly at the University of Calgary, said: “It’s really important to study how different forms of stress interact as this is what animals, including people, frequently experience in real life. By training snails, and then observing their behaviour and brain activity following exposure to stressful situations, we found that a single stressful event resulted in some impairment of memory but multiple stressful events prevented any memories from being formed.”

The pond snail, Lymnaea stagnalis, has easily observable behaviours linked to memory and large neurons in the brain, both of which are advantageous when studying memory processes. They also respond to stressful events in a similar way to mammals, making them a useful model species for studying learning and memory.

In the study, the pond snails were trained to reduce how often they breathed outside water. Usually pond snails breathe underwater and absorb oxygen through their skin. In water with low oxygen levels the snails emerge and inhale air using a basic lung opened to the air via a breathing hole.

To train the snails not to breathe air they were placed in poorly oxygenated water and their breathing holes were gently poked every time they emerged to breathe. Snail memory was tested by observing how many times the snails attempted to breathe air after they had received their training. Memory was considered to be present if there was a reduction in the number of times they opened their breathing holes. The researchers also assessed memory by monitoring neural activity in the brain. 
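The memory readout described above reduces to a simple criterion. A minimal sketch, with an assumed reduction threshold rather than the study's actual cutoff:

```python
# Minimal sketch of the memory readout described above. The 25% reduction
# threshold is an assumption for illustration, not the study's criterion.

def memory_present(baseline_openings, test_openings, min_reduction=0.25):
    """Score memory as present if breathing-hole openings drop enough
    from the pre-training baseline to the post-training test."""
    if baseline_openings == 0:
        return False
    reduction = (baseline_openings - test_openings) / baseline_openings
    return reduction >= min_reduction

# Illustrative counts over a fixed observation period
print(memory_present(8, 3))  # True: openings fell by well over a quarter
print(memory_present(8, 7))  # False: too small a drop to count as memory
```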

Immediately before training, the snails were exposed to two different stressful experiences, low calcium - which is stressful as calcium is necessary for healthy shells - and overcrowding by other pond snails.

When faced with the stressors individually, the pond snails had reduced ability to form long term memory, but were still able to learn and form short and intermediate term memory lasting from a few minutes to hours. However, when both stressors were experienced at the same time, results showed that they had additive effects on the snails’ ability to form memory and all learning and memory processes were blocked. 

Future work will focus on the effects of stress on different populations of pond snail.

Nov 10, 2013 · 122 notes
#snail #lymnaea stagnalis #memory #neural activity #stress #neuroscience #science
Nov 10, 2013 · 143 notes
#stroke #constraint-induced therapy #hemiparesis #rehabilitation #video games #neuroscience #science
New Study Decodes Brain’s Process for Decision Making

When faced with a choice, the brain retrieves specific traces of memories, rather than a generalized overview of past experiences, from its mental Rolodex, according to new brain-imaging research from The University of Texas at Austin.


Led by Michael Mack, a postdoctoral researcher in the departments of psychology and neuroscience, the study is the first to combine computer simulations with brain-imaging data to compare two different types of decision-making models.

In one model — exemplar — a decision is framed around concrete traces of memories, while in the other model — prototype — the decision is based on a generalized overview of all memories lumped into a specific category.

Whether one model drives decisions more than the other has remained a matter of debate among scientists for more than three decades. But according to the findings, the exemplar model is more consistent with decision-making behavior.
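The two models can be contrasted with a toy classifier (illustrative only, not the study's fitted models): the exemplar model sums similarity to every stored trace, while the prototype model compares the stimulus against each category's average.

```python
import math

# Toy contrast between the two models (illustrative; not the study's fitted
# models). Stimuli are points in a 2-D feature space; a category is a list
# of stored example points.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def exemplar_classify(stimulus, categories):
    """Exemplar model: summed similarity to every individual memory trace."""
    def similarity(examples):
        return sum(math.exp(-dist(stimulus, e)) for e in examples)
    return max(categories, key=lambda c: similarity(categories[c]))

def prototype_classify(stimulus, categories):
    """Prototype model: distance to each category's average member."""
    def prototype(examples):
        n = len(examples)
        return (sum(e[0] for e in examples) / n, sum(e[1] for e in examples) / n)
    return min(categories, key=lambda c: dist(stimulus, prototype(categories[c])))

# Category A is bimodal, so its prototype (the average) sits where no actual
# member lives; category B has a single member between A's clusters.
categories = {
    "A": [(0.0, 0.0), (4.0, 4.0)],
    "B": [(1.0, 1.0)],
}
probe = (0.2, 0.1)  # very close to one stored A exemplar

print(exemplar_classify(probe, categories))   # "A": a concrete trace matches
print(prototype_classify(probe, categories))  # "B": A's average is far away
```

The probe shows why the debate matters: the two models can give different answers for the same stimulus, so behavior plus brain imaging is needed to tell which one the brain actually uses.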

The study was published this month in Current Biology. The authors include Alison Preston, associate professor in the Department of Psychology and the Center for Learning and Memory; and Bradley Love, a professor at University College London.

In the study, 20 respondents were asked to sort various shapes into two categories. During the task their brain activity was observed using functional magnetic resonance imaging (fMRI), allowing researchers to see how the respondents associate shapes with past memories.

According to the findings, behavioral research alone cannot determine whether a subject uses the exemplar or prototype model to make decisions. With brain-imaging analysis, researchers found that the exemplar model accounted for the majority of participants’ decisions. The results show three different regions associated with the exemplar model were activated during the learning task: occipital (visual perception), parietal (sensory) and frontal cortex (attention).

While processing new information, the brain stores concrete traces of experiences, allowing it to make different kinds of decisions, such as categorization information (is that a dog?), identification (is that John’s dog?) and recall (when did I last see John’s dog?).

To illustrate, Mack says: Imagine having a conversation with a friend about buying a new car. When you think of the category “car,” you’re likely to think of an abstract concept of a car, but not specific details. However, abstract categories are composed of memories from individual experiences. So when you imagine “car,” the abstract mental picture is actually derived from experiences, such as your friend’s white sedan or the red sports car you saw on the morning commute.

“We flexibly memorize our experiences, and this allows us to use these memories for different kinds of decisions,” Mack says. “By storing concrete traces of our experiences, we can make decisions about different types of cars and even specific past experiences in our life with the same memories.”

Mack says this new approach to model-based cognitive neuroscience could lead to discoveries in cognitive research.

“The field has struggled with linking theories of how we behave and act to the activation measures we see in the brain,” Mack says. “Our work offers a method to move beyond simply looking at blobs of brain activation. Instead, we use patterns of brain activation to decode the algorithms underlying cognitive behaviors like decision making.”

Nov 10, 2013 · 162 notes
#decision making #memory #brain activity #brain imaging #neuroscience #science
In Animal Study, “Cold Turkey” Withdrawal from Drugs Triggers Mental Decline

Can quitting drugs without treatment trigger a decline in mental health? That appears to be the case in an animal model of morphine addiction. Georgetown University Medical Center researchers say their observations suggest that managing morphine withdrawal could promote a healthier mental state in people.

“Over time, drug-abusing individuals often develop mental disorders,” says Italo Mocchetti, PhD, a professor of neuroscience. “It’s been thought that drug abuse itself contributes to mental decline, but our findings suggest that ‘quitting cold turkey’ can also lead to damage.”

In the study published in the November issue of Brain, Behavior and Immunity and presented at Neuroscience 2013, Mocchetti and his research colleagues treated the animals with morphine, or allowed them to undergo withdrawal by stopping the treatment. Then, they measured pro-inflammatory cytokines, which can promote damage and cell death, and the protein CCL5, which has various protective effects in the brain.

“Interestingly, we found that treating the addicted animals with morphine both increased the protective CCL5 protein while decreasing pro-inflammatory cytokines, suggesting a beneficial effect,” Mocchetti explains. The animals that weren’t treated during withdrawal had the opposite results — decreased CCL5 and increased levels of the damaging cytokines.

“From these findings, it appears that morphine withdrawal may be a causative factor that leads to mental decline, presenting an important avenue for research in how we can better help people who are trying to quit using drugs,” concludes Mocchetti.

Nov 9, 2013 · 101 notes
#morphine addiction #cytokines #morphine withdrawal #CCL5 #mental health #neuroscience #science
Nov 9, 2013 · 218 notes
#robotics #robotic leg #artificial limbs #prosthetics #CNS #technology #neuroscience #science
New Method Predicts Time from Alzheimer’s Onset to Nursing Home, Death

A Columbia University Medical Center-led research team has clinically validated a new method for predicting time to full-time care, nursing home residence, or death for patients with Alzheimer’s disease. The method, which uses data gathered from a single patient visit, is based on a complex model of Alzheimer’s disease progression that the researchers developed by consecutively following two sets of Alzheimer’s patients for 10 years each. The results were published online ahead of print in the Journal of Alzheimer’s Disease.


“Predicting Alzheimer’s progression has been a challenge because the disease varies significantly from one person to another—two Alzheimer’s patients may both appear to have mild forms of the disease, yet one may progress rapidly, while the other progresses much more slowly,” said senior author Yaakov Stern, PhD, professor of neuropsychology (in neurology, psychiatry, and psychology and in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain and the Gertrude H. Sergievsky Center) at CUMC. “Our method enables clinicians to predict the disease path with great specificity.”


Nov 8, 2013 · 62 notes
#alzheimer's disease #dementia #neurodegeneration #neuroscience #science
Nov 8, 2013 · 169 notes
#science #vegetative state #brain injury #brain damage #neuroimaging #neuroscience
Social symptoms in autistic children may be caused by hyper-connected neurons

The brains of children with autism show more connections than the brains of typically developing children do. What’s more, the brains of individuals with the most severe social symptoms are also the most hyper-connected. The findings, reported in two independent studies published in the Cell Press journal Cell Reports (1, 2) on November 7th, challenge the prevailing notion in the field that autistic brains lack neural connections.


The findings could lead to new treatment strategies and new ways to detect autism early, the researchers say. Autism spectrum disorder is a neurodevelopmental condition affecting nearly 1 in 88 children.

"Our study addresses one of the hottest open questions in autism research," said Kaustubh Supekar of Stanford University School of Medicine of his and his colleague Vinod Menon’s study aimed at characterizing whole-brain connectivity in children. "Using one of the largest and most heterogeneous pediatric functional neuroimaging datasets to date, we demonstrate that the brains of children with autism are hyper-connected in ways that are related to the severity of social impairment exhibited by these children."

In the second Cell Reports study, Ralph-Axel Müller and colleagues at San Diego State University focused specifically on neighboring brain regions to find an atypical increase in connections in adolescents with a diagnosis of autism spectrum disorder. That over-connection, which his team observed particularly in the regions of the brain that control vision, was also linked to symptom severity.

"Our findings support the special status of the visual system in children with heavier symptom load," Müller said, noting that all of the participants in his study were considered "high-functioning" with IQs above 70. He says measures of local connectivity in the cortex might be used as an aid to diagnosis, which today is based purely on behavioral criteria.

For Supekar and Menon, these new views of the autistic brain raise the intriguing possibility that epilepsy drugs might be used to treat autism.

"Our findings suggest that the imbalance of excitation and inhibition in the local brain circuits could engender cognitive and behavioral deficits observed in autism," Menon said. That imbalance is a hallmark of epilepsy as well, which might explain why children with autism so often suffer with epilepsy too.

"Drawing from these observations, it might not be too far fetched to speculate that the existing drugs used to treat epilepsy may be potentially useful in treating autism," Supekar said.

Nov 8, 2013 · 159 notes
#autism #ASD #neurons #neuroimaging #brain circuits #neuroscience #science
Nov 8, 2013 · 165 notes
#prefrontal cortex #neural networks #brain mapping #neurons #decision making #neuroscience #science
Nov 8, 2013 · 284 notes
#science #nerve cells #nerve injuries #dendrites #HDAC5 #neuroregeneration #axons #neurons #neuroscience
Nov 7, 2013 · 142 notes
#ASD #autism #eye contact #neurodevelopmental disorders #neuroscience #science
Monkeys Use Minds to Move Two Virtual Arms

In a study led by Duke researchers, monkeys have learned to control the movement of both arms on an avatar using just their brain activity.

The findings, published Nov. 6, 2013, in the journal Science Translational Medicine, advance efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients.

To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices. The brain-machine interface approach, pioneered at the Duke University Center for Neuroengineering in the early 2000s, holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.

“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” said senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.
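One classic way such recordings yield movement signals is population-vector decoding. The sketch below is an assumption for illustration, not the Duke decoder; it shows why hundreds of noisy, direction-tuned neurons are enough to recover a reach direction accurately.

```python
import math
import random

# Population-vector sketch (an illustration; not the Duke decoder): each
# simulated neuron has a preferred 2-D movement direction, and its firing
# rate varies with the cosine of the angle between the movement and that
# preference. Summing preferred directions weighted by rate recovers the
# movement direction. All tuning numbers below are assumptions.
random.seed(1)
N = 500  # on the order of the ~500 neurons recorded in the study

PREFS = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
         for i in range(N)]
BASELINE, GAIN = 10.0, 8.0

def rates(direction):
    """Noisy cosine tuning (assumed model)."""
    dx, dy = direction
    return [max(0.0, BASELINE + GAIN * (px * dx + py * dy) + random.gauss(0, 1))
            for px, py in PREFS]

def decode(observed):
    """Rate-weighted sum of preferred directions, normalized to a unit vector."""
    x = sum((r - BASELINE) * px for r, (px, _) in zip(observed, PREFS))
    y = sum((r - BASELINE) * py for r, (_, py) in zip(observed, PREFS))
    norm = math.hypot(x, y)
    return (x / norm, y / norm)

true_dir = (math.cos(0.7), math.sin(0.7))  # intended reach direction
est = decode(rates(true_dir))
dot = max(-1.0, min(1.0, est[0] * true_dir[0] + est[1] * true_dir[1]))
angle_error_deg = math.degrees(math.acos(dot))
print(round(angle_error_deg, 2))  # small angular error with 500 neurons
```

With only a handful of neurons the same decoder becomes unreliable, which is consistent with the study's point that large ensembles, not single neurons, carry the motor signal.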

The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, but were able to learn to use just their brain activity to move both avatar arms without moving their own arms.

As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggest that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.

The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.

The study suggests that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions. Small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.

“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”

Nicolelis is incorporating the study’s findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton, which is currently being developed, during the opening ceremony of the 2014 FIFA World Cup.

Nov 7, 2013 · 68 notes
#brain activity #prosthetics #bimanual movements #neurons #plasticity #neuroscience #science
Nov 7, 2013 · 175 notes
#anxiety #depression #neuroimaging #brain activity #frontal cortex #psychology #neuroscience #science
Nov 7, 2013 · 76 notes
#spatial learning #virtual reality #navigation #brain mapping #neuroscience #science
Nov 7, 2013 · 743 notes
#science #alzheimer's disease #dementia #neurodegeneration #language #bilingualism #neuroscience
Nov 6, 2013 · 280 notes
#music #repetition #earworm #psychology #neuroscience #science
Nov 6, 2013 · 336 notes
#music #language #speech #auditory cortex #sound perception #neuroscience #science
Nov 6, 2013 · 61 notes
#lateral intraparietal area #neural activity #neuronal noise #eye movements #neurons #neuroscience #science
Nov 6, 2013 · 84 notes
#brain mapping #MS #myelin #brain tissue #neuroimaging #neurological diseases #neuroscience #science
Torture Permanently Damages Normal Perception of Pain

TAU researchers study the long-term effects of torture on the human pain system


Israeli soldiers captured during the 1973 Yom Kippur War were subjected to brutal torture in Egypt and Syria. Held alone in tiny, filthy spaces for weeks or months, sometimes handcuffed and blindfolded, they suffered severe beatings, burns, electric shocks, starvation, and worse. And rather than receiving treatment, additional torture was inflicted on existing wounds.

Forty years later, research by Prof. Ruth Defrin of the Department of Physical Therapy in the Sackler Faculty of Medicine at Tel Aviv University shows that the ex-prisoners of war (POWs) continue to suffer from dysfunctional pain perception and regulation, likely as a result of their torture. The study — conducted in collaboration with Prof. Zahava Solomon and Prof. Karni Ginzburg of TAU’s Bob Shapell School of Social Work and Prof. Mario Mikulincer of the School of Psychology at the Interdisciplinary Center, Herzliya — was published in the European Journal of Pain.

"The human body’s pain system can either inhibit or excite pain. It’s two sides of the same coin," says Prof. Defrin. "Usually, when it does more of one, it does less of the other. But in Israeli ex-POWs, torture appears to have caused dysfunction in both directions. Our findings emphasize that tissue damage can have long-term systemic effects and needs to be treated immediately."

A painful legacy

The study focused on 104 combat veterans of the Yom Kippur War. Sixty of the men had been taken prisoner during the war; the other 44 had not. In the study, all were put through a battery of psychophysical pain tests — applying a heating device to one arm, submerging the other arm in a hot water bath, and pressing a nylon fiber into a middle finger. They also filled out psychological questionnaires.

The ex-POWs exhibited diminished pain inhibition (the degree to which the body eases one pain in response to another) and heightened pain excitation (the degree to which repeated exposure to the same sensation heightens the resulting pain). Based on these novel findings, the researchers conclude that the torture survivors’ bodies now regulate pain in a dysfunctional way.
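The two indices can be written down concretely. A minimal sketch with illustrative formulas and made-up ratings, not the study's exact protocol or data:

```python
# Sketch of the two indices described above, with illustrative formulas and
# made-up 0-100 pain ratings (not the study's exact protocol or data).

def pain_inhibition(test_alone, test_with_conditioning):
    """Conditioned pain modulation: how much a second, concurrent pain
    eases the first. Larger positive values = intact inhibition."""
    return test_alone - test_with_conditioning

def pain_excitation(ratings_over_repeats):
    """Temporal summation: how much pain builds up across repeated,
    identical stimuli. Larger positive values = stronger excitation."""
    return ratings_over_repeats[-1] - ratings_over_repeats[0]

healthy = {"inhibition": pain_inhibition(60, 45),           # pain eased by 15
           "excitation": pain_excitation([40, 42, 45, 47])} # mild build-up of 7
ex_pow = {"inhibition": pain_inhibition(60, 57),            # almost no easing
          "excitation": pain_excitation([40, 50, 60, 68])}  # strong build-up

print(healthy)
print(ex_pow)
```

The ex-POW pattern reported in the study corresponds to low values on the first index and high values on the second, i.e. dysfunction in both directions at once.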

It is not entirely clear whether the dysfunction is the result of years of chronic pain or of the original torture itself. But the ex-POWs exhibited worse pain regulation than the non-POW chronic pain sufferers in the study. And a statistical analysis of the test data also suggested that being tortured had a direct effect on their ability to regulate pain.

Head games

The researchers say non-physical torture may have also contributed to the ex-POWs’ chronic pain. Among other forms of oppression and humiliation, the ex-POWs were not allowed to use the toilet, cursed at and threatened, told demoralizing misinformation about their loved ones, and exposed to mock executions. In the later stages of captivity, most of the POWs were transferred to a group cell, where social isolation was replaced by intense friction, crowding, and loss of privacy.

"We think psychological torture also affects the physiological pain system," says Prof. Defrin. "We still have to fully analyze the data, but preliminary analysis suggests there is a connection."

Nov 6, 2013 · 247 notes
#torture #chronic pain #pain #psychology #neuroscience #science
Nov 6, 2013 · 230 notes
#chronic stress #stress #CNS #nervous system #inflammation #genes #genetics #neuroscience #science
Antidepressant drug induces a juvenile-like state in neurons of the prefrontal cortex

Brain development and maturation have long been thought to be a one-way process in which plasticity diminishes with age; the possibility that the adult brain can revert to a younger, more plastic state has seldom been considered. In a paper appearing on November 4 in the online open-access journal Molecular Brain, Dr. Tsuyoshi Miyakawa and his colleagues from Fujita Health University show that chronic administration of fluoxetine (FLX), a selective serotonin reuptake inhibitor and one of the most widely used antidepressants (known by trade names such as Prozac, Sarafem, and Fontex), can induce a juvenile-like state in specific types of neurons in the prefrontal cortex of adult mice.

In their study, FLX-treated adult mice showed reduced prefrontal-cortex expression of parvalbumin and perineuronal nets, molecular markers of maturation expressed in a certain group of mature neurons in adults, and increased expression of an immature marker that typically appears in developing juvenile brains. These findings suggest that certain types of adult neurons in the prefrontal cortex can partially regain a youth-like state; the authors termed this “induced-youth,” or iYouth. These researchers, as well as other groups, had previously reported similar effects of FLX in the hippocampal dentate gyrus, basolateral amygdala, and visual cortex, where they were associated with increased neural plasticity in certain types of neurons. This study is the first to report iYouth in the prefrontal cortex, a brain region critically involved in functions such as working memory, decision-making, personality expression, and social behavior, as well as in psychiatric disorders related to deficits in these functions.

Network dysfunction in the prefrontal cortex and limbic system, including the hippocampus and amygdala, is known to be involved in the pathophysiology of depressive disorders. Reversion to a youth-like state may mediate some of the therapeutic effects of FLX by restoring neural plasticity in these regions. On the other hand, undesirable aspects of FLX-induced pseudo-youth may play a role in certain behavioral effects associated with FLX treatment, such as aggression, violence, and psychosis, which have recently received attention as adverse effects of the drug. Interestingly, expression of the same molecular markers of maturation discussed in this study has been reported to be decreased in the prefrontal cortex of postmortem brains of patients with schizophrenia, raising the possibility that some of FLX’s adverse effects are attributable to iYouth in the same type of neurons in this region. Basic knowledge here is still lacking, and several questions remain unanswered: What are the molecular and cellular mechanisms underlying iYouth? How does iYouth differ from actual youth? Is iYouth good or bad? Future studies addressing these questions could potentially revolutionize the prevention and treatment of various neuropsychiatric disorders and help improve quality of life in an aging population.

Nov 6, 2013 · 243 notes
#antidepressants #neurons #prefrontal cortex #fluoxetine #neuroscience #science
Nov 5, 2013 · 94 notes
#brain injury #coma #brain activity #brain-machine interface #anesthesia #neuroscience #science
Brain Tumor Removal Through a Hole Smaller Than a Dime

More than two decades ago, Ryan Vincent had open brain surgery to remove a malignant brain tumor, resulting in a lengthy hospital stay and weeks of recovery at home. Recently, neurosurgeons at Houston Methodist Hospital removed a different lesion from Vincent’s brain through a tube inserted into a hole smaller than a dime and he went home the next day.

image

Gavin Britz, MBBCh, MPH, FAANS, chairman of neurosurgery at Houston Methodist Neurological Institute, used a minimally invasive technique to remove a vascular lesion from deep within the 44-year-old patient’s brain, the first such procedure performed in the region. Traditionally, surgical removal of vascular lesions or brain tumors located deep within the brain can itself cause damage.

“With this new approach, we can navigate through millions of important brain fibers and tracts to access deep areas of the brain where these benign tumors or hemorrhages are located with minimal injury to normal brain,” said Britz. “Ryan’s surgery took less than an hour.”

Houston Methodist neurosurgeons Britz and David Baskin, M.D., director of the Kenneth R. Peak Brain & Pituitary Tumor Center, are using this “six-pillar approach” that encompasses the latest technology in minimally-invasive surgeries — mapping of the brain; navigating the brain like a GPS system; safely accessing the brain and tumor/lesion; using high-end optics for visualization; successfully removing the tumor without disrupting tissues around it; and directed therapy using tissue collected for evaluation that can then be used for personalized treatments.

The new surgical technique is used to remove cancerous and non-cancerous tumors, lesions and cysts deep inside the brain. This approach reduces risks of damage to speech, memory, muscle strength, balance, vision, coordination and other function areas of the brain.

Nov 5, 2013 · 174 notes
#brain tumors #vascular lesion #brain mapping #medicine #science
Stem cells linked to cognitive gain after brain injury in preclinical study

A stem cell therapy previously shown to reduce inflammation in the critical time window after traumatic brain injury also promotes lasting cognitive improvement, according to preclinical research led by Charles Cox, M.D., at The University of Texas Health Science Center at Houston (UTHealth) Medical School.

The research was published in today’s issue of STEM CELLS Translational Medicine.

Cellular damage in the brain after traumatic injury can cause severe, ongoing neurological impairment and inflammation. Few pharmaceutical options exist to treat the problem. About half of patients with severe head injuries need surgery to remove or repair ruptured blood vessels or bruised brain tissue.

A stem cell treatment known as multipotent adult progenitor cell (MAPC) therapy has been found to reduce inflammation in mice immediately after traumatic brain injury, but no one had been able to gauge its usefulness over time.

The research team led by Cox, the Children’s Fund, Inc. Distinguished Professor of Pediatric Surgery at the UTHealth Medical School, injected two groups of brain-injured mice with MAPCs two hours after the mice were injured and again 24 hours later. One group received a dose of 2 million cells per kilogram and the other a dose five times stronger.

After four months, the mice receiving the stronger dose not only continued to have less inflammation—they also made significant gains in cognitive function. A laboratory examination of the rodents’ brains confirmed that those receiving the higher dose of MAPCs had better brain function than those receiving the lower dose.

“Based on our data, we saw improved spatial learning, improved motor deficits and fewer active antibodies in the mice that were given the stronger concentration of MAPCs,” Cox said.

The study indicates that intravenous injection of MAPCs may in the future become a viable treatment for people with traumatic brain injury, he said.

Nov 5, 2013 · 82 notes
#stem cells #TBI #head injury #multipotent adult progenitor cell #neuroscience #medicine #science
Researchers gain new insights into brain neuronal networks

A paper published in a special edition of the journal Science proposes a novel understanding of brain architecture using a network representation of connections within the primate cortex. Zoltán Toroczkai, professor of physics at the University of Notre Dame and co-director of the Interdisciplinary Center for Network Science and Applications, is a co-author of the paper “Cortical High-Density Counterstream Architectures.”

image

Using brain-wide, consistently collected tracer data, the researchers describe the cortex as a network of connections with a “bow tie” structure: a high-efficiency, dense core connected by “wings” of feed-forward and feedback pathways to the rest of the cortex (the periphery). The local circuits, which reach no farther than 2.5 millimeters yet account for more than 70 percent of all connections in the macaque cortex, are integrated across areas with different functional modalities (somatosensory, motor, cognitive) by medium- to long-range projections.

The authors also report on a simple network model that incorporates the physical principle of entropic cost to long wiring and the spatial positioning of the functional areas in the cortex. They show that this model reproduces the properties of the connectivity data in the experiments, including the structure of the bow tie. The wings of the bow tie emerge from the counterstream organization of the feed-forward and feedback nature of the pathways. They also demonstrate that, contrary to previous beliefs, such high-density cortical graphs can achieve simultaneously strong connectivity (almost direct between any two areas), communication efficiency, and economy of connections (shown via optimizing total wire cost) via weight-distance correlations that are also consequences of this simple network model.
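The model’s central ingredient, an entropic penalty on long-distance wiring, can be sketched in a few lines: place areas at random positions and let connection weight decay exponentially with distance. This is only an illustrative toy (function names, parameter values, and the 2D layout are invented for the example, not taken from the paper):

```python
import numpy as np

def distance_decay_network(n_areas=50, decay=2.0, seed=0):
    """Toy spatial network in the spirit of the paper's wiring-cost
    principle: connection weight falls off exponentially with the
    distance between areas. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 1, size=(n_areas, 2))             # area positions in 2D
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    w = np.exp(-decay * d)                                 # weight-distance correlation
    np.fill_diagonal(w, 0.0)                               # no self-connections
    return d, w

d, w = distance_decay_network()
# Below-median-distance links carry most of the total connection weight,
# echoing the dominant share of short, local circuits in the macaque data.
short = d < np.median(d[d > 0])
frac_short = w[short].sum() / w.sum()
print(f"fraction of weight in short-range links: {frac_short:.2f}")
```

Because every short link outweighs every long one under the exponential rule, the below-median-distance links end up carrying well over half the total weight, a crude analogue of the dense local core.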

This bow tie arrangement is a typical feature of self-organizing information processing systems. The paper notes that the cortex has some analogies with information-processing networks such as the World Wide Web, as well as metabolism, the immune system and cell signaling. The core-periphery bow tie structure, they say, is “an evolutionarily favored structure for a wide variety of complex networks” because “these systems are not in thermodynamic equilibrium and are required to maintain energy and matter flow through the system.” The brain, however, also shows important differences from such systems. For example, destination addresses are encoded in information packets sent along the Internet, apparently unlike in the brain, and location and timing of activity are critical factors of information processing in the brain, unlike in the Internet.

“Biological data is extremely complex and diverse,” Toroczkai said. “However, as a physicist, I am interested in what is common or invariant in the data, because it may reveal a fundamental organizational principle behind a complex system. A minimal theory that incorporates such principle should reproduce the observations, if not in great detail, but in extent. I believe that with additional consistent data, as those obtained by the Kennedy team, the fundamental principles of massive information processing in brain neuronal networks are within reach.”

Nov 5, 2013 · 62 notes
#cerebral cortex #neural networks #brain architecture #neuroscience #science
Learning and memory: How neurons activate PP1

A study in The Journal of Cell Biology describes how neurons activate the protein PP1, providing key insights into the biology of learning and memory.

PP1 is known to be a key regulator of synaptic plasticity, the phenomenon in which neurons remodel their synaptic connections in order to store and relay information—the foundation of learning and memory. But how PP1 is controlled has been unclear. Now, a team led by researchers from the LSU Health Science Center describes several mechanisms for PP1 regulation that close some major gaps in our understanding of its role in neuronal signaling.

Among the novel findings, the researchers describe how NMDA, a glutamate-receptor agonist, leads to activation of PP1. They show that when NMDA activates neuronal synapses, it switches off an enzyme, Cdk5, that would otherwise inhibit PP1, allowing PP1 to activate itself and promote synaptic remodeling. In addition, the researchers suggest that, despite its name, a regulatory protein called inhibitor-2 helps promote PP1 activity in neurons. Together, these findings significantly extend our understanding of how PP1 is regulated in the context of synaptic plasticity.

Nov 5, 2013 · 82 notes
#learning #memory #neurons #synaptic plasticity #NMDA #neuroscience #science
Brain aging is conclusively linked to genes; a crucial first step in finding biological mechanisms of normal aging

For the first time in a large study sample, the decline in brain function in normal aging is conclusively shown to be influenced by genes, say researchers from the Texas Biomedical Research Institute and Yale University.

image

“Identification of genes associated with brain aging should improve our understanding of the biological processes that govern normal age-related decline,” said John Blangero, Ph.D., a Texas Biomed geneticist and the senior author of the paper. The study, funded by the National Institutes of Health (NIH),  is published in the November 4, 2013 issue of the Proceedings of the National Academy of Sciences. David Glahn, Ph.D., an associate professor of psychiatry at the Yale University School of Medicine, is the first author on the paper.

In large pedigrees comprising 1,129 people aged 18 to 83, the scientists documented profound effects of aging, from young adulthood to old age, on neurocognitive ability and on measures of brain white matter, which actively affects how the brain learns and functions. Genetic material shared among biological relatives appeared to predict the observed changes in brain function with age.

Participants were enrolled in the Genetics of Brain Structure and Function Study and drawn from large Mexican American families in San Antonio. Brain imaging studies were conducted at the University of Texas Health Science Center at San Antonio Research Imaging Institute, directed by Peter Fox, M.D.

“The use of large human pedigrees provides a powerful resource for measuring how genetic factors change with age,” Blangero said.

By applying a sophisticated statistical analysis, the scientists demonstrated a heritable basis for neurocognitive deterioration with age. Similarly, decreasing white matter integrity with age was influenced by genes. The investigators further demonstrated that different sets of genes are responsible for these two biological aging processes.
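The variance-components machinery used for extended pedigrees is beyond a short example, but the core idea, that resemblance in a trait scales with shared genetic material, can be illustrated with a simulated midparent-offspring regression (a deliberately minimal sketch; every number below is invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h2_true = 2000, 0.6    # simulated families and true heritability

# Simulated genetic values: offspring inherit the average of the two
# parental values plus segregation noise, keeping variance at 1.
g_mother = rng.normal(0.0, 1.0, n)
g_father = rng.normal(0.0, 1.0, n)
g_child = 0.5 * (g_mother + g_father) + rng.normal(0.0, np.sqrt(0.5), n)

def phenotype(g):
    # phenotype = genetic value + independent environment, so that
    # genes explain a fraction h2_true of the phenotypic variance
    return np.sqrt(h2_true) * g + np.sqrt(1.0 - h2_true) * rng.normal(0.0, 1.0, len(g))

midparent = 0.5 * (phenotype(g_mother) + phenotype(g_father))
child = phenotype(g_child)

# The regression slope of offspring on midparent phenotype estimates
# narrow-sense heritability; here it should land near 0.6.
slope = np.cov(midparent, child)[0, 1] / np.var(midparent, ddof=1)
print(f"estimated heritability: {slope:.2f}")
```

Pedigree-based methods generalize this logic: instead of one parent-offspring relationship, they fit the whole matrix of expected genetic sharing among relatives at once.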

 “A key advantage of this study is that we specifically focused on large extended families and so we were able to disentangle genetic from non-genetic influences on the aging process,” said Glahn.

Nov 5, 2013 · 96 notes
#aging #white matter #alzheimer's disease #dementia #brain mapping #neuroscience #science
Nov 4, 2013 · 182 notes
#alzheimer's disease #parkinson's disease #Creutzfeldt-Jakob disease #multi-photon laser #amyloid protein #science
Nov 4, 2013 · 71 notes
#cognitive fatigue #neuroimaging #MS #diffusion tensor imaging #neuroscience #science
Important breakthrough in identifying the effect of epilepsy treatment

Fifty years after valproate was first discovered, research published today in the journal Neurobiology of Disease reports how the drug works to block seizure progression.

image

Valproate (marketed worldwide under names including Epilim, Depacon, Depakene, Depakote, Orlept, Episenta, Orfiril, and Convulex) is one of the world’s most widely prescribed treatments for epilepsy. Its effectiveness against epilepsy was discovered by accident in 1963 by a group of French scientists. Thousands of subsequent animal experiments failed to establish how valproate blocks seizures. Scientists from Royal Holloway and University College London have now identified how it does so, using a simple amoeba.

“The discovery of how valproate blocks seizures, initially using the social amoeba Dictyostelium, and then replicated using accepted seizure models, highlights the successful use of non-animal testing in biomedical research,” said Professor Robin Williams from the School of Biological Sciences at Royal Holloway.

“Sodium valproate is one of the most effective antiepileptic drugs in many people with epilepsy, but its use has been limited by side-effects, in particular its effect in pregnant women on the unborn child,” said Professor Matthew Walker from the Institute of Neurology at University College London. “Understanding valproate’s mechanism of action is a first step to developing even more effective drugs that lack many of valproate’s side-effects.”

“Our study also found that the decrease of a specific chemical in the brain at the start of the seizure causes even more seizure activity. This holds important implications for identifying underlying causes,” added Professor Williams.

Nov 4, 2013 · 91 notes
#epilepsy #seizures #valproate #antiepileptic drugs #medicine #science
Kessler researchers find aerobic exercise benefits memory in persons with MS

image

A research study headed by Victoria Leavitt, Ph.D. and James Sumowski, Ph.D., of Kessler Foundation, provides the first evidence for beneficial effects of aerobic exercise on brain and memory in individuals with multiple sclerosis (MS). The article, “Aerobic exercise increases hippocampal volume and improves memory in multiple sclerosis: Preliminary findings,” was released as an epub ahead of print on October 4 by Neurocase: The Neural Basis of Cognition. The study was funded by Kessler Foundation.

Hippocampal atrophy seen in MS is linked to the memory deficits that affect approximately 50% of individuals with MS. Despite the prevalence of this disabling symptom, there are no effective pharmacological or behavioral treatments. “Aerobic exercise may be the first effective treatment for MS patients with memory problems,” noted Dr. Leavitt, research scientist in Neuropsychology & Neuroscience Research at Kessler Foundation. “Moreover, aerobic exercise has the advantages of being readily available, low cost, self-administered, and lacking in side effects.” No beneficial effects were seen with non-aerobic exercise. Dr. Leavitt noted that the positive effects of aerobic exercise were specific to memory; other cognitive functions such as executive functioning and processing speed were unaffected.

The study’s participants were two MS patients with memory deficits who were randomized to non-aerobic (stretching) and aerobic (stationary cycling) conditions. Baseline and follow-up measurements were recorded before and after the treatment protocol of 30-minute exercise sessions 3 times per week for 3 months. Data were collected by high-resolution MRI (neuroanatomical volumes), fMRI (functional connectivity), and memory assessment. Aerobic exercise resulted in a 16.5% increase in hippocampal volume, a 53.7% increase in memory, and increased hippocampal resting-state functional connectivity. Non-aerobic exercise resulted in minimal change in hippocampal volume and no changes in memory or functional connectivity.

“These findings clearly warrant large-scale clinical trials of aerobic exercise for the treatment of memory deficits in the MS population,” said James Sumowski, Ph.D., research scientist in Neuropsychology & Neuroscience Research at Kessler Foundation.

Nov 3, 2013 · 73 notes
#MS #memory #hippocampus #aerobic exercise #neuroscience #science
Nov 3, 2013 · 556 notes
#science #visual perception #color perception #neuroimaging #neuroscience
Study finds a patchwork of genetic variation in the brain

It was once thought that each cell in a person’s body possesses the same DNA code and that the particular way the genome is read imparts cell function and defines the individual. For many cell types in our bodies, however, that is an oversimplification. Studies of neuronal genomes published in the past decade have turned up extra or missing chromosomes, as well as pieces of DNA that can copy and paste themselves throughout the genome.

The only way to know for sure that neurons from the same person harbor unique DNA is by profiling the genomes of single cells instead of bulk cell populations, the latter of which produce an average. Now, using single-cell sequencing, Salk Institute researchers and their collaborators have shown that the genomic structures of individual neurons differ from each other even more than expected. The findings were published November 1, 2013, in Science.

"Contrary to what we once thought, the genetic makeup of neurons in the brain aren’t identical, but are made up of a patchwork of DNA," says corresponding author Fred Gage, Salk’s Vi and John Adler Chair for Research on Age-Related Neurodegenerative Disease.

In the study, led by Mike McConnell, a former junior fellow in the Crick-Jacobs Center for Theoretical and Computational Biology at the Salk, researchers isolated about 100 neurons from three people posthumously. The scientists took a high-level view of the entire genome—looking for large deletions and duplications of DNA called copy number variations or CNVs—and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it wasn’t passed down from a parent. The CNVs are spread throughout the genome, the team found.
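To give a rough sense of how such CNVs are called from single-cell data: reads are counted in genomic bins, normalized to a diploid baseline, and bins whose inferred copy number deviates are flagged. The simulation below is a deliberately minimal sketch (bin counts and thresholds are invented; real pipelines add GC correction, segmentation, and careful amplification-bias controls):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate binned read counts for one cell at baseline copy number 2,
# then plant a deletion and a duplication (all numbers invented).
n_bins, mean_reads = 200, 100
counts = rng.poisson(mean_reads, n_bins).astype(float)
counts[60:80] *= 0.5    # 20-bin deletion: copy number ~1
counts[140:150] *= 1.5  # 10-bin duplication: copy number ~3

# Normalize to the diploid baseline and flag outlier bins.
copy_number = 2.0 * counts / np.median(counts)
deletions = np.where(copy_number < 1.5)[0]
duplications = np.where(copy_number > 2.5)[0]
print(f"bins flagged: {len(deletions)} deleted, {len(duplications)} duplicated")
```

The same normalize-and-threshold logic, applied cell by cell rather than to a pooled sample, is what makes the neuron-to-neuron differences visible at all.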

The minuscule amount of DNA in a single cell has to be chemically amplified many times before it can be sequenced. This process is technically challenging, so the team spent a year ruling out potential sources of error.

"A good bit of our study was doing control experiments to show that this is not an artifact," says Gage. "We had to do that because this was such a surprise—finding out that individual neurons in your brain have different DNA content."

The group found a similar amount of variability in CNVs within individual neurons derived from the skin cells of three healthy people. Scientists routinely use such induced pluripotent stem cells (iPSCs) to study living neurons in a culture dish. Because iPSCs are derived from single skin cells, one might expect their genomes to be the same.

"The surprising thing is that they’re not," says Gage. "There are quite a few unique deletions and amplifications in the genomes of neurons derived from one iPSC line."

Interestingly, the skin cells themselves are genetically different, though not nearly as much as the neurons. This finding, along with the fact that the neurons had unique CNVs, suggests that the genetic changes occur later in development and are not inherited from parents or passed to offspring.

It makes sense that neurons have more diverse genomes than skin cells do, says McConnell, who is now an assistant professor of biochemistry and molecular genetics at the University of Virginia School of Medicine in Charlottesville. “The thing about neurons is that, unlike skin cells, they don’t turn over, and they interact with each other,” he says. “They form these big complex circuits, where one cell that has CNVs that make it different can potentially have network-wide influence in a brain.”

Spontaneously occurring CNVs have also been linked to risk for brain disorders such as schizophrenia and autism, but those studies usually pool many blood cells. As a result, the CNVs uncovered in those studies affect many if not all cells, which suggests that they arise early in development.

The purpose of CNVs in the healthy brain is still unclear, but researchers have some ideas. The modifications might help people adapt to new surroundings encountered over a lifetime, or they might help us survive a massive viral infection. The scientists are working out ways to alter genomic variability in iPSC-derived neurons and challenge them in specific ways in the culture dish.

Cells with different genomes probably produce unique RNA and then proteins. However, for now, only one sequencing technology can be applied to a single cell.

"If and when more than one method can be applied to a cell, we will be able to see whether cells with different genomes have different transcriptomes (the collection of all the RNA in a cell) in predictable ways," says McConnell.

In addition, it will be necessary to sequence many more cells, and in particular, more cell types, notes corresponding author Ira Hall, an associate professor of biochemistry and molecular genetics at the University of Virginia. “There’s a lot more work to do to really understand to what level we think the things we’ve found are neuron-specific or associated with different parameters like age or genotype,” he says.

Nov 2, 2013 · 119 notes
#stem cells #induced pluripotent stem cells #neurons #genetics #genomics #neuroscience #science
Neuroscientists Determine How Treatment for Anxiety Disorders Silences Fear Neurons

Excessive fear can develop after a traumatic experience, leading to anxiety disorders such as post-traumatic stress disorder and phobias. During exposure therapy, an effective and common treatment for anxiety disorders, the patient confronts a fear or memory of a traumatic event in a safe environment, which leads to a gradual loss of fear. A new study in mice, published online today in Neuron, reports that exposure therapy remodels an inhibitory junction in the amygdala, a brain region important for fear in mice and humans. The findings improve our understanding of how exposure therapy suppresses fear responses and may aid in developing more effective treatments. The study, led by researchers at Tufts University School of Medicine and the Sackler School of Graduate Biomedical Sciences at Tufts, was partially funded by a New Innovator Award from the Office of the Director at the National Institutes of Health.

image

A fear-inducing situation activates a small group of neurons in the amygdala. Exposure therapy silences these fear neurons, and their reduced activity alleviates fear responses. The research team sought to understand exactly how exposure therapy achieves this silencing.

The researchers found that exposure therapy not only silences fear neurons but also induces remodeling of a specific type of inhibitory junction, called the perisomatic synapse. Perisomatic inhibitory synapses are connections between neurons that enable one group of neurons to silence another group of neurons. Exposure therapy increases the number of perisomatic inhibitory synapses around fear neurons in the amygdala. This increase provides an explanation for how exposure therapy silences fear neurons.
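The arithmetic of this mechanism can be caricatured in a toy rate model, where each added perisomatic inhibitory synapse subtracts a fixed amount of drive from the fear neuron (all numbers below are invented for illustration, not measurements from the study):

```python
def firing_rate(excitatory_drive, n_inhibitory_synapses, inh_strength=0.5):
    """Toy rate model: each perisomatic inhibitory synapse subtracts a
    fixed amount of drive, and the rate cannot fall below zero.
    Values are illustrative, not from the study."""
    return max(0.0, excitatory_drive - inh_strength * n_inhibitory_synapses)

before_therapy = firing_rate(10.0, 4)   # few inhibitory synapses: neuron fires
after_therapy = firing_rate(10.0, 25)   # more synapses after therapy: silenced
```

Past the point where inhibition outweighs excitation, adding synapses silences the neuron outright, which is the qualitative picture the imaging data suggest.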

“The increase in number of perisomatic inhibitory synapses is a form of remodeling in the brain. Interestingly, this form of remodeling does not seem to erase the memory of the fear-inducing event, but suppresses it,” said senior author, Leon Reijmers, Ph.D., assistant professor of neuroscience at Tufts University School of Medicine and member of the neuroscience program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts.

Reijmers and his team discovered the increase in perisomatic inhibitory synapses by imaging neurons activated by fear in genetically manipulated mice. Connections in the human brain responsible for suppressing fear and storing fear memories are similar to those found in the mouse brain, making the mouse an appropriate model organism for studying fear circuits.

Mice were placed in a box and experienced a fear-inducing situation to create a fear response to the box. One group of mice, the control group, did not receive exposure therapy. Another group of mice, the comparison group, received exposure therapy to alleviate the fear response. For exposure therapy, the comparison group was repeatedly placed in the box without experiencing the fear-inducing situation, which led to a decreased fear response in these mice. This is also referred to as fear extinction.

The researchers found that mice subjected to exposure therapy had more perisomatic inhibitory synapses in the amygdala than mice that did not receive it. Interestingly, this increase was found around the fear neurons that became silent after exposure therapy.

“We showed that the remodeling of perisomatic inhibitory synapses is closely linked to the activity state of fear neurons. Our findings shed new light on the precise location where mechanisms of fear regulation might act. We hope that this will lead to new drug targets for improving exposure therapy,” said first author, Stéphanie Trouche, Ph.D., a former postdoctoral fellow in Reijmers’ lab at Tufts and now a medical research council investigator scientist at the University of Oxford in the United Kingdom.

“Exposure therapy in humans does not work for every patient, and in patients that do respond to the treatment, it rarely leads to a complete and permanent suppression of fear. For this reason, there is a need for treatments that can make exposure therapy more effective,” Reijmers added.

Nov 2, 2013 · 245 notes
#PTSD #anxiety #amygdala #fear #neuroimaging #synapses #neurons #psychology #neuroscience #science
Synaptic transistor learns while it computes

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

The human mind, for all its phenomenal computing power, runs on roughly 20 watts (less than a household light bulb draws), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

image

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
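In biological synapses, this timing dependence is known as spike-timing-dependent plasticity (STDP). A minimal sketch of such a learning rule, with invented parameter values rather than the device’s measured response curve:

```python
import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Toy STDP rule: dt is the (post - pre) spike-time difference in ms.
    Tighter timing produces larger weight changes, loosely mirroring how
    the transistor's conductance depends on signal delay. All parameter
    values are illustrative."""
    if dt >= 0:    # pre fired just before post: strengthen the synapse
        return weight + a_plus * math.exp(-dt / tau)
    else:          # post fired before pre: weaken it, but not below zero
        return max(0.0, weight - a_minus * math.exp(dt / tau))

w_fast = stdp_update(0.5, 5.0)    # tight timing, larger potentiation
w_slow = stdp_update(0.5, 50.0)   # long delay, smaller potentiation
```

In the device, this update is not computed in software; the ion concentration in the nickelate channel plays the role of `weight` directly.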

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a voltage magnitude, which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”

He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.

“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.

For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.

“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”

“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”

Nov 2, 2013 · 83 notes
#AI #dendrites #synapses #synaptic transistor #learning #neurons #neuroscience #technology #science
Researchers identify molecule that orients neurons for high definition sensing

Many animals have highly developed senses, such as vision in carnivores, touch in mice, and hearing in bats. New research from the RIKEN Brain Science Institute has uncovered a brain molecule that can explain the existence of such finely-tuned sensory capabilities, revealing how brain cells responsible for specific senses are positioned to receive incoming sensory information.

The study, led by Dr. Tomomi Shimogori and published in the journal Science, sought to uncover the molecule that enables high acuity sensing by examining brain regions that receive information from the senses. They found that areas responsible for touch in mice and vision in ferrets contain a protein called BTBD3 that optimizes neuronal shape to receive sensory input more efficiently.

Neurons have a highly specialized shape, sending signals through one long projection called an axon, while receiving signals from many branch-like projections called dendrites. The final shape and connections to other neurons are typically completed after birth. Some neurons have dendrites distributed equally all around the cell body, like a starfish, while in others they extend only from one side, like a squid, steering towards axons that are actively bringing in information from the peripheral nerves. It was previously unknown what enables neurons to have highly oriented dendrites.

“We were fascinated by the dendrite patterning changes that occurred during the early postnatal stage that is controlled by neuronal input,” says Dr. Shimogori. “We found a fundamental process that is important to remove unnecessary dendrites to prevent mis-wiring and to make efficient neuronal circuits.”

The researchers searched for genes that are active exclusively in the mouse somatosensory cortex, the brain region responsible for the sense of touch. They found that the gene coding for the protein BTBD3 was active in neurons of the barrel cortex, which receives input from the whiskers (the highly sensitive tactile sensors of mice), and that these neurons had unidirectional dendrites.

Using gene manipulations in embryonic mouse brains, the authors found that eliminating BTBD3 made dendrites distribute uniformly around neurons in the mouse barrel cortex. In contrast, artificially introducing BTBD3 into the visual cortex of mice, where it is not normally found, reoriented the normally symmetrically positioned dendrites to one side. The same mechanism shaped neurons in the visual cortex of ferrets, which, unlike that of the mouse, contains BTBD3.

“High acuity sensory function may have been enabled by the evolution of BTBD3 and related proteins in brain development,” adds Dr. Shimogori. “Finding BTBD3 selectively in the visual and auditory cortex of the common marmoset, a species that relies heavily on high acuity vocal and visual communication for survival, and in mouse, where it is expressed in high-acuity tactile and olfactory areas, but not in low acuity visual cortex, supports this idea.” The authors plan to examine their theory by testing sensory function in mice without BTBD3 gene expression.

Nov 1, 2013 · 95 notes
#neurons #dendrites #brain development #BTBD3 #sensory information #neural circuits #neuroscience #science
Brain Connectivity Can Predict Epilepsy Surgery Outcomes

A discovery from Case Western Reserve and Cleveland Clinic researchers could give epilepsy patients invaluable advance guidance about their chances of improving symptoms through surgery.

Assistant Professor of Neurosciences Roberto Fernández Galán, PhD, and his collaborators have identified a new, far more accurate way to determine precisely which portions of the brain suffer from the disease. This information can give patients and physicians better guidance on whether temporal lobe surgery will provide the results they seek.

“Our analysis of neuronal activity in the temporal lobe allows us to determine whether it is diseased, and therefore, whether removing it with surgery will be beneficial for the patient,” said Galán, the paper’s senior author. “In terms of accuracy and efficiency, our analysis method is a significant improvement relative to current approaches.”

The findings appear in research published October 30 in the open access journal PLOS ONE.

About one-third of patients with temporal lobe epilepsy do not respond to medical treatment and opt for lobectomies to alleviate their symptoms. Yet the surgery’s success rate is only 60 to 70 percent because of the difficulty of identifying the diseased brain tissue prior to the procedure.

Galán and investigators from Cleveland Clinic determined that using intracranial electroencephalography (iEEG) to measure patients’ functional neural connectivity – that is, the communication from one brain region to another – identified the epileptic lobe with 87 percent accuracy. An iEEG records electrical activity with electrodes implanted in the brain. Key indicators of a diseased lobe are weak and similar connections.

In the retrospective study, Galán and Arun Antony, MD, formerly a senior clinical fellow in the Epilepsy Center at Cleveland Clinic and now an assistant professor of neurology at the University of Pittsburgh, examined data from 23 patients with temporal lobe epilepsy who had all or part of their temporal lobes removed after iEEG evaluations performed at Cleveland Clinic. The researchers examined the results of patients’ preoperative iEEG to determine the degree of functional connectivity that was associated with successful surgical outcomes.

“The concept of functional connectivity has been extensively studied by basic science researchers, but has not yet found its way into the realm of clinical epilepsy treatment,” said Antony, the paper’s first author. “Our discovery is another step towards the use of measures of functional connectivity in making clinical decisions in the treatment of epilepsy.”

As a standard preoperative test for lobectomy surgery, physicians analyze iEEG traces looking for simultaneous discharges of neurons that appear as spikes in the recordings, which indicate epileptic activity. This PLOS ONE discovery evaluates the data differently by examining normal brain activity in the absence of spikes and inferring connectivity.
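
The general flavor of this kind of connectivity analysis can be sketched with a toy computation on simulated multichannel traces. This is a generic correlation-based illustration under stated assumptions, not the authors' actual estimator: the channel layout, noise model, and "weak and similar connections" signature are all simplified for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated iEEG-like traces: 6 channels x 1000 samples.
# Channels 0-2 share a common driving signal with varied coupling
# (strong, heterogeneous connections); channels 3-5 are nearly
# independent, mimicking the "weak and similar connections"
# signature described for diseased tissue.
common = rng.standard_normal(1000)
healthy = np.stack([common * w + rng.standard_normal(1000) * 0.5
                    for w in (1.0, 0.7, 0.4)])
diseased = rng.standard_normal((3, 1000))
traces = np.vstack([healthy, diseased])

corr = np.corrcoef(traces)        # pairwise functional connectivity matrix
np.fill_diagonal(corr, np.nan)    # ignore each channel's self-correlation

for group, idx in (("healthy-like", [0, 1, 2]), ("diseased-like", [3, 4, 5])):
    strengths = np.abs(corr[np.ix_(idx, idx)])
    vals = strengths[~np.isnan(strengths)]
    print(f"{group}: mean |r| = {vals.mean():.2f}, spread = {vals.std():.2f}")
```

In this toy setup the healthy-like group shows strong, varied connection strengths while the diseased-like group's connections are uniformly weak, the qualitative pattern the study associates with an epileptic lobe.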

Nov 1, 2013 · 53 notes
#epilepsy #brain activity #lobectomy #intracranial electroencephalography #neuroscience #science
Gene Found To Foster Synapse Formation In The Brain

Researchers at Johns Hopkins say they have found that a gene already implicated in human speech disorders and epilepsy is also needed for vocalizations and synapse formation in mice. The finding, they say, adds to scientific understanding of how language develops, as well as the way synapses — the connections among brain cells that enable us to think — are formed. A description of their experiments appears in Science Express on Oct. 31.

A group led by Richard Huganir, Ph.D., director of the Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, set out to investigate genes involved in synapse formation. Gek-Ming Sia, Ph.D., a research associate in Huganir’s laboratory, first screened hundreds of human genes for their effects on lab-grown mouse brain cells. When one gene, SRPX2, was turned up higher than normal, it caused the brain cells to erupt with new synapses, Sia found.

When Huganir’s team injected fetal mice with an SRPX2-blocking compound, the mice showed fewer synapses than normal mice even as adults, the researchers found. In addition, when SRPX2-deficient mouse pups were separated from their mothers, they did not emit high-pitched distress calls as other pups do, indicating they lacked the rodent equivalent of early language ability.

Other researchers’ analyses of the human genome have found that mutations in SRPX2 are associated with language disorders and epilepsy. When Huganir’s team introduced human SRPX2 carrying these mutations into fetal mice, the resulting pups also showed deficits in their vocalizations.

Another research group at Institut de Neurobiologie de la Méditerranée in France had previously shown that SRPX2 interacts with FoxP2, a gene that has gained wide attention for its apparently crucial role in language ability.

Huganir’s team confirmed this, showing that FoxP2 controls how much protein the SRPX2 gene makes and may affect language in this way. “FoxP2 is famous for its role in language, but it’s actually involved in other functions as well,” Huganir comments. “SRPX2 appears to be more specialized to language ability.” Huganir suspects that the gene may also be involved in autism, since autistic patients often have language impairments, and the condition has been linked to defects in synapse formation.

This study is only the beginning of teasing out how SRPX2 acts on the brain, Sia says. “We’d like to find out what other proteins it acts on, and how exactly it regulates synapses and enables language development.”

Nov 1, 2013 · 66 notes
#synapses #language development #autism #epilepsy #genetics #neuroscience #science
Exposure to Cortisol-Like Medications Before Birth May Contribute to Emotional Problems and Brain Changes

Neonatologists seem to perform miracles in the fight to support the survival of babies born prematurely.

To promote their survival, cortisol-like drugs called glucocorticoids are frequently administered to women in preterm labor to accelerate their babies’ lung maturation before birth. Cortisol is a substance naturally released by the body when stressed. But the glucocorticoid levels administered to promote lung development are higher than those achieved under typical stress, perhaps mirrored only in the body’s reaction to extreme stressors.

The benefit of glucocorticoids is undisputed, and these drugs have certainly saved the lives of countless babies, but the exposure may also have negative consequences. Indeed, excessive glucocorticoid levels may affect brain development, perhaps contributing to emotional problems later in life.

In this issue of Biological Psychiatry, Dr. Elysia Davis at the University of Denver and her colleagues report new findings on the effects of synthetic glucocorticoid on human brain development. Their study focused on healthy children who were born full-term, avoiding the confounding effects of premature birth.

The investigators conducted brain imaging sessions with 54 children, ages 6 to 10, and carefully assessed them. The mothers of the participating children also completed reports on their child’s behavior. The researchers then divided the children into two groups: those who had been exposed to glucocorticoids prenatally and those who had not.

In this study, children with fetal glucocorticoid exposure showed significant cortical thinning, and a thinner cortex also predicted more emotional problems. One particularly affected region, the rostral anterior cingulate cortex, was 8 to 9 percent thinner in children exposed to glucocorticoids. Interestingly, other studies have shown that this region of the brain is affected in individuals diagnosed with mood and anxiety disorders.

"Fetal exposure to a frequently administered stress hormone is associated with consequences for child brain development that persist for at least 6 to 10 years. These neurological changes are associated with increased risk for stress and emotional problems," Davis explained of their findings. "Importantly, these findings were observed among healthy children born full term."

Although such a finding does not prove that glucocorticoids caused these changes, the researchers did determine that the findings cannot be explained by any obvious confounding differences between the groups: the two groups did not differ in weight or gestational age at birth, Apgar scores, maternal factors, or other basic demographics. Thus, the findings do suggest that glucocorticoid administration may somehow alter the trajectory of brain development in exposed children.

"This study provides evidence that prenatal exposure to stress hormones shapes the construction of the fetal nervous system with consequences for the developing brain that persist into the preadolescent period," she added.

"This study highlights potential links between early cortisol exposure, cortical thinning and mood symptoms in children. It may provide important insights into the development of the brain and the long-term impact of maternal stress," commented Dr. John Krystal, Editor of Biological Psychiatry.

Nov 1, 2013 · 98 notes
#stress #glucocorticoids #cortisol #brain development #psychology #neuroscience #science
Critical Gene in Retinal Development and Motion Sensing Identified

Our vision depends on exquisitely organized layers of cells within the eye’s retina, each with a distinct role in perception. Johns Hopkins researchers say they have taken an important step toward understanding how those cells are organized to produce what the brain “sees.” Specifically, they report identification of a gene that guides the separation of two types of motion-sensing cells, offering insight into how cellular layering develops in the retina, with possible implications for the brain’s cerebral cortex. A report on the discovery is published in the Nov. 1 issue of the journal Science.

“The separation of different types of cells into layers is critical to their ability to form the precise sets of connections with each other — the circuitry — that lets us process visual information,” says Alex Kolodkin, Ph.D., a professor in the Johns Hopkins University School of Medicine’s Solomon H. Snyder Department of Neuroscience and an investigator at the Howard Hughes Medical Institute. “There is still much to learn about how that separation happens during development, but we’ve identified for the first time proteins that enable two very similar types of cells to segregate into their own distinct neuronal layers.”

Kolodkin’s research group specializes in studying how circuitry forms among neurons (brain and nerve cells). Past experiments revealed that two types of proteins, called semaphorins and plexins, help guide this process. In the current study, Lu Sun, a graduate student in Kolodkin’s laboratory, focused on the genes that carry the blueprint for these proteins in two of the 10 layers of cells in the mammalian retina.

Those two layers are made up of so-called starburst amacrine cells (SACs). One type of SAC, known as “Off,” detects motion by sensing decreases in the amount of light hitting the retina, while the other type, “On,” detects increases in light. Sun examined the amounts of several semaphorin and plexin proteins being made by each type of cell, and found that only the “On” SACs were making a semaphorin called Sema6A. Sema6A can only work in the retina by interacting with its receptor, a plexin called PlexA2, but Sun found both types of SAC were churning out roughly equal amounts of PlexA2.

Reasoning that Sema6A might be the key difference that enabled the “On” and “Off” SACs to segregate from one another, Kolodkin’s team analyzed mice in which the genes for Sema6A, PlexA2, or both could be switched off, and looked at the effects of this manipulation on their retinas. “Knocking out” either gene during development led the “On” and “Off” layers to run together, the team found, and caused abnormalities in the “On” SACs’ tree-like extensions. However, the “Off” SACs, which hadn’t been using their Sema6A gene in the first place, still looked and functioned normally.

“When signaling between Sema6A and PlexA2 was lost, not only was layering compromised, but the ‘On’ SACs lost both their distinctive symmetrical appearance, and, importantly, their motion-detecting ability,” Sun says. “This is evidence that the beautiful symmetric shape that gives starburst amacrine cells their name is necessary for their function.”

Adds Kolodkin, “We hope that learning how layering occurs in these very specific cell types will help us begin sorting out how connections are made not just in the retina, but also in neurons throughout the nervous system. Layering also occurs in the cerebral cortex, for example, which is responsible for thought and consciousness, and we really want to know how this is organized during neural development.”

Nov 1, 2013 · 47 notes
#retinal development #retina #nerve cells #amacrine cells #cerebral cortex #neuroscience #science

October 2013

Incurable Brain Cancer Gene Is Silenced

Gene regulation technology increases survival rates in mice with glioblastoma

Glioblastoma multiforme (GBM), the brain cancer that killed Sen. Edward Kennedy and kills approximately 13,000 Americans a year, is aggressive and incurable. Now a Northwestern University research team is the first to demonstrate delivery of a drug that turns off a critical gene in this complex cancer, increasing survival rates significantly in animals with the deadly disease.

Image: Researchers combined gold nanoparticles (in yellow) with small interfering RNAs (in green) to knock down an oncogene that is overexpressed in glioblastoma.

The novel therapeutic, which is based on nanotechnology, is small and nimble enough to cross the blood-brain barrier and get to where it is needed — the brain tumor. Designed to target a specific cancer-causing gene in cells, the drug simply flips the switch of the troublesome oncogene to “off,” silencing the gene. This knocks out the proteins that keep cancer cells immortal.

In a study of mice, the nontoxic drug was delivered by intravenous injection. In animals with GBM, the survival rate increased nearly 20 percent, and tumor size was reduced three- to fourfold compared with the control group. The results are published today (Oct. 30) in Science Translational Medicine.

“This is a beautiful marriage of a new technology with the genes of a terrible disease,” said Chad A. Mirkin, a nanomedicine expert and a senior co-author of the study. “Using highly adaptable spherical nucleic acids, we specifically targeted a gene associated with GBM and turned it off in vivo. This proof-of-concept further establishes a broad platform for treating a wide range of diseases, from lung and colon cancers to rheumatoid arthritis and psoriasis.”

Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering.

Glioblastoma expert Alexander H. Stegh came to Northwestern University in 2009, attracted by the University’s reputation for interdisciplinary research, and within weeks was paired up with Mirkin to tackle the difficult problem of developing better treatments for glioblastoma. 

Help is critical for patients with GBM: median survival is only 14 to 16 months, and approximately 16,000 new cases are reported in the U.S. every year.

In their research partnership, Mirkin had the perfect tool to tackle the deadly cancer: spherical nucleic acids (SNAs), new globular forms of DNA and RNA, which he had invented at Northwestern in 1996, and which are nontoxic to humans. The nucleic acid sequence is designed to match the target gene.

And Stegh had the gene: In 2007, he and colleagues identified the gene Bcl2Like12 as one that is overexpressed in glioblastoma tumors and related to glioblastoma’s resistance to conventional therapies.

“My research group is working to uncover the secrets of cancer and, more importantly, how to stop it,” said Stegh, a senior co-author of the study. “Glioblastoma is a very challenging cancer, and most chemo-therapeutic drugs fail in the clinic. The beauty of the gene we silenced in this study is that it plays many different roles in therapy resistance. Taking the gene out of the picture should allow conventional therapies to be more effective.”

Stegh is an assistant professor in the Ken and Ruth Davee Department of Neurology at the Northwestern University Feinberg School of Medicine and an investigator in the Northwestern Brain Tumor Institute.

The power of gene regulation technology is that a disease with a genetic basis can be attacked and treated if scientists have the right tools. Thanks to the Human Genome Project and genomics research over the last two decades, there is an enormous number of genetic targets; having the right therapeutic agents and delivery materials has been the challenge.

“The RNA interference-based SNAs are a completely novel approach in thinking about cancer therapy,” Stegh said. “One of the problems is that we have large lists of genes that are somehow dysregulated in glioblastoma, but we have absolutely no way of targeting all of them using standard pharmacological approaches. That’s where we think nanomaterials can play a fundamental role in allowing us to implement the concept of personalized medicine in cancer therapy.”

Stegh and Mirkin’s drug for GBM is specially designed to target the Bcl2Like12 gene in cancer cells. Key is the nanostructure’s spherical shape and nucleic acid density. Normal (linear) nucleic acids cannot get into cells, but these spherical nucleic acids can. Small interfering RNA (siRNA) surrounds a gold nanoparticle like a shell; the nucleic acids are highly oriented, densely packed and form a tiny sphere. (The gold nanoparticle core is only 13 nanometers in diameter.) The RNA’s sequence is programmed to silence the disease-causing gene.

“The problems posed by glioblastoma and many other diseases are simply too big for one research group to handle,” said Mirkin, who also is the director of Northwestern’s International Institute for Nanotechnology. “This work highlights the power of scientists and engineers from different fields coming together to address a difficult medical issue.”

Mirkin first developed the nanostructure platform used in this study in 1996 at Northwestern, and the technology now is the basis of powerful commercialized and FDA-cleared medical diagnostic tools. This new development, however, is the first realization that the nanostructures injected into an animal naturally find their target in the brain and can deliver an effective payload of therapeutics.

The next step for the therapeutic will be to test it in clinical trials.

The nanostructures used in this study were developed in Mirkin’s lab on the Evanston campus and then used in cell and animal studies in Stegh’s lab on the Chicago campus.

Oct 31, 2013 · 205 notes
#glioblastoma #brain tumors #brain cancer #medicine #science