Posts tagged neuroscience

Boys are at greater risk for delayed language development than girls, according to a new study using data from the Norwegian Mother and Child Cohort Study. The researchers also found that reading and writing difficulties in the family increased the risk.

“We show for the first time that reading and writing difficulties in the family can be the main reason why a child has a speech delay that first begins between three to five years of age,” says Eivind Ystrøm, senior researcher at the Norwegian Institute of Public Health.
Ystrøm was supervisor of Imac Maria Zambrana, a former PhD student at the Norwegian Institute of Public Health who conducted the research in this study as part of her doctoral research.
The researchers used data from questionnaires completed by the mothers who are participating in the Norwegian Mother and Child Cohort Study (MoBa). The study included more than 10,000 children from week 17 of pregnancy up to five years of age.
“MoBa is a large study with a normal cross-section of the population. It gives us a unique opportunity to examine changes over time, the scope and any risk factors for delayed language development,” says Ystrøm.
Mostly boys
The researchers classified the language difficulties at three and five years of age in three groups: persistent delayed language development (present at both times), transient delayed language development (only present at three years) and delayed language development first identified at five years old.
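The three groups reduce to a simple classification over the two assessment points. A minimal sketch of that rule (the function and label names are ours; the group definitions are the study's):

```python
def classify_trajectory(delayed_at_3: bool, delayed_at_5: bool) -> str:
    """Assign a language-delay trajectory group from the two assessments."""
    if delayed_at_3 and delayed_at_5:
        return "persistent"   # present at both three and five years
    if delayed_at_3:
        return "transient"    # present at three years only
    if delayed_at_5:
        return "late onset"   # first identified at five years
    return "typical"          # no delay at either assessment
```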
Boys are in the majority in the groups with persistent and transient language difficulties. Ystrøm explains that boys are biologically at greater risk than girls for developmental disorders in utero. British scientists have measured the male sex hormone testosterone in amniotic fluid and found that its levels were related to the development of both autism and language disorders. Ystrøm points out that boys are generally a little later in language development than girls, but that most catch up during the first years. This may explain why more boys are at risk of persistent language impairment, and why boys more often have transient language difficulties that disappear before school age.
The researchers found that gender was irrelevant for the third group who have language difficulties that begin sometime between three and five years of age.
Hereditary factors
We have good knowledge about normal language development in children. Many genes are important for language development and research suggests that different genes are involved in different types of language difficulty.
“Reading and writing difficulties in the family are the predominant risk factors for late-onset language difficulties. We see no language problems when the child is between 18 months and three years old. They are latent,” says Ystrøm.
The researchers believe that both specific genes and factors in the child’s external environment can lead to delays in language development at three to five years of age.
What can we do?
Ystrøm believes that children with delayed language development must be identified as early as possible. Parents, health care workers and child care staff should be aware of children’s language development and encourage an enabling language environment, in some cases with specially adapted measures. In particular, they must be aware of children whose difficulties have persisted, or who had normal language development up to three years of age and then unexpectedly began to have difficulties.
“Professionals and caregivers must be vigilant. It is difficult to detect language difficulties when language becomes more complex in older children. They must be trained so that they are confident in how to spot language difficulties and how to encourage a child’s language. We need more research into the needs of children with different trajectories”, says Ystrøm.
Parents who are concerned about their child’s language development should consult their doctor. They should also raise the issue at the regular check-ups at the health clinic when the child is between two and four years old.
“The checks must take place at the appropriate time. It is important that they are not delayed or not implemented at all,” says Ystrøm.
A few years ago, a survey by the Health and Welfare Department in Oslo showed that few of the health centres in Oslo met the required 14 consultations for each child from birth to school stipulated by the Norwegian Directorate of Health.
Further research
In addition to researchers at the Norwegian Institute of Public Health, researchers at the University of Oslo and the University of Melbourne in Australia participated in this study. The work is funded by the Extra Foundation for Health and Rehabilitation.
“We hope to continue this research and specifically look at the relationship between gender and language. We need more research into the needs of children with various types of language delay”, says Eivind Ystrøm.
Reference
Zambrana, I.M., Pons, F., Eadie, P. and Ystrom, E. (2013). Trajectories of language delay from age 3 to 5: persistence, recovery and late onset. International Journal of Language & Communication Disorders.
(Source: fhi.no)

Thinking it through: Scientists seek to unlock mysteries of the brain
Understanding the human brain is one of the greatest challenges facing 21st century science. If we can rise to this challenge, we will gain profound insights into what makes us human, develop new treatments for brain diseases, and build revolutionary new computing technologies whose effects will reach far beyond neuroscience.
Scientists at the European Human Brain Project (set to announce more than a dozen new research partnerships worth €8.3 million in funding later this month), the Allen Institute for Brain Science, and the US BRAIN Initiative are developing new paradigms for understanding how the human brain works in health and disease. Today, these international, collaborative projects are presented, explored, and compared during “Inventing New Ways to Understand the Human Brain” at the 2014 AAAS Annual Meeting in Chicago.
Brain Simulation, Big Data, and a New Computing Paradigm
Henry Markram from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, where the Human Brain Project is based, describes how the project will leverage available experimental data and basic principles of brain organization to reconstruct the detailed structure of the brain in computer models. The models will allow the HBP to run supercomputer-based simulations of the inner workings of the brain.
"Brain simulation allows measurements and manipulations impossible in the lab, opening the road to a new kind of in silico experimentation," Markram says.
Neuroscience is generating a revolutionary amount of brain data, and new initiatives plan to acquire even more. But searching, accessing, and analyzing these data remains a key challenge.
Sean Hill, also of EPFL and a speaker at AAAS, leads The Neuroinformatics Platform of the Human Brain Project (HBP). In this scientific panel, he explains how the platform will provide tools to manage, navigate, and annotate spatially referenced brain atlases, which will form the basis for the HBP’s modeling effort—turning Big Data into deep knowledge.
The Neuroinformatics Platform will bring together many different kinds of data. University of Edinburgh’s Seth Grant, a key member of the HBP, describes how he is deriving new methods to decode the molecular principles underlying the brain’s organization, such as how individual proteins assemble into larger complexes. As Grant explains in Chicago, this has important practical applications as many mutations in schizophrenia and autism converge on these so-called supercomplexes in the brain.
As we understand more and more about the way the brain computes we can apply this knowledge to technology. Karlheinz Meier, of Heidelberg University in Germany and a speaker at AAAS, outlines how he is working to create entirely new computing systems as part of the HBP. These Neuromorphic Computing Systems will merge realistic brain models with new hardware for a completely new paradigm of computing—one that more closely resembles how the brain itself processes information.
"The brain has the ability to efficiently perform computations that are impossible even for the most powerful computers while consuming only 30 watts of power," Meier says.
Brain: Get Ready For Your Close-up
At AAAS, Christof Koch lays out another ambitious, 10-year plan from the Allen Institute for Brain Science: to understand the structure and function of the brain by mapping cell types in mice and humans, building computer simulations, and figuring out how the cells connect and how they encode, relay, and process information. The project, Koch says, promises massive, multimodal, open-access datasets and methodology that will be reproducible and scalable.
At Harvard University, George Church is participating in the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which aims to map every neuron in the brain with rapidly advancing technologies. At AAAS, he describes progress on new tools for measurements of brain cell development, connectivity, and functional state dynamics in rodent and human clinical samples.
What do all of these projects have in common? They seek answers to some of science’s most elusive questions: What makes us human? How does the brain function? What causes neurological and mental illness? And, most importantly, how can we treat or cure these afflictions?
Promise of a bonus counter-productive in brains with high dopamine levels
Some people perform better and others worse when promised a high bonus. Brain researcher Esther Aarts of the Donders Institute in Nijmegen has demonstrated for the first time that the amount of dopamine in the brain plays a role in this regard. The journal Psychological Science will publish the results on February 13.
It has been known for some time that not everyone performs better after being promised a bonus. Scientists have published contradictory results regarding the cause. The study by Esther Aarts now shows that the differences can be explained by differences in the level of dopamine in the brain. People with a high level of dopamine in a specific brain region – the striatum – perform worse after being promised a bonus, and people with a low level of dopamine in the same area perform better. Aarts used a PET (Positron Emission Tomography) scanner to examine the amount of dopamine in the brains of subjects. She conducted this research in Berkeley, California (USA), where she worked as a post-doctoral researcher for two years.
Overdose of dopamine
The promise of a bonus provides an additional spurt of the ‘motivation substance’ dopamine in the brain. ‘For people who usually have high levels of dopamine, the promise of a bonus causes a type of dopamine overdose in the striatum’, explains Aarts. ‘Our test subjects were asked to perform a task that required considerable concentration. An overdose of dopamine makes this difficult. People who usually have less dopamine are less likely to have an overdose of dopamine, and they therefore perform better after being promised a bonus.’
Concentration desired
Test subjects performed a computer task that elicited conflicting reactions and therefore required considerable concentration: an arrow appears on the screen, pointing either left or right, with the word ‘left’ or ‘right’ written in the middle of the arrow. Subjects were asked to ignore the direction indicated by the arrow and name only the direction described by the word. For half of the trials, a bonus of 15 cents was promised for a correct answer; in the other half, the subjects received only 1 cent for each correct answer. People who usually have a high level of dopamine performed better in the low-pay condition than in the high-pay condition. The reverse was observed for people with low levels of dopamine: they performed better with high rewards than with low rewards.
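The task’s payoff logic is simple enough to sketch directly. A toy reconstruction (the function names are ours; the rule that the word, not the arrow, determines the correct answer, and the 15-cent and 1-cent payoffs, are from the article):

```python
def score_trial(word: str, arrow: str, response: str, high_pay: bool) -> int:
    """Score one trial of the conflict task: the written word, not the arrow's
    direction, determines the correct answer. Returns the payoff in cents."""
    if response != word:        # the arrow must be ignored entirely
        return 0
    return 15 if high_pay else 1

def is_incongruent(word: str, arrow: str) -> bool:
    """Trials where word and arrow disagree are the ones that demand focus."""
    return word != arrow
```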
Flexibility or focus
‘This knowledge could make it possible to apply bonuses more effectively, but it would require observing the standard dopamine levels of people, as well as the nature of the task that they must perform’, reports Aarts. ‘It makes quite a difference whether the task is flexible and creative or whether it requires a great deal of focus. Our research shows how people perform on tasks that require considerable focus’. Given the high cost of PET scans, Aarts is now looking for easier ways of measuring dopamine levels. ‘I hope to be able to relate dopamine levels to scores on questionnaires. In the future, this might eliminate the need for PET scans for determining the quantity of dopamine in the brain’.

Tired all the time: Could undiagnosed sleep problems be making MS patients’ fatigue worse?
People with multiple sclerosis (MS) might assume that the fatigue they often feel just comes with the territory of their chronic neurological condition.
But a new University of Michigan study suggests that a large proportion of MS patients may have an undiagnosed sleep disorder that is also known to cause fatigue. And that disorder – obstructive sleep apnea – is a treatable condition.
In the latest issue of the Journal of Clinical Sleep Medicine, researchers from the U-M Health System’s Sleep Disorders Center report the results of a study involving 195 patients of the U-M Multiple Sclerosis Center.
In all, 56 percent of the MS patients were found to be at increased risk for obstructive sleep apnea, based on a method of screening for the condition known as the STOP-Bang questionnaire. But most had never received a formal diagnosis of sleep apnea, and less than half of those who had been told they had sleep apnea were using the standard treatment for it.
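STOP-Bang itself is a published eight-item yes/no screen (Snoring, Tiredness, Observed apnea, blood Pressure, BMI over 35, Age over 50, Neck circumference over 40 cm, male Gender), with three or more positive answers conventionally read as elevated risk. A minimal sketch of that scoring rule (the dictionary keys are ours):

```python
def stop_bang_score(answers: dict) -> int:
    """Sum the eight yes/no items of the STOP-Bang screen."""
    items = ["snoring", "tiredness", "observed_apnea", "pressure",
             "bmi_over_35", "age_over_50", "neck_over_40cm", "male"]
    return sum(1 for item in items if answers.get(item, False))

def elevated_osa_risk(answers: dict) -> bool:
    """Three or more positive items is the usual threshold for increased risk."""
    return stop_bang_score(answers) >= 3
```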
The authors also found that patients who were more fatigued were more likely to also be at elevated risk for sleep apnea – even after taking into account other factors that might have contributed to feelings of fatigue, such as age, gender, body mass index (BMI), sleep duration, depression, and other nighttime symptoms.
The research is based on patients’ answers from a sleep questionnaire designed by the authors, and four validated instruments designed to assess daytime sleepiness, fatigue severity, insomnia severity and obstructive sleep apnea risk. Medical records also were accessed with patients’ permission, to examine clinical characteristics that may predict fatigue or obstructive sleep apnea risk.
“We were particularly surprised by the difference between the proportion of patients who carried an established diagnosis of obstructive sleep apnea – 21 percent – and the proportion at risk for obstructive sleep apnea based on their STOP-Bang scores, which was 56 percent,” says the study’s lead author, Tiffany Braley, M.D., M.S. “These findings suggest that OSA may be a highly prevalent and yet under-recognized contributor to fatigue in persons with MS.”
Braley, an assistant professor of Neurology and multiple sclerosis specialist at the U-M Medical School, conducted the study in collaboration with professors Ronald Chervin, M.D., M.S., and Benjamin Segal, M.D. Chervin is the Director of U-M Sleep Disorders Center, and Segal directs the U-M MS Center.
Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system that causes inflammation and damage of the brain and spinal cord. In addition to neurological disability, MS patients suffer from a number of chronic symptoms, the most common of which is fatigue. Fatigue is also one of the most disabling symptoms experienced by MS patients.
Braley cautions that the design of this new study does not allow for demonstration of cause and effect – that is, the researchers can’t prove based on survey results that the patients felt more fatigued because they had a high score on a sleep apnea risk survey. But, she says, “the findings should prompt doctors who treat MS patients to consider sleep apnea as a possible contributor to their patients’ fatigue, and recommend appropriate testing and treatment.”
The standard treatment for obstructive sleep apnea, called continuous positive airway pressure, or CPAP, involves a machine and mask device that applies a stream of air to the upper airway to keep it open during sleep.
The patients in the study had an average age of 47 and had lived with MS for an average of 10 years. Two-thirds were female, consistent with the prevalence of MS in the U.S., and two-thirds were taking a medication to treat their MS. Three-quarters had the relapsing-remitting form of the disease.

Brain Damage in Children—The Result of Too Many Chemicals?
A new report is sounding the alarm of a “silent epidemic” of childhood neurological disorders linked to neurotoxic compounds.
While genetics is known to play a role in neurological problems, only 30 to 40 percent of neurodevelopmental disorders can be definitively tied to family history. “There are a lot of chemicals out there that have been shown to have the capability to injure the developing brain,” says study coauthor Philip Landrigan, MD, professor and chair of the department of community and preventive medicine at Mount Sinai School of Medicine in New York City and one of the world’s foremost authorities on children’s environmental health. “And we’re very concerned that a number of chemicals in everyday products have never been properly tested to determine whether they’re toxic to the human brain.”
In the new report, Dr. Landrigan and his coauthor identified six chemicals that have been discovered, within the past seven years, to trigger brain damage in children. In 2006, he and other researchers identified lead, methylmercury, arsenic, polychlorinated biphenyls (PCBs), and toluene as known contributors to rising rates of neurodevelopmental disorders like autism, attention-deficit hyperactivity disorder, and learning disabilities.
Can a virtual brain replace lab rats?
Testing the effects of drugs on a simulated brain could lead to breakthrough treatments for neurological disorders such as Parkinson’s, Huntington’s and Alzheimer’s disease.
Researchers from the University of Waterloo in Canada hope Spaun, the world’s largest functioning model of the brain, will be used to test new drugs that lead to medical breakthroughs for brain disorders.
Terrence Stewart, a post-doctoral researcher with the Centre for Theoretical Neuroscience at Waterloo and project manager for Spaun, will tell an audience at the American Association for the Advancement of Science (AAAS) annual meeting in Chicago about the advantages of using whole-brain simulation as a tool to aid new discoveries in medicine.
“Our hope is that you could try out different possible treatments quickly to see how the brain reacts and how each one changes behaviour before testing them in people,” said Stewart. “Our brain model offers a new way to test treatments. For Alzheimer’s disease or a stroke that causes memory loss, we could see how a new drug affects the firing pattern of individual brain cells and measure how it changes brain performance on memory tests before trying it on people.”
Stewart’s team has already made progress simulating Parkinson’s and Huntington’s diseases. Their next step is to simulate Alzheimer’s disease after giving Spaun a hippocampus, the brain region involved in forming new memories.
Spaun is more like the human brain than other computer brain models because it makes mistakes and loses abilities in similar ways to people. To simulate the cognitive decline associated with aging, for example, Stewart and his team killed off neurons in the brain model and observed it gradually forgetting more numbers on a memory test.
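The ablation experiment can be illustrated with a toy model (ours, not Spaun’s actual architecture): recall capacity degrades gradually with the fraction of surviving neurons rather than failing outright, which is the graceful-degradation behavior the researchers observed.

```python
def recall_capacity(total_neurons: int, surviving_neurons: int,
                    max_digits: int = 7) -> int:
    """Toy graceful-degradation model: the number of digits the network can
    still hold on a memory test shrinks with the surviving fraction."""
    fraction = surviving_neurons / total_neurons
    return int(max_digits * fraction)
```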
To reproduce movement problems associated with Huntington’s disease and damage to the cerebellum, Stewart damaged parts of the simulated brain affected by those conditions.
“We showed that errors made in reaching behaviour seen in people with those disorders correspond to the errors made by our brain model when neurons in the affected brain regions are damaged,” he said.
Spaun can see, remember, think and write using a mechanical arm. Most importantly, this virtual brain – which mimics the neuron firing patterns seen in the human brain – allows the researchers to study and understand how damage to individual cells affects the behaviour of the whole brain in different neurological diseases.
Stewart presented new research on successfully simulating the effects of aging and Huntington’s disease in Spaun at a symposium panel, “Virtual Humans: Helping Facilitate Breakthroughs in Medicine” on Friday, February 14, 2014.
Researchers find brain’s ‘sweet spot’ for love in neurological patient
A region deep inside the brain controls how quickly people make decisions about love, according to new research at the University of Chicago.
The finding, made in an examination of a 48-year-old man who suffered a stroke, provides the first causal clinical evidence that an area of the brain called the anterior insula “plays an instrumental role in love,” said UChicago neuroscientist Stephanie Cacioppo, lead author of the study.
In an earlier paper that analyzed research on the topic, Cacioppo and colleagues defined love as “an intentional state for intense [and long-term] longing for union with another” while lust, or sexual desire, is characterized by an intentional state for a short-term, pleasurable goal.
In this study, the patient made decisions normally about lust but showed slower reaction times when making decisions about love, in contrast to neurologically typical participants matched on age, gender and ethnicity. The findings are presented in a paper, “Selective Decision-Making Deficit in Love Following Damage to the Anterior Insula,” published in the journal Current Trends in Neurology.
“This distinction has been interpreted to mean that desire is a relatively concrete representation of sensory experiences, while love is a more abstract representation of those experiences,” said Cacioppo, a research associate and assistant professor in psychology. The new data suggest that the posterior insula, which affects sensation and motor control, is implicated in feelings of lust or desire, while the anterior insula has a role in the more abstract representations involved in love.
In the earlier paper, “The Common Neural Bases Between Sexual Desire and Love: A Multilevel Kernel Density fMRI Analysis,” Cacioppo and colleagues examined a number of studies of brain scans that looked at differences between love and lust.
The studies showed consistently that the anterior insula was associated with love, and the posterior insula was associated with lust. However, as in all fMRI studies, the findings were correlational.
“We reasoned that if the anterior insula was the origin of the love response, we would find evidence for that in brain scans of someone whose anterior insula was damaged,” she said.
In the study, researchers examined a 48-year-old heterosexual male in Argentina who had suffered a stroke that damaged the function of his anterior insula. He was matched with a control group of seven Argentinian heterosexual men of the same age with healthy anterior insulae.
The patient and the control group were shown 40 photographs at random of attractive, young women dressed in appealing, short and long dresses and asked whether these women were objects of sexual desire or love. The patient with the damaged anterior insula showed a much slower response when asked if the women in the photos could be objects of love.
“The current work makes it possible to disentangle love from other biological drives,” the authors wrote. Such studies also could help researchers examine feelings of love by studying neurological activity rather than subjective questionnaires.
Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.
As a consequence, engineers designing control programs for “multiagent systems” — whether teams of robots or networks of devices with different functions — have generally restricted themselves to special cases, where reliable information about the environment can be assumed or a relatively simple collaborative task can be clearly specified in advance.
This May, at the International Conference on Autonomous Agents and Multiagent Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more complex ways. The system factors in uncertainty — the odds, for instance, that a communication link will drop, or that a particular algorithm will inadvertently steer a robot into a dead end — and automatically plans around it.
For small collaborative tasks, the system can guarantee that its combination of programs is optimal — that it will yield the best possible results, given the uncertainty of the environment and the limitations of the programs themselves.
Working together with Jon How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, and his student Chris Maynor, the researchers are currently testing their system in a simulation of a warehousing application, where teams of robots would be required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads. The simulations involve small groups of iRobot Creates, programmable robots that have the same chassis as the Roomba vacuum cleaner.
Reasonable doubt
“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” says Christopher Amato, a postdoc in CSAIL and first author on the new paper. “If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles.”
An agent may not even have perfect information about its own location, Amato says — which aisle of the warehouse it’s actually in, for instance. Moreover, “When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he says. “Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”
The new MIT system, which Amato developed with co-authors Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and George Konidaris, a fellow postdoc, takes three inputs. One is a set of low-level control algorithms — which the MIT researchers refer to as “macro-actions” — which may govern agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment. And the third is a scheme for valuing different outcomes: Accomplishing a task accrues a high positive valuation, but consuming energy accrues a negative valuation.
School of hard knocks
Amato envisions that the statistics could be gathered automatically, by simply letting a multiagent system run for a while — whether in the real world or in simulations. In the warehousing application, for instance, the robots would be left to execute various macro-actions, and the system would collect data on results. Robots trying to move from point A to point B within the warehouse might end up down a blind alley some percentage of the time, and their communication bandwidth might drop some other percentage of the time; those percentages might vary for robots moving from point B to point C.
The MIT system takes these inputs and then decides how best to combine macro-actions to maximize the system’s value function. It might use all the macro-actions; it might use only a tiny subset. And it might use them in ways that a human designer wouldn’t have thought of.
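That valuation step can be illustrated with a toy expected-value calculation (ours; the actual system solves a Dec-POMDP, not this greedy rule): each macro-action’s observed success rate, from the gathered statistics, is weighed against its energy cost, and only actions worth running are kept.

```python
def expected_value(success_rate: float, task_reward: float,
                   energy_cost: float) -> float:
    """Expected payoff of one macro-action: reward weighted by its observed
    success rate, minus the energy it consumes either way."""
    return success_rate * task_reward - energy_cost

def select_macro_actions(stats: dict, task_reward: float) -> list:
    """Keep only macro-actions whose expected value is positive.
    `stats` maps action name -> (success_rate, energy_cost)."""
    return [name for name, (p, cost) in stats.items()
            if expected_value(p, task_reward, cost) > 0]
```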
Suppose, for instance, that each robot has a small bank of colored lights that it can use to communicate with its counterparts if their wireless links are down. “What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato says. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”
The MIT researchers’ work frames the problem of multiagent control as something called a partially observable Markov decision process, or POMDP. “POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” says Nora Ayanian, an assistant professor of computer science at the University of Southern California who specializes in multirobot systems. “So they’re not really very popular in the multirobot world.”
“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” she explains. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution.”
“This will definitely get these POMDPs on the radar of multirobot-systems people,” Ayanian adds. “It’s something that really makes it way more capable to be applied to complex problems.”
Understanding the basic biology of bipolar disorder
Scientists know there is a strong genetic component to bipolar disorder, but they have had an extremely difficult time identifying the genes that cause it. So, in an effort to better understand the illness’s genetic causes, researchers at UCLA tried a new approach.
Instead of only using a standard clinical interview to determine whether individuals met the criteria for a clinical diagnosis of bipolar disorder, the researchers combined the results from brain imaging, cognitive testing, and an array of temperament and behavior measures. Using the new method, UCLA investigators — working with collaborators from UC San Francisco, Colombia’s University of Antioquia and the University of Costa Rica — identified about 50 brain and behavioral measures that are both under strong genetic control and associated with bipolar disorder. Their discoveries could be a major step toward identifying the specific genes that contribute to the illness.
The results are published in the Feb. 12 edition of the journal JAMA Psychiatry.
A severe mental illness that affects about 1 to 2 percent of the population, bipolar disorder causes unusual shifts in mood and energy, and it interferes with the ability to carry out everyday tasks. Those with the disorder can experience tremendous highs and extreme lows — to the point of not wanting to get out of bed when they’re feeling down. The genetic causes of bipolar disorder are highly complex and likely involve many different genes, said Carrie Bearden, a senior author of the study and an associate professor of psychiatry and psychology at the UCLA Semel Institute for Neuroscience and Human Behavior.
"The field of psychiatric genetics has long struggled to find an effective approach to begin dissecting the genetic basis of bipolar disorder," Bearden said. "This is an innovative approach to identifying genetically influenced brain and behavioral measures that are more closely tied to the underlying biology of bipolar disorder than the clinical symptoms alone are."
The researchers assessed 738 adults, 181 of whom have severe bipolar disorder. They used high-resolution 3-D images of the brain, questionnaires evaluating temperament and personality traits of individuals diagnosed with bipolar disorder and their non-bipolar relatives, and an extensive battery of cognitive tests assessing long-term memory, attention, inhibitory control and other neurocognitive abilities.
Approximately 50 of these measures showed strong evidence of being influenced by genetics. Particularly interesting was the discovery that the thickness of the gray matter in the brain’s temporal and prefrontal regions (structures critical for language and for higher-order cognitive functions such as self-control and problem-solving) was the most promising candidate trait for genetic mapping, based on both its strong genetic basis and its association with the disease.
"These findings are really just the first step in getting us a little closer to the roots of bipolar disorder," Bearden said. "What was really exciting about this project was that we were able to collect the most extensive set of traits associated with bipolar disorder ever assessed within any study sample. These data will be a really valuable resource for the field."
The individuals assessed in this study are members of large families living in Costa Rica’s central valley and Antioquia, Colombia. The families were founded by European and native Amerindian populations about 400 years ago and have a very high incidence of bipolar disorder. The groups were chosen because they have remained fairly isolated since their founding and their genetics are therefore simpler for scientists to study than those of general populations.
The fact that the findings aligned so closely with those of previous, smaller studies in other populations was surprising even to the scientists, given the subjects’ unique genetic background and living environments.
"This suggests that even if the specific genetic variants we identify may be unique to this population, the biological pathways they disrupt are likely to also influence disease risk in other populations," Bearden said.
The researchers’ next step is to use the genomic data they collected from the families — including full genome sequences and gene expression data — to begin identifying the specific genes that contribute to risk for bipolar disorder. The researchers also plan to extend their investigation into the children and teens in these families. They hypothesize that many of the bipolar-related brain and behavioral differences found in adults with bipolar disorder had their origins in adolescent neurodevelopment.
Researchers at the University of Bristol and University College London found that lactate – essentially lactic acid – causes cells in the brain to release more noradrenaline (norepinephrine in US English), a hormone and neurotransmitter that is fundamental for brain function. Without it, people can hardly wake up or focus on anything.

Production of lactate can be triggered by muscle use, which reinforces the connection between exercise and positive mental wellbeing.
Lactate was first discovered in sour milk by the Swedish chemist Carl Wilhelm Scheele in 1780. It is produced naturally by the body, for example when muscles are at work. In the brain, it has long been regarded as an energy source that can be delivered to neurones as fuel to keep them working when brain activity increases.
This research, published today [11 February] in Nature Communications, identifies a second function for lactate, as a signal between brain cells. It implies that there is an as-yet-unknown receptor for lactate in the brain, which must be present on noradrenaline cells to make them sensitive to lactate.
Professor Sergey Kasparov, from Bristol University’s School of Physiology and Pharmacology, said: “Our findings suggest that lactate has more than one incarnation - in addition to its role as an energy source, it is also a signal to neurones to release more noradrenaline.”
Dr Anja Teschemacher, also from the University of Bristol, added: “The next big task is to identify the receptor which mediates this effect because this will help to design drugs to block or stimulate this response. If we can regulate the release of noradrenaline – which is absolutely fundamental for brain function - then this could have important implications for the treatment of major health problems such as stress, blood pressure, pain and depression.”
Astrocytes, small non-neuronal star-shaped cells in the brain and spinal cord, are the principal source of brain lactate. The discovery that astrocytes communicate directly with neurones opens up a whole new area of pharmacology that has been little explored.
(Source: bristol.ac.uk)