Neuroscience

Articles and news from the latest research reports.

122 notes

Nanoscale neuronal activity measured for the first time

A researcher at Queen Mary University of London has developed a new technique that allows scientists to measure electrical activity at the communication junctions of the nervous system.

The junctions in the central nervous system that enable information to flow between neurons, known as synapses, are around 100 times smaller than the width of a human hair (one micrometre or less) and as such are difficult to target, let alone measure.


By applying high-resolution scanning probe microscopy, which allows three-dimensional visualisation of the structures, the team was able to measure and record the flow of current in small synaptic terminals for the first time.

“We replaced the conventional low-resolution optical system with a high-resolution microscope based on a nanopipette,” said Dr Pavel Novak, a bioengineering specialist from Queen Mary’s School of Engineering and Materials Science.

“The nanopipette hovers above the surface of the sample and scans the structure to reveal its three-dimensional topography. The same nanopipette then attaches to the surface at selected locations on the structure to record electrical activity. By repeating the same procedure for different locations of the neuronal network we can obtain a three-dimensional map of its electrical properties and activity.”
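The two-pass procedure Novak describes can be sketched in pseudocode. This is purely illustrative: the function arguments (`hover_scan`, `attach_and_record`) are hypothetical stand-ins for instrument calls, not a real microscope API.

```python
def map_electrical_activity(grid, hover_scan, attach_and_record, selected):
    """Pass 1: hover above the sample and build a topography map of the
    structure; pass 2: attach the nanopipette at selected locations and
    record electrical activity; combine both into one 3-D map."""
    # Pass 1: non-contact scan of every grid point.
    topography = {pt: hover_scan(pt) for pt in grid}

    # Pass 2: revisit chosen points on the structure to record current.
    recordings = {pt: attach_and_record(pt) for pt in selected}

    # Pair surface height with recorded activity at each selected point.
    return {pt: (topography[pt], recordings[pt]) for pt in selected}
```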

The research, published (Wednesday 18 September) in Neuron, opens a new window onto neuronal activity at the nanometre scale, and may contribute to the wider effort to understand brain function represented by the Brain Activity Map Project (BRAIN Initiative), which aims to map the function of each individual neuron in the human brain.

(Source: qmul.ac.uk)

Filed under neural activity BRAIN initiative nervous system CNS synapses ion channels neuroscience science

557 notes

How old memories fade away

Discovery of a gene essential for memory extinction could lead to new PTSD treatments.

If you got beat up by a bully on your walk home from school every day, you would probably become very afraid of the spot where you usually met him. However, if the bully moved out of town, you would gradually cease to fear that area.

Neuroscientists call this phenomenon “memory extinction”: Conditioned responses fade away as older memories are replaced with new experiences.

A new study from MIT reveals a gene that is critical to the process of memory extinction. Enhancing the activity of this gene, known as Tet1, might benefit people with posttraumatic stress disorder (PTSD) by making it easier to replace fearful memories with more positive associations, says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory.

The Tet1 gene appears to control a small group of other genes necessary for memory extinction. “If there is a way to significantly boost the expression of these genes, then extinction learning is going to be much more active,” says Tsai, the Picower Professor of Neuroscience at MIT and senior author of a paper appearing in the Sept. 18 issue of the journal Neuron.

The paper’s lead authors are Andrii Rudenko, a postdoc at the Picower Institute, and Meelad Dawlaty, a postdoc at the Whitehead Institute.

New and old memories

Tsai’s team worked with researchers in MIT biology professor Rudolf Jaenisch’s lab at the Whitehead to study mice with the Tet1 gene knocked out. Tet1 and other Tet proteins help regulate the modifications of DNA that determine whether a particular gene will be expressed or not. Tet proteins are very abundant in the brain, which made scientists suspect they might be involved in learning and memory.

To their surprise, the researchers found that mice without Tet1 were perfectly able to form memories and learn new tasks. However, when the team began to study memory extinction, significant differences emerged.

To measure the mice’s ability to extinguish memories, the researchers conditioned the mice to fear a particular cage where they received a mild shock. Once the memory was formed, the researchers then put the mice in the cage but did not deliver the shock. After a while, mice with normal Tet1 levels lost their fear of the cage as new memories replaced the old ones.

“What happens during memory extinction is not erasure of the original memory,” Tsai says. “The old trace of memory is telling the mice that this place is dangerous. But the new memory informs the mice that this place is actually safe. There are two choices of memory that are competing with each other.”

In normal mice, the new memory wins out. However, mice lacking Tet1 remain fearful. “They don’t relearn properly,” Rudenko says. “They’re kind of getting stuck and cannot extinguish the old memory.”

In another set of experiments involving spatial memory, the researchers found that mice lacking the Tet1 gene were able to learn to navigate a water maze, but were unable to extinguish the memory.

Control of memory genes

The researchers found that Tet1 exerts its effects on memory by altering the levels of DNA methylation, a modification that controls access to genes. High methylation levels block the promoter regions of genes and prevent them from being turned on, while lower levels allow them to be expressed.

Many proteins that methylate DNA have been identified, but Tet1 and other Tet proteins have the reverse effect, removing DNA methylation. The MIT team found that mice lacking Tet1 had much lower levels of hydroxymethylation — an intermediate step in the removal of methylation — in the hippocampus and the cortex, which are both key to learning and memory.
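The pathway described above can be pictured as a tiny state machine: Tet enzymes oxidize 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), the intermediate mentioned in the article, on the way back to unmethylated cytosine (C). This is a toy model for illustration only, not a complete account of the chemistry.

```python
# One oxidation step along the demethylation pathway.
STEP = {"5mC": "5hmC", "5hmC": "C"}

def demethylate_once(base, tet_present=True):
    """Advance one step along the demethylation pathway. Without a Tet
    enzyme the first oxidation cannot happen, so 5mC stays methylated,
    mirroring the high methylation seen in the knockout mice."""
    if base == "5mC" and not tet_present:
        return "5mC"
    return STEP.get(base, base)
```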

These changes in demethylation were most dramatic in a group of about 200 genes, including a small subset of so-called “immediate early genes,” which are critical for memory formation. In mice without Tet1, the immediate early genes were very highly methylated, making it difficult for those genes to be turned on.

In the promoter region of an immediate early gene known as Npas4 — which Yingxi Li, the Frederick A. and Carole J. Middleton Career Development Assistant Professor of Neuroscience at MIT, recently showed regulates other immediate early genes — the researchers found methylation levels close to 60 percent, compared to 8 percent in normal mice.

“It’s a huge increase in methylation, and we think that is most likely to explain why Npas4 is so drastically downregulated in the Tet1 knockout mice,” Tsai says.

“By demonstrating some of the ways that regulatory genes are methylated in response to Tet1 knockout and behavioral experience, the authors have taken an important step in identifying potential pharmacological treatment targets for disorders such as PTSD and addiction,” says Matthew Lattal, an associate professor of behavioral neuroscience at Oregon Health and Science University, who was not part of the research team.

Keeping genes poised

The researchers also discovered why the Tet1-deficient mice are still able to learn new things. During fear conditioning, methylation of the Npas4 gene goes down to around 20 percent, which appears to be low enough for the expression of Npas4 to turn on and help create new memories. The researchers suspect the fear stimulus is so strong that it activates other demethylation proteins — possibly Tet2 or Tet3 — that can compensate for the lack of Tet1.

During the memory-extinction training, however, the mice do not experience such a strong stimulus, so methylation levels remain high (around 40 percent) and Npas4 does not turn on.

The findings suggest that a threshold level of methylation is necessary for gene expression to take place, and that the job of Tet1 is to maintain low methylation, ensuring that the genes necessary for memory formation are poised and ready to turn on at the moment they are needed.
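The threshold behaviour suggested above can be captured in a one-line model using the methylation percentages reported in the article. The 30 percent cutoff is an illustrative assumption chosen to separate the reported values; the study does not state a precise threshold.

```python
def npas4_expressed(methylation_pct, threshold_pct=30):
    """Toy threshold model: the Npas4 promoter must be demethylated
    below some cutoff before the gene can turn on. The 30% cutoff is
    an assumed value, not a figure from the paper."""
    return methylation_pct < threshold_pct

# Reported promoter methylation levels and the behaviour they matched:
# 60% (knockout baseline) -> off; 8% (normal mice) -> on;
# 20% (fear conditioning) -> on; 40% (extinction training) -> off.
```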

The researchers are now looking for ways to increase Tet1 levels artificially and studying whether such a boost could enhance memory extinction. They are also studying the effects of eliminating two or all three of the Tet enzymes.

“This will not only help us further delineate epigenetic regulation of memory formation and extinction, but will also unravel other potential functions of Tets and methylation in the brain beyond memory extinction,” Dawlaty says.

Filed under PTSD memory memory extinction dna methylation hippocampus tet proteins neuroscience science

57 notes

Mental Fog with Tamoxifen is Real; Scientists Find Possible Antidote

A team from the University of Rochester Medical Center has shown scientifically what many women report anecdotally: that the breast cancer drug tamoxifen is toxic to cells of the brain and central nervous system, producing mental fogginess similar to “chemo brain.”

However, in the Journal of Neuroscience, researchers also report they’ve discovered an existing drug compound that appears to counteract or rescue brain cells from the adverse effects of the breast cancer drug.

Corresponding author Mark Noble, Ph.D., professor of Biomedical Genetics and director of the UR Stem Cell and Regenerative Medicine Institute, said it’s exciting to potentially be able to prevent a toxic reaction to one of the oldest and most widely used breast cancer medications on the market. Although tamoxifen is more easily tolerated compared to most cancer treatments, it nonetheless produces troubling side effects in a subset of the large number of people who take it. 

By studying tamoxifen’s impact on central nervous system cell populations and then screening a library of 1,040 compounds already in clinical use or clinical trials, his team identified a substance known as AZD6244, and showed that it essentially eliminated tamoxifen-induced killing of brain cells in mice.
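The screening logic described above reduces to a filter over the compound library: expose cells to tamoxifen plus each candidate and keep the compounds that rescue survival. The sketch below is a minimal illustration; the assay function and the 75 percent survival cutoff are assumptions, not values from the paper.

```python
def screen_for_protectors(compound_library, survival_with_tamoxifen, cutoff=0.75):
    """Keep compounds under which brain-cell survival (as a fraction,
    measured alongside tamoxifen) meets the cutoff."""
    return [c for c in compound_library
            if survival_with_tamoxifen(c) >= cutoff]
```

In practice the team started from 1,040 compounds, narrowed to 27 protective hits, and then singled out one; the filter above corresponds to the first narrowing step.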

“As far as I know, no one else has discovered an agent that singles out and protects brain and central nervous system cells while also not protecting cancer cells,” said Noble, who also collaborates with researchers at the UR’s James P. Wilmot Cancer Center. “This creates a whole new paradigm; it’s where we need to go.”

The research is the result of two separate but related projects from Noble’s lab. One investigates the science underlying a condition known as “chemo brain,” and another is looking at how to exploit tamoxifen’s attributes for use in other types of cancer besides early-stage, less-aggressive breast cancer. (The drug is a type of hormonal therapy, which works by stopping the growth of estrogen-sensitive tumors.)

In the Journal of Neuroscience paper, Noble’s team first identified central nervous system (CNS) cells that are most vulnerable to tamoxifen toxicity. Chief among these were oligodendrocyte-type 2 astrocyte progenitor cells (O-2A/OPCs), cells that are essential for making the insulating sheaths (called myelin) required for nerve cells to work properly. Exposure to clinically relevant levels of tamoxifen for 48 hours killed more than 75 percent of these cells.

In earlier work, while studying the biology of the cognitive difficulties that linger in some people being treated for cancer, Noble and colleagues discovered that 5-fluorouracil, cisplatin, cytarabine, carmustine, and multiple other types of chemotherapy damage populations of stem cells in the CNS. Published in the Journal of Biology (1, 2) in 2006 and 2008, these studies pioneered analysis of the biological foundations of chemo brain.

“It’s critical to find safe treatments that can rescue the brain from impairment,” Noble said, “because despite increasing awareness and research in this area, some people continue to endure short-term memory loss, mental cloudiness, and trouble concentrating. For some patients the effects wear off over time, but others experience symptoms that can lead to job loss, depression, and other debilitating events.”

Noble’s lab, led by post-doctoral fellow Hsing-Yu Chen, Ph.D., identified 27 drugs that protected O-2A/OPCs from the effects of tamoxifen. Further testing singled out AZD6244, a compound already identified by other laboratories as a potential cancer therapy.

The paper reported that in mice co-treated with tamoxifen plus AZD6244, cell death was prevented in the corpus callosum, the largest white-matter (myelinated) structure in the brain. Meanwhile, several national clinical trials are testing the safety and effectiveness of AZD6244 in treating multiple cancers, from breast and colon to melanoma and lung.

Researchers were also optimistic about finding that while AZD6244 protected brain cells, it did not also protect cancer cells. New drug compounds have greater value if they do not compromise the effects of existing treatments, and in this case, Noble said, the experiments in his laboratory agreed with studies by other research groups, who found that the combined use of AZD6244 and chemotherapy enhances targeting of cancer cells.

In future work, Noble’s group plans to identify the dosage of AZD6244 that provides maximum protection and minimum disruption to differentiating brain cells. Their research was supported by the U.S. Department of Defense, National Institutes of Health, Susan Komen Race for the Cure, and the Carlson Stem Cell Fund.

This is the second tamoxifen-related study to come from Noble’s lab in 2013. In April they showed in pre-clinical research they could leverage the drug’s various cellular activities so that it might work on more aggressive triple-negative breast cancer. In the journal EMBO Molecular Medicine, Noble and Chen also reported finding an experimental compound that enhances tamoxifen’s ability to work in this new way.

(Source: urmc.rochester.edu)

Filed under tamoxifen chemo brain corpus callosum CNS memory loss cancer neuroscience science

499 notes

Emotional attachment to robots could affect outcome on battlefield

Too busy to vacuum your living room? Let Roomba the robot do it. Don’t want to risk a soldier’s life to disable an explosive? Let a robot do it.

It’s becoming more common to have robots sub in for humans to do dirty or sometimes dangerous work. But researchers are finding that in some cases, people have started to treat robots like pets, friends, or even as an extension of themselves. That raises the question: if a soldier attaches human- or animal-like characteristics to a field robot, can it affect how they use the robot? What if they “care” too much about the robot to send it into a dangerous situation?

That’s what Julie Carpenter, who just received her UW doctorate in education, wanted to know. She interviewed Explosive Ordnance Disposal military personnel – highly trained soldiers who use robots to disarm explosives – about how they feel about the robots they work with every day. Part of her research involved determining if the relationship these soldiers have with field robots could affect their decision-making ability and, therefore, mission outcomes. In short, even though the robot isn’t human, how would a soldier feel if their robot got damaged or blown up?

What Carpenter found is that troops’ relationships with robots continue to evolve as the technology changes. Soldiers told her that attachment to their robots didn’t affect their performance, yet acknowledged they felt a range of emotions such as frustration, anger and even sadness when their field robot was destroyed. That makes Carpenter wonder whether outcomes on the battlefield could potentially be compromised by human-robot attachment, or the feeling of self-extension into the robot described by some operators. She hopes the military looks at these issues when designing the next generation of field robots.

Carpenter, who is now turning her dissertation into a book on human-robot interactions, interviewed 23 explosive ordnance personnel – 22 men and one woman – from all over the United States and from every branch of the military.

These troops are trained to defuse chemical, biological, radiological and nuclear weapons, as well as roadside bombs. They provide security for high-ranking officials, including the president, and are a critical part of security at large international events. The soldiers rely on robots to detect, inspect and sometimes disarm explosives, and to do advance scouting and reconnaissance. The robots are thought of as important tools to lessen the risk to human lives.

Some soldiers told Carpenter they could tell who was operating the robot by how it moved. In fact, some robot operators reported they saw their robots as an extension of themselves and felt frustrated with technical limitations or mechanical issues because it reflected badly on them.

The pros to using robots are obvious: They minimize the risk to human life; they’re impervious to chemical and biological weapons; they don’t have emotions to get in the way of the task at hand; and they don’t get tired like humans do. But robots sometimes have technical issues or break down, and they don’t have humanlike mobility, so it’s sometimes more effective for soldiers to work directly with explosive devices.

Researchers have previously documented just how attached people can get to inanimate objects, be it a car or a child’s teddy bear. While the personnel in Carpenter’s study all defined a robot as a mechanical tool, they also often anthropomorphized them, assigning robots human or animal-like attributes, including gender, and displayed a kind of empathy toward the machines.

“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said.

Many of the soldiers she talked to named their robots, usually after a celebrity or current wife or girlfriend (never an ex). Some even painted the robot’s name on the side. Even so, the soldiers told Carpenter the chance of the robot being destroyed did not affect their decision-making over whether to send their robot into harm’s way.

Soldiers told Carpenter their first reaction to a robot being blown up was anger at losing an expensive piece of equipment, but some also described a feeling of loss.

“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

The robots these soldiers currently use don’t look at all like a person or animal, but the military is moving toward more human and animal lookalike robots, which would be more agile, and better able to climb stairs and maneuver in narrow spaces and on challenging natural terrain. Carpenter wonders how that human or animal-like look will affect soldiers’ ability to make rational decisions, especially if a soldier begins to treat the robot with affection akin to a pet or partner.

“You don’t want someone to hesitate using one of these robots if they have feelings toward the robot that goes beyond a tool,” she said. “If you feel emotionally attached to something, it will affect your decision-making.”

Filed under emotional attachment robots robotics human-robot interaction neuroscience science

127 notes

Carbonation Alters the Mind’s Perception of Sweetness
Carbonation, an essential component of popular soft drinks, alters the brain’s perception of sweetness and makes it difficult for the brain to determine the difference between sugar and artificial sweeteners, according to a new article in Gastroenterology, the official journal of the American Gastroenterological Association.

"This study proves that the right combination of carbonation and artificial sweeteners can leave the sweet taste of diet drinks indistinguishable from normal drinks," said study author, Rosario Cuomo, associate professor, gastroenterology, department of clinical medicine and surgery, "Federico II" University, Naples, Italy. "Tricking the brain about the type of sweet could be advantageous to weight loss — it facilitates the consumption of low-calorie drinks because their taste is perceived as pleasant as the sugary, calorie-laden drink."

The study notes, however, a downside to this effect: the combination of carbonation and sugar may stimulate increased sugar and food consumption, since the brain perceives less sugar intake and energy balance is impaired. This interpretation might help explain the prevalence of eating disorders, metabolic diseases and obesity among diet-soda drinkers.

Investigators used functional magnetic resonance imaging to monitor changes in regional brain activity in response to naturally or artificially sweetened carbonated beverages. The findings reflect the brain's integration of information on gastric fullness and on nutrient depletion.

Future studies combining analysis of carbonation effect on sweetness detection in taste buds and responses elicited by the carbonated sweetened beverages in the gastrointestinal cavity will be required to further clarify the puzzling link between reduced calorie intake with diet drinks and increased incidence of obesity and metabolic diseases.

Filed under carbonation metabolic diseases sugar artificial sweeteners perception brain activity neuroscience science

71 notes

Ten-Year Project Redraws the Map of Bird Brains

Gene expression analysis shows bird brain an even better model for research.

Explorers need good maps, which they often end up drawing themselves. 

Pursuing their interests in using the brains of birds as a model for the human brain, an international team of researchers led by Duke neuroscientist Erich Jarvis and his collaborators Chun-Chun Chen and Kazuhiro Wada have just completed a mapping of the bird brain based on a 10-year exploration of the tiny cerebrums of eight species of birds. 

In a special issue appearing online in the Journal of Comparative Neurology, two papers (1, 2) from the Jarvis group propose a dramatic redrawing of some boundaries and functional areas based on a computational analysis of the activity of 52 genes across 23 areas of the bird brain.
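The papers’ pipeline is far richer, but the core idea of grouping brain areas by the similarity of their gene-expression profiles can be sketched in miniature. Everything below is invented for illustration: the region names, the expression values, and the nearest-neighbour grouping stand in for the studies’ actual computational analysis of 52 genes across 23 areas.

```python
import math

# Toy expression matrix: rows = brain regions, columns = genes.
# Values are made up; the real studies profiled 52 genes in 23 areas.
expression = {
    "hyperpallium": [2.1, 0.3, 1.8, 0.2],
    "mesopallium":  [2.0, 0.4, 1.7, 0.3],
    "striatum":     [0.2, 1.9, 0.1, 2.2],
}

def pearson(xs, ys):
    """Pearson correlation between two expression profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def nearest_neighbours(data):
    """Pair each region with its most similar region by expression profile."""
    out = {}
    for a in data:
        out[a] = max((b for b in data if b != a),
                     key=lambda b: pearson(data[a], data[b]))
    return out

# The two pallial regions end up paired with each other, not with striatum.
print(nearest_neighbours(expression))
```

A real analysis would cluster the full region-by-gene matrix (for example, with hierarchical clustering) rather than pairing each region with a single nearest neighbour.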

Jarvis, who is a professor of neurobiology at Duke, member of the Duke Institute for Brain Sciences, and a Howard Hughes Medical Institute investigator, said the most important takeaway from the new map is that the brains of all vertebrates, a group that includes birds as well as humans, have some important similarities that can be useful to research. 

Most significantly, the new map argues for and supports the existence of columnar organization in the bird brain. “Columnar organization is a rule, rather than an exception found only in mammals,” Jarvis said. “One way I visualize this view is that the avian brain is one big, giant gyrus folding around a ventricle space, functioning like what you’d find in the mammalian brain,” he said. 

To create different patterns of gene expression for the analysis, the birds were exposed to various environmental factors such as darkness or light, silence or bird song, hopping on a treadmill, and in the case of migratory warblers, a magnetic field that stimulated their navigational circuits.

The new map follows up on a 2004 model, proposed by an Avian Brain Nomenclature Consortium also led by Jarvis and colleagues, which officially overturned the prevailing century-old view that the avian brain contained mostly primitive regions. They argued instead that the avian brain has a cortical-like area and other forebrain regions similar to those of mammals, but organized differently.

"The change in terminology is small this time, but the change in concept is big," Jarvis said. For this special issue, the of Journal of Comparative Neurology commissioned a commentary by Juan Montiel and Zoltan Molnar, experts in brain evolution, to summarize the large amount of data presented in the studies by the Jarvis group.

One of the major findings is that two populations of cells on either side of a void called the ventricle are actually the same cell types with similar patterns of gene expression. Earlier investigators had thought of the ventricle as a physical barrier separating cell types, but in development studies led by Jarvis’ postdoctoral fellow Chun-Chun Chen, the Duke researchers showed how dividing cells spread in a sheet and flow around the ventricle as they multiply.

The new map simplifies the bird cortex, called the pallium, from seven populations of cells down to four major populations. Humans have five populations of cells in six layers.

Part of this refinement is simply that the tools are getting better, says Harvey Karten, a professor of neurosciences at the University of California-San Diego who proposed a dramatic re-thinking of bird cortical organization in the late 1960s. The best tools in that era were microscopes, specific cell stains and electrophysiology. Karten and colleagues are authors of a fourth paper in the special issue which announces a database of gene expression profiles of the avian brain containing some of the data that the Jarvis group used.

Jarvis said having a more specific map is necessary for properly sampling cell populations for gene expression analysis to do even more functional analysis of how the brain operates. As a next step, his team is considering doing an even more detailed bird map with “several hundred” genes rather than the 52 used to make this map.

Jarvis and colleagues are now working on a similar mapping of the crocodile brain, with the ultimate goal of being able to say something about how dinosaur brains were organized, since birds are descended from dinosaurs and crocodilians are their closest living relatives. At a Society for Neuroscience conference in November, they’ll be presenting some early findings from that project.

Though the specifics of this newest map may only be of interest within the bird research community, Jarvis said, it builds the awareness that birds can be a useful model for many questions about the human brain. 

"Where does the mammalian brain come from?" Karten asks. "And what’s the origin of these structures at the cellular and molecular level?" Some neuroscientists have argued that the mammalian cortex — the one we have — is something apart from the brains of other vertebrates. Jarvis and Karten now think vertebrate brains have more commonalities than differences. 

That awareness is making birds an ever more useful model for questions about the human brain. “There are very few animal models where you can learn — at the molecular level — what’s going on in vocal learning,” Karten said.  Birds are also being used as models for research on Parkinson’s, Huntington’s, deafness and other degenerative conditions in humans.

(Source: today.duke.edu)

Filed under avian brain brain mapping brain evolution neuroscience science

93 notes

Study finds that a subset of children often considered to have autism may be misdiagnosed

UC Davis MIND Institute research finds rigorous evaluations are needed to accurately diagnose autism in children with 22q11.2 deletion syndrome

Children with a genetic disorder called 22q11.2 deletion syndrome, who frequently are believed to also have autism, often may be misidentified because the social impairments associated with their developmental delay may mimic the features of autism, a study by researchers with the UC Davis MIND Institute suggests.

The study is the first to use rigorous, gold-standard diagnostic criteria to examine autism in children with chromosome 22q11.2 deletion syndrome, a group in whom the prevalence of autism has been reported at between 20 and 50 percent. The research found that none of the children with 22q11.2 deletion syndrome “met strict diagnostic criteria” for autism.

The researchers said the finding is important because treatments designed for children with autism, such as widely used discrete-trial training methods, may exacerbate the anxiety that is commonplace in this population.

Rather, evaluations should be performed to assess autism and guide the selection of appropriate therapies based on the children’s symptoms, such as language and communication delay, the researchers said. The study, “Social impairments in Chromosome 22q11.2 Deletion Syndrome (22q11.2DS): Autism Spectrum Disorder or a different Endophenotype?” is published online today in Springer’s Journal of Autism and Developmental Disorders.

A high prevalence of autism spectrum disorder has been reported in children with 22q11.2 deletion syndrome – as high as 50 percent based on parent-report measures. Children diagnosed with 22q11.2 deletion syndrome – or 22q – may experience mild to severe cardiac anomalies, weakened immune systems and malformations of the head and neck and the roof of the mouth, or palate. They also experience developmental delay, with IQs in the borderline-to-low-average range. They characteristically experience significant anxiety and appear socially awkward.

“The results of our study show that of the children involved in our study no child actually met strict diagnostic criteria for an autism spectrum disorder,” said Kathleen Angkustsiri, study lead author and assistant professor of developmental-behavioral pediatrics at the MIND Institute.

“This is very important because the literature cites rates of anywhere from 20 to 50 percent of children with the disorder also have an autism spectrum disorder. Our findings lead us to question whether this is the correct label for these children who clearly have social impairments. We need to find out what interventions are most appropriate for their difficulties.”

The disorder’s name describes its location on the 22nd chromosome as well as the nature of the genetic mutation, which is associated with a variety of anatomical and intellectual deficits. It has previously been known as Velocardiofacial Syndrome and as DiGeorge Syndrome, after the pediatric endocrinologist who described it in the 1960s.

The risk of 22q is about 1 in 2000 in the general population. The condition is seen in individuals of all backgrounds. Notably, people with 22q are at significantly heightened risk of developing mental-health disorders in adolescence and young adulthood. A person with 22q has a 30 times greater risk of developing schizophrenia than individuals in the general population.

“Because of the high rates of psychiatric disorders in childhood and adulthood, 22q is a very special population for prospective study looking at what’s happening throughout childhood that might either increase risk or provide protection against some of the later developing serious psychiatric illnesses, such as schizophrenia, that are associated with the disorder,” said Tony J. Simon, professor of psychiatry and behavioral sciences and director of the chromosome 22q11.2 deletion program at the MIND Institute.

The study was conducted among individuals recruited through the website of the Cognitive Analysis and Brain Imaging Laboratory (CABIL), which Simon directs. Simon and Angkustsiri said that the parents of children with 22q deletion syndrome often had commented that their children “seemed different” from other children with autism diagnoses, but that they hadn’t discovered a better diagnosis.

The clinical impression of the MIND Institute’s 22q deletion syndrome team, which includes psychologists Ingrid Leckliter and Janice Enriquez, was that the children were experiencing significant social impairments, but their presentation diverged from that of children with autism. To determine whether the children met the criteria for classic autism, they decided to test a subset of the children recruited from participants in a larger study of neurocognitive functioning, based on stringent methods and using multiple testing instruments.

The researchers selected 29 children –16 boys and 13 girls – for additional scrutiny, administering two tests. The Autism Diagnostic Observation Schedule (ADOS), a gold-standard assessment for autism, was administered to the children. The Social Communication Questionnaire (SCQ), a 40-question parent screening tool for communication and social functioning based on the gold-standard Autism Diagnostic Interview-Revised, was administered to their parents.

Typically, a diagnosis of autism spectrum disorder requires elevated scores on both a parent report measure, such as the SCQ, and a directly administered assessment such as the ADOS.  Prior studies of autism in chromosome 22q11.2 deletion syndrome have only used parent report measures.

Only five of the 29 children had scores in the elevated range on the ADOS diagnostic tool. Four of the five had significant anxiety. Only two – 7 percent – had SCQ scores above the cutoff. No child had both SCQ and ADOS scores in the ranges that would lead to an ASD diagnosis.
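The dual-criterion logic described above, in which a research classification of ASD requires elevated scores on both instruments, can be sketched as follows. The cutoff values and participant scores here are placeholders for illustration, not the study’s actual thresholds or data:

```python
# Placeholder cutoffs, NOT the study's actual thresholds.
SCQ_CUTOFF = 15   # parent-report screen
ADOS_CUTOFF = 7   # directly administered assessment

def meets_asd_criteria(scq_score, ados_score):
    """True only when BOTH instruments are in the elevated range."""
    return scq_score >= SCQ_CUTOFF and ados_score >= ADOS_CUTOFF

# Hypothetical participants: elevation on one instrument alone is not enough.
cohort = [
    {"id": 1, "scq": 18, "ados": 4},   # parent report elevated only
    {"id": 2, "scq": 9,  "ados": 10},  # direct assessment elevated only
    {"id": 3, "scq": 6,  "ados": 3},   # neither elevated
]
flagged = [p["id"] for p in cohort
           if meets_asd_criteria(p["scq"], p["ados"])]
print(flagged)  # prints [] — no one in this invented cohort meets both criteria
```

This mirrors the study’s point: relying on either measure alone would have flagged some children, but requiring both leaves none classified as having ASD.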

 “Over the years, a number of children came to us as part of the research or the clinical assessments that we perform, and their parents told us that they had an autism spectrum diagnosis. It’s quite clear that children with the disorder do have social impairments,” Simon said. “But it did seem to us that they did not have a classic case of autism spectrum disorder. They often have very high levels of social motivation. They get a lot of pleasure from social interaction, and they’re quite socially skilled.”

Simon said that the team also noted that the children’s social deficits might be more a function of their developmental delay and intellectual disability than autism.

“If you put them with their younger siblings’ friends they function very well in a social setting,” Simon continued, “and they interact well with an adult who accommodates their expectations for social interaction.”

Angkustsiri said that further study is needed to identify more appropriate treatments for children with 22q, such as improving their communication skills, treating their anxiety, and helping them remain focused and on task.

 “There are a variety of different avenues that might be pursued rather than treatments that are designed to treat children with autism,” Angkustsiri said. “There are readily available, evidence-based treatments that may be more appropriate to help maximize these children’s potential.”

(Source: ucdmc.ucdavis.edu)

Filed under ASD autism 22q11.2 deletion syndrome neurodevelopmental disorders neuroscience science

171 notes

Ability To Move To A Beat Linked To Brain’s Response To Speech

Study suggests musical training could possibly sharpen language processing

People who are better able to move to a beat show more consistent brain responses to speech than those with less rhythm, according to a study published in the September 18 issue of The Journal of Neuroscience. The findings suggest that musical training could possibly sharpen the brain’s response to language. 

Scientists have long known that moving to a steady beat requires synchronization between the parts of the brain responsible for hearing and movement. In the current study, Professor Nina Kraus, PhD, and colleagues at Northwestern University examined the relationship between the ability to keep a beat and the brain’s response to sound.

More than 100 teenagers from the Chicago area participated in the Kraus Lab study, where they were instructed to listen and tap their finger along to a metronome. The teens’ tapping accuracy was computed based on how closely their taps aligned in time with the “tic-toc” of the metronome. In a second test, the researchers used a technique called electroencephalography (EEG) to record brainwaves from a major brain hub for sound processing as the teens listened to the synthesized speech sound “da” repeated periodically over a 30-minute period. The researchers then calculated how similarly the nerve cells in this region responded each time the “da” sound was repeated.
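As a rough illustration of the two measures described, tapping accuracy as tap-to-beat alignment and neural consistency as trial-to-trial similarity of responses, here is a toy sketch. The function names and numbers are invented, and the Kraus Lab’s actual metrics are more sophisticated:

```python
import math

def tapping_accuracy(tap_times, beat_times):
    """Mean absolute asynchrony (seconds) between each tap and its
    nearest metronome beat; lower means tighter synchronization."""
    return sum(min(abs(t - b) for b in beat_times)
               for t in tap_times) / len(tap_times)

def response_consistency(trials):
    """Average pairwise Pearson correlation between single-trial
    responses; higher means a more consistent neural response."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    pairs = [(i, j) for i in range(len(trials))
             for j in range(i + 1, len(trials))]
    return sum(pearson(trials[i], trials[j]) for i, j in pairs) / len(pairs)

beats = [0.0, 0.5, 1.0, 1.5]      # metronome ticks at 120 bpm
taps = [0.02, 0.48, 1.05, 1.49]   # a fairly accurate tapper
print(round(tapping_accuracy(taps, beats), 3))  # prints 0.025
```

The study’s finding is a correlation between scores like these: teens with lower tap asynchrony tended to show higher trial-to-trial consistency in their brain response to “da.”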

“Across this population of adolescents, the more accurate they were at tapping along to the beat, the more consistent their brains’ response to the ‘da’ syllable was,” Kraus said. Because previous studies show a link between reading ability and beat-keeping ability as well as reading ability and the consistency of the brain’s response to sound, Kraus explained that these new findings show that hearing is a common basis for these associations. 

“Rhythm is inherently a part of music and language,” Kraus said. “It may be that musical training, with an emphasis on rhythmic skills, exercises the auditory system, leading to strong sound-to-meaning associations that are so essential in learning to read.”

John Iversen, PhD, who studies how the brain processes music at the University of California, San Diego, and was not involved with this study, noted that the findings raise the possibility that musical training may have important impacts on the brain. “This study adds another piece to the puzzle in the emerging story suggesting that musical rhythmic abilities are correlated with improved performance in non-music areas, particularly language,” he said.

Kraus’ group is now working on a multi-year study to evaluate the effects of musical training on beat synchronization, response consistency, and reading skills in a group of children engaging in musical training.

(Source: alphagalileo.org)

Filed under language processing musical training auditory system neuroscience psychology science

116 notes

Predicting Who Will Have Chronic Pain

Abnormalities in brain axons predispose people to chronic back pain after injury

Abnormalities in the structure of the brain predispose people to develop chronic pain after a lower back injury, according to new Northwestern Medicine® research. The findings could lead to changes in the way physicians treat patients’ pain.

Most scientists and clinicians have assumed chronic back pain stems from the site of the original injury.

“We’ve found the pain is triggered by these irregularities in the brain,” said A. Vania Apkarian, senior author of the study and a professor of physiology at Northwestern University Feinberg School of Medicine. “We’ve shown abnormalities in brain structure connections may be enough to push someone to develop chronic pain once they have an injury.”

Based on MRI brain scans of people who had a new lower back injury, Northwestern scientists could predict with about 85 percent accuracy which patients’ pain would persist. The predictor was a specific irregularity or marker the scientists identified in the axons, pathways in the brain’s white matter that connect brain cells so they can communicate with each other.

The findings provide a new view of treating chronic pain, which affects nearly 100 million Americans and costs up to $635 billion a year to treat.

“We think the people who are vulnerable need to be treated aggressively with medication early on to prevent their pain from becoming chronic,” Apkarian said. “Last year, we showed people who take medication early on had a better chance of recovering. Medication does help.” Apkarian also is a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

The research, funded by the National Institutes of Health, was published Sept. 16 in the journal Pain.

Brain abnormalities have been observed in other long-term chronic pain conditions. Apkarian’s study is the first to show brain structure abnormalities are a marker of a predisposition to chronic pain, not a result of living with it.

The lead author of the study is Ali Mansour, M.D., formerly a postdoctoral fellow in Apkarian’s lab.

Apkarian’s research focuses on the relationship between chronic pain and the brain. One of his previous studies showed chronic pain patients lose gray matter volume over time.

Chronic pain is one of the most expensive health care conditions in the U.S. and takes an enormous toll on quality of life, yet there still is not a scientifically validated therapy for the condition. Lower back pain represents 28 percent of all causes of pain in the U.S.; about 23 percent of these patients suffer long-term pain.

The abnormalities identified in the study were found in multiple white matter axon bundles, some surrounding the nucleus accumbens and medial prefrontal cortex, two brain regions involved in processing emotion and pain. Last year, the Apkarian group showed that the physiological properties of these two regions identify which patients will persist with back pain. The new results identify a pre-existing culprit for these physiological responses to the injury.

“The brain abnormalities exist in the general population, but only those people with a back injury go on to develop the chronic pain,” Apkarian said.

For the study, Apkarian and his colleagues scanned the brains of 46 people who had an episode of lower back pain for at least four weeks and had not experienced any pain for at least one year before that. Their pain had to be rated at least five out of 10 on a pain scale for them to be included in the study.

Scientists followed the patients for a year, scanning their brains at the onset of the study and one year later. After a year about half of them had improved, regardless of whether they took anything to treat the pain, and half of them continued to have pain. Those with persistent pain had the same structural abnormalities in their white matter at the onset of the injury and after one year.

“The abnormality makes them vulnerable and predisposes them to enhanced emotional learning that then amplifies the pain and makes it more emotionally significant,” Apkarian said.

“Pain is becoming an enormous burden on the public,” said Linda Porter, the pain policy advisor at the National Institute of Neurological Disorders and Stroke (NINDS) and a leader of the National Institutes of Health (NIH) Pain Consortium. “The U.S. government recently outlined steps to reduce the future burden of pain through broad-ranging efforts, including enhanced research. This study is a good example of the kind of innovative research we hope will reduce chronic pain, which affects a huge portion of the population.”

(Image: Shutterstock)

Filed under chronic pain white matter medial prefrontal cortex axons nucleus accumbens neuroimaging neuroscience science

80 notes

Study finds cognitive enhancers do not improve cognition or function in people with mild cognitive impairment but may cause gastrointestinal issues

Cognitive enhancers—drugs taken to enhance concentration, memory, alertness and moods—do not improve cognition or function in people with mild cognitive impairment in the long term, according to a new study by researchers at St. Michael’s Hospital.

In fact, patients on these medications experienced significantly more nausea, diarrhea, vomiting and headaches, according to the study published today in the Canadian Medical Association Journal.

“Our findings do not support the use of cognitive enhancers for mild cognitive impairment,” wrote Dr. Andrea Tricco and Dr. Sharon Straus, who are both scientists in the hospital’s Li Ka Shing Knowledge Institute. Dr. Straus is also a geriatrician at the hospital.

Mild cognitive impairment is a condition characterized by memory complaints without significant limitations in everyday activity. Between 3 and 42 per cent of people are diagnosed with the condition each year, about 4.6 million people worldwide. Each year about 3 to 17 per cent of people with mild cognitive impairment will develop dementia, such as Alzheimer’s disease. Given the aging population, it’s estimated the number of Canadians with dementia will double to more than 1 million in the next 25 years.

It has been hypothesized that cognitive enhancers may delay the onset of dementia. Families and patients are increasingly requesting these drugs even though their efficacy for patients with mild cognitive impairment has not been established. In Canada, cognitive enhancers can be obtained only with special authorization.

Drs. Tricco and Straus conducted a review of existing evidence to understand the efficacy and safety of cognitive enhancers. They looked at eight randomized trials that compared one of four cognitive enhancers (donepezil, rivastigmine, galantamine or memantine) to a placebo among patients diagnosed with mild cognitive impairment.

While they found short-term benefits to using these drugs on one cognition scale, there were no long-term effects after about a year and a half. No other benefits were observed on the second cognition scale or on function, behaviour, and mortality. As well, patients on these medications experienced significantly more nausea, diarrhea, vomiting and headaches. One study also found a higher risk of a heart condition known as bradycardia (slow heartbeat) among patients who received galantamine.

“Our results do not support the use of cognitive enhancers for patients with mild cognitive impairment,” the authors wrote. “These agents were not associated with any benefit and led to an increase in harms. Patients and their families should consider this information when requesting these medications. Similarly, health care decision-makers may not wish to approve the use of these medications for mild cognitive impairment, because these drugs might not be effective and are likely associated with harm.”

This study was funded by the Drug Safety and Effectiveness Network/Canadian Institutes of Health Research.

Another St. Michael’s study published in the CMAJ in April found no evidence that drugs, herbal products or vitamin supplements help prevent cognitive decline in healthy older adults. That review, led by Dr. Raza Naqvi, a University of Toronto resident, found some evidence that mental exercises, such as computerized memory training programs, might help.

Filed under alzheimer's disease dementia memory loss cognitive impairment neuroscience science
