Neuroscience

Articles and news from the latest research reports.

62 notes

Researchers Identify Brain Areas Activated by Itch-Relieving Drug

Areas of the brain that respond to reward and pleasure are linked to the ability of a drug known as butorphanol to relieve itch, according to new research led by Gil Yosipovitch, MD, Professor and Chair of the Department of Dermatology at Temple University School of Medicine (TUSM), and Director of the Temple Itch Center. The findings point to the involvement of the brain’s opioid receptors—widely known for their roles in pain, reward, and addiction—in itch relief, potentially opening new avenues for the development of treatments for chronic itch.


The article, published online September 11 in the Journal of Investigative Dermatology, is the first to show precisely where in the brain butorphanol works to relieve itch. In identifying those areas, the study helps to explain why butorphanol works better for chronic itching mediated by histamine, a small molecule involved in allergic reactions, than for nonhistamine-related types of itch.

"The research allows us to assess butorphanol’s effects," Dr. Yosipovitch said. "We can now identify better targets in the brain that drugs can work on to relieve itch."

The research marks an important step toward the development of itch-specific agents. As Dr. Yosipovitch explained, chronic itching, which affects roughly 12 percent of the population, comprises not just one disease, but many—ranging from atopic eczema and psoriasis to systemic diseases such as lymphoma and chronic liver failure. Biochemically, each of those diseases induces itching via one of two main pathways: one that is mediated by histamine and one that is not. Most pathological itching originates along nonhistaminergic pathways.

Working with Alexandru D. P. Papoiu, MD, PhD, at Wake Forest University School of Medicine, Dr. Yosipovitch experimentally induced itch in human volunteers using either histamine or cowhage, which incites nonhistaminergic itching. Study volunteers were then treated with either butorphanol or a placebo and subjected to functional magnetic resonance imaging (fMRI) to analyze brain activity and assess the effects of butorphanol (or placebo). When volunteers returned seven days later, they received the other treatment and again underwent fMRI.

Butorphanol suppressed histamine itching in all cases and reduced cowhage itching in 35 percent of subjects. The drug’s suppression of histamine itching was associated specifically with the activation of brain areas known as the nucleus accumbens and septal nuclei—areas located deep at the base of the forebrain. The regions are notably rich in so-called kappa (κ)-opioid receptors, on which butorphanol acts. By contrast, the relief of cowhage itch by butorphanol was linked to effects in other brain areas.

The findings suggest that butorphanol works primarily on κ-opioid receptors to suppress the itch sensation induced by histamine. But the drug also has important effects on an itch pathway that does not involve histamine, where the demand for new treatments is greatest.

How nonhistaminergic itching is reduced through the involvement of opioid receptors remains unclear. Opioid receptors modulate the transmission of itch information in the brain and occur at high levels in the brain areas that house neural pathways associated with reward. Reward pathways are known particularly for their response to pleasurable stimuli. Dr. Yosipovitch and Dr. Papoiu have shown in previous work that the activation of reward circuits is correlated with pleasurability and the degree of itch relief derived from self-scratching.

The new study, which Yosipovitch carried out at Wake Forest University prior to joining the TUSM faculty in 2013, further illustrates the power of applying imaging technologies to basic questions in itch research. At Temple’s Itch Center, Yosipovitch is continuing to explore those applications.

"We are in a position now to better understand the itch-scratch cycle," he said. "To break the cycle from the top down, knowing where to target receptors in the brain, would be a major achievement."

(Source: temple.edu)

Filed under nucleus accumbens opioid receptors butorphanol itching histamine neuroscience science

173 notes

Infant Cooing, Babbling Linked to Hearing Ability

Infants’ vocalizations throughout the first year follow a set of predictable steps from crying and cooing to forming syllables and first words. However, previous research had not addressed how the amount of vocalizations may differ between hearing and deaf infants. Now, University of Missouri research shows that infant vocalizations are primarily motivated by infants’ ability to hear their own babbling. Additionally, infants with profound hearing loss who received cochlear implants to help correct their hearing soon reached the vocalization levels of their hearing peers, putting them on track for language development.

“Hearing is a critical aspect of infants’ motivation to make early sounds,” said Mary Fagan, an assistant professor in the Department of Communication Science and Disorders in the MU School of Health Professions. “This study shows babies are interested in speech-like sounds and that they increase their babbling when they can hear.”

Fagan studied the vocalizations of 27 hearing infants and 16 infants with profound hearing loss who were candidates for cochlear implants, which are small electronic devices embedded into the bone behind the ear that replace some functions of the damaged inner ear. She found that infants with profound hearing loss vocalized significantly less than hearing infants. However, when the infants with profound hearing loss received cochlear implants, the infants’ vocalizations increased to the same levels as their hearing peers within four months of receiving the implants.

“After the infants received their cochlear implants, the significant difference in overall vocalization quantity was no longer evident,” Fagan said. “These findings support the importance of early hearing screenings and early cochlear implantation.”

Fagan found that non-speech-like sounds, such as crying, laughing and raspberry sounds, were not affected by infants’ hearing ability. She says this finding highlights that babies are more interested in speech-like sounds, since they increase their production of sounds such as babbling when they can hear.

“Babies learn so much through sound in the first year of their lives,” Fagan said. “We know learning from others is important to infants’ development, but hearing allows infants to explore their own vocalizations and learn through their own capacity to produce sounds.”

In future research, Fagan hopes to study whether infants explore the sounds of objects such as musical toys to the same degree they explore vocalization.

Fagan’s research, “Frequency of vocalization before and after cochlear implantation: Dynamic effect of auditory feedback on infant behavior,” was published in the Journal of Experimental Child Psychology.

Filed under hearing cochlear implant vocalizations language development psychology neuroscience science

88 notes

Identification of a protein that may increase the currently short therapeutic window in stroke

A new study published in The EMBO Journal shows that the mitochondrial protein Mfn2 may be a future therapeutic target for reducing neuronal death in the late phases of an ischemic stroke. The study was coordinated by Dr Francesc Soriano, Ramón y Cajal researcher at the Department of Cell Biology of the University of Barcelona (UB) and member of the Research Group Celltec UB. The study, funded by the Fundació La Marató de TV3, is part of the PhD thesis of Àlex Martorell Riera (UB), first author of the article. Antonio Zorzano and Manuel Palacín, from the Department of Biochemistry and Molecular Biology of UB and the Institute for Research in Biomedicine (IRB Barcelona), and Jesús Pérez Clausell and Manuel Reina, from the Department of Cell Biology of UB, also collaborated in the study.

When blood flow is blocked in the brain

According to the World Health Organization (WHO), stroke is the second leading cause of death worldwide. A stroke occurs when a blood vessel is blocked, interrupting blood flow in the brain. Stroke damage is progressive: it begins within minutes of the attack. The recommended treatment consists of restoring blood flow to the brain, but it must be done within the first four hours after the stroke.

According to researcher Francesc Soriano, “one of the main causes of brain cell death in stroke is an increase in glutamate, the main excitatory neurotransmitter in the central nervous system. Extracellular glutamate concentrations normally remain low due to the activity of membrane transporters, which require energy to work”.

When blood flow is blocked, energy levels drop in the affected area. This causes glutamate transporters to work in reverse, expelling glutamate into the extracellular space. Glutamate activates its receptors on neurons’ surfaces, particularly the N-methyl-D-aspartate (NMDA) receptor, triggering an excessive influx of calcium, a cascade of downstream reactions, and neuronal death, in a process known as excitotoxicity. “Many of these excitotoxic cascades converge on the mitochondrion, an organelle which plays a major role not only in energy production but also in apoptosis,” Soriano points out.

New therapeutic strategies against ischemic stroke

Specifically, Mfn2 is a mitochondrial protein involved in regulating the organelle’s morphology and function. The team led by Dr Francesc Soriano has discovered that Mfn2 protein levels drop four hours after the initiation of the excitotoxic process, both in vitro and in in vivo animal models.

In vivo experiments showed that if the Mfn2 reduction is prevented, delayed excitotoxic cell death is blocked. The research team from the Department of Cell Biology of UB found that the Mfn2 reduction is driven by a transcriptional mechanism (DNA being transcribed into RNA), and identified MEF2 as the transcription factor involved. The authors affirm that these findings are essential for finding a strategy to reverse the Mfn2 reduction.

Currently, the team led by Dr Francesc Soriano is studying brain damage under excitotoxic conditions in animal models in which the Mfn2 gene has been removed. The main objective is to design therapeutic strategies to reduce that damage.

Filed under stroke Mfn2 glutamate excitotoxicity cell death neuroscience science

74 notes

Compound from hops aids cognitive function in young animals

Xanthohumol, a type of flavonoid found in hops and beer, has been shown in a new study to improve cognitive function in young mice, but not in older animals.


The research was just published in Behavioural Brain Research by scientists from the Linus Pauling Institute and College of Veterinary Medicine at Oregon State University. It’s another step toward understanding, and ultimately reducing, the degradation of memory that happens with age in many mammalian species, including humans.

Flavonoids are compounds found in plants that often give them their color. The study of them – whether in blueberries, dark chocolate or red wine – has increased in recent years due to their apparent nutritional benefits on issues ranging from cancer to inflammation and cardiovascular disease. Several have also been shown to be important in cognition.

Xanthohumol has been of particular interest because of possible value in treating metabolic syndrome, a condition associated with obesity, high blood pressure and other concerns, including age-related deficits in memory. The compound has been used successfully to lower body weight and blood sugar in a rat model of obesity.

The new research studied the use of xanthohumol in high dosages, far beyond what could be obtained through diet alone. At least in young animals, it appeared to enhance their ability to adapt to changes in the environment. This cognitive flexibility was tested with a special type of maze designed for that purpose.

“Our goal was to determine whether xanthohumol could affect a process we call palmitoylation, which is a normal biological process but in older animals may become harmful,” said Daniel Zamzow, a former OSU doctoral student and now a lecturer at the University of Wisconsin/Rock County.

“Xanthohumol can speed the metabolism, reduce fatty acids in the liver and, at least with young mice, appeared to improve their cognitive flexibility, or higher level thinking,” Zamzow said. “Unfortunately it did not reduce palmitoylation in older mice, or improve their learning or cognitive performance, at least in the amounts of the compound we gave them.”

Kathy Magnusson, a professor in the OSU Department of Biomedical Sciences, principal investigator with the Linus Pauling Institute and corresponding author on this study, said that xanthohumol continues to be of significant interest for its biological properties, as are many other flavonoids.

“This flavonoid and others may have a function in the optimal ability to form memories,” Magnusson said. “Part of what this study seems to be suggesting is that it’s important to begin early in life to gain the full benefits of healthy nutrition.”

It’s also important to note, Magnusson said, that the levels of xanthohumol used in this study were only possible with supplements. As a fairly rare micronutrient, the only normal dietary source of it would be through the hops used in making beer, and “a human would have to drink 2000 liters of beer a day to reach the xanthohumol levels we used in this research.”

In this and other research, Magnusson has primarily focused on two subunits of the NMDA receptor, called GluN1 and GluN2B. Their decline with age appears to be related to a decreased ability to form and quickly recall memories.

In humans, many adults start to experience deficits in memory around the age of 50, and some aspects of cognition begin to decline around age 40, the researchers noted in their report.

(Source: oregonstate.edu)

Filed under cognitive function xanthohumol memory aging NMDA receptor animal studies neuroscience science

115 notes

Presence or absence of early language delay alters anatomy of the brain in autism

A new study led by researchers from the University of Cambridge has found that a common characteristic of autism – language delay in early childhood – leaves a ‘signature’ in the brain. The results are published today (23 September) in the journal Cerebral Cortex.

The researchers studied 80 adult men with autism: 38 who had delayed language onset and 42 who did not. They found that language delay was associated with differences in brain volume in a number of key regions, including the temporal lobe, insula and ventral basal ganglia, which were all smaller in those with language delay, and in brainstem structures, which were larger in those with delayed language onset.

Additionally, they found that current language function is associated with a specific pattern of grey and white matter volume changes in some key brain regions, particularly temporal, frontal and cerebellar structures.

The Cambridge researchers, in collaboration with King’s College London and the University of Oxford, studied participants who were part of the MRC Autism Imaging Multicentre Study (AIMS).

Delayed language onset – defined as when a child’s first meaningful words occur after 24 months of age, or their first phrase occurs after 33 months of age – is seen in a subgroup of children with autism, and is one of the clearest features triggering an assessment for developmental delay in children, including an assessment of autism.
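That cutoff-based definition amounts to a simple rule, sketched below (the function name and month-valued inputs are illustrative, not from the study's materials):

```python
def delayed_language_onset(first_words_months, first_phrase_months):
    """Operational definition used in the study: language onset is
    'delayed' if first meaningful words appear after 24 months of age,
    or the first phrase appears after 33 months."""
    return first_words_months > 24 or first_phrase_months > 33

# Example: first words at 30 months is classified as delayed onset.
print(delayed_language_onset(30, 28))
```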

“Although people with autism share many features, they also have a number of key differences,” said Dr Meng-Chuan Lai of the Cambridge Autism Research Centre, and the paper’s lead author. “Language development and ability is one major source of variation within autism. This new study will help us understand the substantial variety within the umbrella category of ‘autism spectrum’. We need to move beyond investigating average differences in individuals with and without autism, and move towards identifying key dimensions of individual differences within the spectrum.”

He added: “This study shows how the brain in men with autism varies based on their early language development and their current language functioning. This suggests there are potentially long-lasting effects of delayed language onset on the brain in autism.”

Last year, the American Psychiatric Association removed Asperger Syndrome (Asperger’s Disorder) as a separate diagnosis from its diagnostic manual (DSM-5), and instead subsumed it within ‘autism spectrum disorder.’ The change was one of many controversial decisions in DSM-5, the main manual for diagnosing psychiatric conditions.

“This new study shows that a key feature of Asperger Syndrome, the absence of language delay, leaves a long lasting neurobiological signature in the brain,” said Professor Simon Baron-Cohen, senior author of the study. “Although we support the view that autism lies on a spectrum, subgroups based on developmental characteristics, such as Asperger Syndrome, warrant further study.”

“It is important to note that we found both differences and shared features in individuals with autism who had or had not experienced language delay,” said Dr Lai. “When asking: ‘Is autism a single spectrum or are there discrete subgroups?’ – the answer may be both.”

Filed under autism language language development brain volume individual differences neuroscience science

147 notes

Brain Wave May Be Used to Detect What People Have Seen, Recognize

Brain activity can be used to tell whether someone recognizes details they encountered in normal, daily life, which may have implications for criminal investigations and use in courtrooms, new research shows.


The findings, published in Psychological Science, a journal of the Association for Psychological Science, suggest that a particular brain wave, known as P300, could serve as a marker that identifies places, objects, or other details that a person has seen and recognizes from everyday life.

Research using EEG recordings of brain activity has shown that the P300 brain wave tends to be large when a person recognizes a meaningful item among a list of nonmeaningful items. Using P300, researchers can give a subject a test called the Concealed Information Test (CIT) to try to determine whether they recognize information that is related to a crime or other event.

Most studies investigating P300 and recognition have been conducted in lab settings that are far removed from the kinds of information a real witness or suspect might be exposed to. This new study marks an important advance, says lead researcher John B. Meixner of Northwestern University, because it draws on details from activities in participants’ normal, daily lives.

“Much like a real crime, our participants made their own decisions and were exposed to all of the distracting information in the world,” he explains.

“Perhaps the most surprising finding was the extent to which we could detect very trivial details from a subject’s day, such as the color of umbrella that the participant had used,” says Meixner. “This precision is exciting for the future because it indicates that relatively peripheral crime details, such as physical features of the crime scene, might be usable in a real-world CIT — though we still need to do much more work to learn about this.”

To achieve a more realistic CIT, Meixner and co-author J. Peter Rosenfeld outfitted 24 college student participants with small cameras that recorded both video and sound — the students wore the cameras clipped to their clothes for 4 hours as they went about their day.

For half of the students, the researchers used the recordings to identify details specific to each person’s day, which became “probe” items for that person. The researchers also came up with corresponding, “irrelevant” items that the student had not encountered — if the probe item was a specific grocery store, for example, the irrelevant items might include other grocery stores.

For the other half of the students, the “probe” items related to details or items they had not encountered, but which were instead drawn from the recordings of other participants. The researchers wanted to simulate a real investigation, in which a suspect with knowledge of a crime would be shown the same crime-related details as a suspect who may have no crime-related knowledge.

The next day, all of the students returned to the lab and were shown a series of words that described different details or items (i.e., the probe and irrelevant items), while their brain activity was recorded via EEG.

The results showed that the P300 was larger for probe items than for irrelevant items, but only for the students who had actually seen or encountered the probe.

Further analyses revealed that P300 responses effectively distinguished probe items from irrelevant items on the level of each individual participant, suggesting that it is a robust and reliable marker of recognition.
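The per-participant logic can be illustrated with a rough sketch (this is not the authors' actual statistical pipeline; the bootstrap criterion, simulated amplitudes and threshold below are assumptions): for each subject, compare the mean P300 amplitude across probe trials against irrelevant trials.

```python
import numpy as np

def probe_recognized(probe_amps, irrelevant_amps, n_boot=10_000, seed=0):
    """Toy per-participant CIT decision: is the mean P300 amplitude for
    probe items reliably larger than for irrelevant items? Uses a simple
    bootstrap of the difference in mean amplitudes."""
    rng = np.random.default_rng(seed)
    probe_amps = np.asarray(probe_amps, dtype=float)
    irrelevant_amps = np.asarray(irrelevant_amps, dtype=float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        p = rng.choice(probe_amps, size=probe_amps.size, replace=True)
        q = rng.choice(irrelevant_amps, size=irrelevant_amps.size, replace=True)
        diffs[i] = p.mean() - q.mean()
    # Call it "recognized" if at least 95% of bootstrap differences are positive.
    return bool((diffs > 0).mean() >= 0.95)

# Simulated single-trial amplitudes (microvolts): for a participant who
# actually encountered the probe, probe trials carry a larger P300.
rng = np.random.default_rng(42)
probe = rng.normal(8.0, 2.0, size=30)        # probe-item trials
irrelevant = rng.normal(4.0, 2.0, size=30)   # irrelevant-item trials
print(probe_recognized(probe, irrelevant))
```

With well-separated amplitude distributions the decision comes out positive; when probe and irrelevant trials are drawn from the same distribution, it does not.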

These findings have implications for memory research, but they may also have real-world application in the domain of criminal law given that some countries, like Japan and Israel, use the CIT in criminal investigations.

“One reason that the CIT has not been used in the US is that the test may not meet the criteria to be admissible in a courtroom,” says Meixner. “Our work may help move the P300-based CIT one step closer to admissibility by demonstrating the test’s validity and reliability in a more realistic context.”

Meixner, Rosenfeld, and colleagues plan on investigating additional factors that may impact detection, including whether images from the recordings may be even more effective at eliciting recognition than descriptive words – preliminary data suggest this may be the case.

Filed under memory eyewitness memory brain activity neuroimaging P300 psychology neuroscience science

175 notes

Dying brain cells cue new brain cells to grow in songbird

Brain cells that multiply to help birds sing their best during breeding season are known to die back naturally later in the year. For the first time, researchers have described the series of events that cues new neuron growth each spring, and it all appears to start with a signal from the expiring cells the previous fall that primes the brain to start producing stem cells.

If scientists can further tap into the process and understand how those signals work, it might lead to ways to exploit these signals and encourage replacement of cells in human brains that have lost neurons naturally because of aging, severe depression or Alzheimer’s disease, said Tracy Larson, a University of Washington doctoral student in biology. She’s lead author of a paper in the Sept. 23 Journal of Neuroscience on brain cell birth that follows natural brain cell death.

Neuroscientists have long known that new neurons are generated in the adult brains of many animals, but the birth of new neurons – or neurogenesis – appears to be limited in mammals and humans, especially where new neurons are generated after there’s been a blow to the head, a stroke or some other physical loss of brain cells, Larson said. That process, referred to as “regenerative” neurogenesis, has been studied in mammals since the 1990s.

This is the first published study to examine the brain’s ability to replace cells that have been lost naturally, Larson said.

“Many neurodegenerative disorders are not injury-induced,” the co-authors write, “so it is critical to determine if and how reactive neurogenesis occurs under non-injury-induced neurodegenerative conditions.”

The researchers worked with Gambel’s white-crowned sparrows, a medium-sized species 7 inches (18 centimeters) long that breeds in Alaska, then winters in California and Mexico. Sometimes traveling in flocks of more than 100 birds, they can be so plentiful in parts of California that they are considered pests. The ones in this work came from Eastern Washington.

Like most songbirds, Gambel’s white-crowned sparrows experience growth in the area of the brain that controls song output during the breeding season, when a superior song helps them attract mates and define their territories. At the end of the season, probably because the extra cells exact a toll in the energy and steroids they require, the cells begin dying naturally and the bird’s song degrades.

Gambel’s white-crowned sparrows are particularly good to work with because their breeding cycle is closely tied to the amount of sunlight they receive. Give them 20 hours of light in the lab, along with the right increase in steroids, and they are ready to breed. Cut the light to eight to 12 hours and taper the steroids, and the breeding behavior ends.

“As the hormone levels decrease, the cells in the part of the brain controlling song no longer have the signal to ‘stay alive,’” Larson said. “Those cells undergo programmed cell death – or cell suicide as some call it. As those cells die it is likely they are releasing some kind of signal that somehow gets transmitted to the stem cells that reside in the brain. Whatever that signal is then triggers those cells to divide and replace the loss of the cell that sent the signal to begin with.”

The next spring, all that’s needed is for steroids to ramp up and new cells start to proliferate in the song center of the brain.

“This paper doesn’t describe the exact nature of the signals that stimulate proliferation,” Larson said. “We’re just describing the phenomenon that there is this connection between cells dying and this stem cell proliferation. Finding the signal is the next step.”

“Tracy really nailed this down by going in and blocking cell death at the end of the breeding season,” said Eliot Brenowitz, UW professor of psychology and of biology, and co-author on the paper. “There are chemicals you can use to turn off the cell suicide pathway. When this was done, far fewer stem cells divided. You don’t get that big uptick in new neurons being born. That’s important because it shows there’s something about the cells dying that turns on the replacement process.”

“There’s no reason to think what goes on in a bird brain doesn’t also go on in mammal brains, in human brains,” Brenowitz said. “As far as we know, the molecules are the same, the pathways are the same, the hormones are the same. That’s the ultimate purpose of all this, to identify these molecular mechanisms that will be of use in repairing human brains.”

In mammals, the area of the brain that controls the sense of smell and the one thought to play a role in memories can produce small numbers of new brain cells, but it is not understood how or why. The number of new cells is so low that identifying and quantifying whether dying cells are being replaced, and if so through what steps, is much more difficult than in a songbird like Gambel’s white-crowned sparrow, Larson and Brenowitz said.

Filed under songbirds brain cells neurogenesis cell death neuroscience science

142 notes

Taste memory

Have you ever eaten something totally new and it made you sick? Don’t give up; if you try the same food in a different place, your brain will be more “forgiving” of the new attempt. In a new study conducted by the Sagol Department of Neurobiology at the University of Haifa, researchers found for the first time that there is a link between the areas of the brain responsible for taste memory in a negative context and those areas in the brain responsible for processing the memory of the time and location of the sensory experience. When we experience a new taste without a negative context, this link doesn’t exist.

The area of the brain responsible for storing memories of new tastes is the taste cortex, found in a relatively insulated area of the human brain known as the insular cortex. The area responsible for formulating a memory of the place and time of the experience (the episode) is the hippocampus. Until now, researchers assumed that there was no direct connection between these areas – i.e., the processing of information about a taste is not related to the time or the place one experiences the taste. The accepted thinking was that a negative experience – for example, being exposed to a bad taste – would be negative in the same way anywhere, and the brain would create a memory of the taste itself, divorced from the time or place.

But in this new study, conducted by doctoral student Adaikkan Chinnakkaruppan in the laboratory of Prof. Kobi Rosenblum of the Sagol Department of Neurobiology at the University of Haifa, in cooperation with the Riken Institute, the leading brain research institute in Tokyo, the researchers demonstrate for the first time that there is a functional link between the two brain regions.

In the study the researchers sought to examine the relationship between the taste cortex (which is responsible for taste memory) and three different areas in the hippocampus: CA1, which is responsible for encoding the concept of space (where we are located); DG, the area responsible for encoding the time relationship between events; and CA3, responsible for filling in missing information. To do this, the researchers took ordinary mice and mice genetically engineered by their Japanese colleagues so that these three areas of the brain functioned normally but lacked plasticity, preventing new memories that rely on them from being formed.

“In brain research, the manipulation we do must be very delicate and precise, otherwise the changes can make the entire experiment irrelevant to proving or refuting the research hypothesis,” said Prof. Rosenblum.

The mice were exposed to two new tastes, one that caused stomach pains (to mimic exposure to toxic food) and another that didn’t cause that feeling. By comparing the two groups it emerged that when the new taste was not accompanied by an association with toxic food, there was no difference between the normal mice and those whose various functional areas in the hippocampus didn’t allow plasticity. But when the taste caused a negative feeling, there was clear involvement of the CA1 area, which is responsible for encoding the space.

“The significance of this is that the moment we go back to the same place at which we experienced the taste associated with a bad feeling, subconsciously the negative memory will be much stronger than if we come to taste the same taste in a totally different place,” explained Prof. Rosenblum. Similarly, the DG area, which is responsible for encoding the time between incidents, became more involved the more time that passed between the new taste and the stomach discomfort. “This means that even during simple associative taste learning, the brain recruits the hippocampus to produce an integrated experience that includes general information about the time between events and their location,” he said.
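The pattern the researchers report can be summarized in a small sketch. This is purely illustrative, not a model from the paper; the function name, its arguments, and the component labels are invented for clarity.

```python
def memory_components(negative_outcome, delay_between_taste_and_malaise=0.0):
    # The insular (taste) cortex stores the taste itself in every case.
    components = {"insular_cortex": "taste identity"}
    if negative_outcome:
        # CA1 (spatial encoding) is recruited only when the taste is paired
        # with malaise, binding the memory to where it happened.
        components["CA1"] = "where the taste was experienced"
        if delay_between_taste_and_malaise > 0:
            # DG (temporal encoding) becomes involved when time separates
            # the taste from the stomach discomfort.
            components["DG"] = "time elapsed between taste and malaise"
    return components

print(memory_components(False))
print(memory_components(True, delay_between_taste_and_malaise=30.0))
```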

The findings, which were recently published in the Journal of Neuroscience, expose the complexity and richness of the simple sensory experiences that are engraved in our brains and that in most cases we aren’t even aware of. Moreover, the study can help explain behavioral results and the difficulty in forming memories when certain areas of the brain become dysfunctional following an illness or accident. The better we understand how simple sensory experiences are encoded in the brain, and how the feeling, time and place of an experience are linked, the better we will understand the complex process of creating memories and storing them in our brains.

(Source: newmedia-eng.haifa.ac.il)

Filed under taste taste learning hippocampus insular cortex plasticity neuroscience science

207 notes

Blood test may help determine who is at risk for psychosis

A study led by University of North Carolina at Chapel Hill researchers represents an important step forward in the accurate diagnosis of people who are experiencing the earliest stages of psychosis.

Psychosis includes hallucinations or delusions that mark the development of severe mental disorders such as schizophrenia. Schizophrenia emerges in late adolescence and early adulthood and affects about 1 in every 100 people. In severe cases, the impact on a young person can be a compromised life, and the burden on family members can be almost as severe.

The study, published in the journal Schizophrenia Bulletin, reports preliminary results showing that a blood test, when used in psychiatric patients experiencing symptoms considered to be indicators of a high risk for psychosis, identified those who later went on to develop psychosis.

“The blood test included a selection of 15 measures of immune and hormonal system imbalances as well as evidence of oxidative stress,” said Diana O. Perkins, MD, MPH, professor of psychiatry in the UNC School of Medicine and corresponding author of the study. She is also medical director of UNC’s Outreach and Support Intervention Services (OASIS) program for schizophrenia.

“While further research is required before this blood test could be clinically available, these results provide evidence regarding the fundamental nature of schizophrenia, and point towards novel pathways that could be targets for preventative interventions,” Perkins said.

Clark D. Jeffries, PhD, bioinformatics scientist at the UNC-based Renaissance Computing Institute (RENCI), is a co-author of the study, which was conducted as part of the North American Prodrome Longitudinal Study (NAPLS), an international effort to understand risk factors and mechanisms for development of psychotic disorders.

“Modern, computer-based methods can readily discover seemingly clear patterns from nonsensical data,” said Jeffries. “Added to that, scientific results from studies of complex disorders like schizophrenia can be confounded by many hidden dependencies. Thus, stringent testing is necessary to build a useful classifier. We did that.”

The study concludes that the multiplex blood assay, if independently replicated and if integrated with studies of other classes of biomarkers, has the potential to be of high value in the clinical setting.

(Image: Shutterstock)
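Jeffries’ point about stringent testing can be illustrated with a minimal sketch of one common safeguard, leave-one-out cross-validation, in which every sample is scored by a model that never saw it during training. The single-biomarker threshold classifier and all values below are hypothetical and far simpler than the 15-analyte assay in the study.

```python
def predict(train, x):
    # Classify x as 1 (later developed psychosis) if it lies above the
    # midpoint between the two class means of the training set.
    mean0 = sum(v for v, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    mean1 = sum(v for v, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return 1 if x > (mean0 + mean1) / 2 else 0

def loo_accuracy(data):
    # Leave-one-out: hold out each sample in turn and score it with a model
    # fit on the remaining samples, so accuracy reflects unseen data.
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += predict(train, x) == y
    return hits / len(data)

# Hypothetical biomarker level paired with outcome (1 = developed psychosis):
samples = [(0.9, 0), (1.1, 0), (1.0, 0), (2.1, 1), (1.9, 1), (2.0, 1)]
print(loo_accuracy(samples))  # -> 1.0
```

On data where the biomarker carries no real signal, the same procedure drives held-out accuracy toward chance, which is what separates a genuine classifier from a pattern found in nonsense data.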

Filed under oxidative stress psychosis schizophrenia blood test inflammation neuroscience science

214 notes

Neuroscientists challenge long-held understanding of the sense of touch

Different types of nerves and skin receptors work in concert to produce sensations of touch, University of Chicago neuroscientists argue in a review article published Sept. 22, 2014, in the journal Trends in Neurosciences. Their assertion challenges a long-held principle in the field — that separate groups of nerves and receptors are responsible for distinct components of touch, like texture or shape. They hope to change the way somatosensory neuroscience is taught and how the science of touch is studied.

Sliman Bensmaia, PhD, assistant professor of organismal biology and anatomy at the University of Chicago, and Hannes Saal, PhD, a postdoctoral scholar in Bensmaia’s lab, reviewed more than 100 research studies on the physiological basis of touch published over the past 57 years. They argue that evidence once thought to show that different groups of receptors and nerves, or afferents, were responsible for conveying information about separate components of touch to the brain actually demonstrates that these afferents work together to produce the complex sensation.

"Any time you touch an object, all of these afferents are active together," Bensmaia said. "They each convey information about all aspects of an object, whether it’s the shape, the texture, or its motion across the skin."

Three different types of afferents convey information about touch to the brain: slowly adapting type 1 (SA1), rapidly adapting (RA) and Pacinian (PC). According to the traditional view, SA1 afferents are responsible for communicating information about the shape and texture of objects, RA afferents help sense motion and grip control, and PC afferents detect vibrations.

In the past, Bensmaia said, this classification system has been supported by experiments using mechanical devices to elicit one or more of these specific components of touch. For example, responses to texture are often generated using a rotating, cylindrical drum covered with a Braille-like pattern of raised dots. Study subjects would place a finger on the drum as it rotated, and scientists recorded the neural responses.

Such experiments showed that SA1 afferents responded very strongly to this artificial stimulus, and RA and PC afferents did not, hence the association of SA1s with texture. However, in experiments in which subjects moved a finger across sandpaper — the quintessential example of the type of textures we encounter in the real world — SA1 afferents did not respond at all.

Bensmaia also pointed out discrepancies in the predominant thinking about how we discern shape. Perception of shapes has generally been tested using devices with raised or embossed letters to test a subject’s ability to interpret text by touch. These experiments also showed that such inputs produced a strong SA1 response, so SA1s were implicated in perception of shape as well.

In the 1980s, however, researchers developed a device meant to help blind people read by generating vibrating patterns in the shape of letters on an array of pins. While the device was not a commercial success, people were able to use it to detect letter shapes and read, although experiments showed that it activated RA and PC afferents, not the supposedly shape-detecting SA1s.

Bensmaia said such experiments show how devices created to generate artificial stimuli focusing on individual components of the sense of touch can result in misleading findings. Some types of afferents are better than others at detecting texture or shape, for example, but all of them respond in their own way and contribute to the overall sensation.

"To get a good picture of how stimulus information is being conveyed in these afferent populations, you have to look at a diverse set of stimuli that spans the range of what you might feel in everyday tactile experience," he said.

Instead of thinking of individual groups of afferents working separately to process different components of the sense of touch, Bensmaia said we should think of all of them working in concert, much like individual musicians in a band creating its overall sound. Each musician contributes in his or her own way. Emphasizing one instrument or removing another can change the character of a song, but no single sound is responsible for the entire performance.

Adopting this new way of thinking will have far-reaching implications for both the study of the sense of touch and the design of future research, Bensmaia said.

"I think it’s going to change neuroscience textbooks, and by extension it’s going to change the way somatosensory neuroscience is taught. It’s really the starting point for everything."
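Bensmaia’s band analogy maps naturally onto a population-coding sketch. The sensitivity numbers below are invented for illustration, not measured values; the point is only that each afferent class responds to every stimulus feature to some degree, so the percept is carried by the combined pattern across classes rather than by one dedicated channel.

```python
# Hypothetical response gain of each afferent class to each stimulus feature.
SENSITIVITY = {
    "SA1": {"texture": 0.9, "shape": 0.7, "vibration": 0.1},
    "RA":  {"texture": 0.5, "shape": 0.4, "vibration": 0.6},
    "PC":  {"texture": 0.2, "shape": 0.1, "vibration": 0.9},
}

def population_response(stimulus):
    # Firing pattern across all three afferent classes for one stimulus:
    # every class contributes, weighted by its sensitivity to each feature.
    return {
        afferent: sum(gain * stimulus.get(feature, 0.0)
                      for feature, gain in gains.items())
        for afferent, gains in SENSITIVITY.items()
    }

# A texture-rich stimulus (like sandpaper) with some vibration from movement:
print(population_response({"texture": 1.0, "vibration": 0.5}))
```

Note that no class is silent: removing any one of them would change the overall pattern, just as removing one instrument changes a band’s sound without any single instrument being "the song."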

Filed under sense of touch perception somatosensory cortex neuroscience science
