Neuroscience

Articles and news from the latest research reports.


How Alzheimer’s could occur

Protein spheres in the nucleus give wrong signal for cell division


RUB researchers develop new hypothesis for the degeneration of nerve cells

Researchers in Bochum have developed a new hypothesis for how Alzheimer’s disease could occur. They analysed the interaction of the proteins FE65 and BLM, which regulate cell division. In a cell culture model, they discovered spherical structures in the nucleus that contained FE65 and BLM. The interaction of the proteins triggered an erroneous signal for cell division, which may explain the degeneration and death of nerve cells in Alzheimer’s patients. The team led by Dr. Thorsten Müller and Prof. Dr. Katrin Marcus from the Department of Functional Proteomics, in cooperation with the RUB’s Medical Proteome Centre headed by Prof. Helmut E. Meyer, reported the results in the Journal of Cell Science.

Components of spherical structures in the nucleus identified

The so-called amyloid precursor protein (APP) is central to Alzheimer’s disease. It spans the cell membrane, and its cleavage products are linked to the protein deposits that form outside the nerve cells of Alzheimer’s patients. APP anchors the protein FE65, the focus of the current study, to the membrane. FE65 can migrate into the nucleus, where it plays a role in DNA replication and repair. Using cells grown in the laboratory, the team led by Dr. Müller established that FE65 can unite with other proteins in the cell nucleus to form spherical structures, so-called “nuclear spheres”. Video microscopy showed that these ring-like structures merge with each other and can thus grow. “By using a special cell culture model, we were able to identify additional components of these spheres,” says Andreas Schrötter, PhD student in the working group Morbus Alzheimer at the Institute for Functional Proteomics. Among other things, the scientists found the protein BLM, which is known from Bloom’s syndrome, an extremely rare hereditary disease associated with dwarfism, immunodeficiency, and an increased risk of cancer. BLM is involved in DNA replication and repair in the nucleus.

The amount of FE65 determines the amount of BLM in the cell nucleus

Müller’s team took a closer look at the function of FE65. By means of genetic manipulation, the researchers generated cell cultures in which FE65 production was reduced. A smaller amount of FE65 led to a smaller amount of the protein BLM in the nucleus; instead, BLM collected in another area of the cell, the endoplasmic reticulum. In addition, the researchers found a lower rate of DNA replication in the genetically modified cells. FE65 thus influences the replication of the genetic material via the BLM protein. When the researchers cranked up FE65 production again, the amount of BLM in the nucleus also increased.

FE65 as a possible trigger for Alzheimer’s

In patients with Alzheimer’s disease, the protein APP, an interaction partner of FE65, changes. The interaction of the two molecules is important for the transport of FE65 into the nucleus, where it regulates cell division in combination with BLM. Müller’s team assumes that the altered APP-FE65 interaction mistakenly sends cells the signal to divide. Since nerve cells normally cannot divide, they degenerate and die instead. “This hypothesis, which we pursue in the working group Morbus Alzheimer, also delivers new starting points for potential therapies, which are urgently needed for Alzheimer’s disease,” says Dr. Müller. In the future, the team will also investigate whether and how the amount of BLM is altered in Alzheimer’s patients compared to healthy subjects.

(Source: alphagalileo.org)

Filed under alzheimer's disease neurodegeneration nerve cells amyloid precursor protein neuroscience science


Researchers show brain’s battle for attention

We’ve all been there: You’re at work deeply immersed in a project when suddenly you start thinking about your weekend plans. It happens because behind the scenes, parts of your brain are battling for control.


Now, University of Florida researchers and their colleagues are using a new technique that allows them to examine how parts of the brain battle for dominance when a person tries to concentrate on a task. Addressing these fluctuations in attention may help scientists better understand many neurological disorders such as autism, depression and mild cognitive impairment.

Mingzhou Ding, a professor of biomedical engineering, and Xiaotong Wen, an assistant research scientist of biomedical engineering, both of the University of Florida; Yijun Liu of the McKnight Brain Institute of the University of Florida and Peking University, Beijing; and Li Yao of Beijing Normal University, report their findings in the current issue of The Journal of Neuroscience.

Scientists know different networks within the brain have distinct functions. Ding, Wen and their colleagues used a brain imaging technique called functional magnetic resonance imaging and biostatistical methods to examine interactions between a set of areas they call the task control network and another set of areas known as the default mode network.

The task control network regulates attention to surroundings, controlling concentration on a task such as doing homework, or listening for emotional cues during a conversation. The default mode network is thought to regulate self-reflection and emotion, and often becomes active when a person seems to be doing nothing else.

“We knew that the default mode network decreases in activity when a task is being performed, but we didn’t know why or how,” said Ding, a professor in UF’s J. Crayton Pruitt Family Department of Biomedical Engineering. “We also wanted to know what is driving that activity decrease.

“For a long time, the questions we are asking could not be answered.”

In the past, researchers could not distinguish between directions of interactions between regions of the brain, and could come up with only one number to represent an average of the back-and-forth interactions. Ding and his colleagues used a new technique to untangle the interactions in each direction to show how the different brain regions interact with one another.

In their study, the researchers used fMRI to examine the brains of people performing a task that required concentration. fMRI lets the scientists see which areas of the brain are active while a person performs a given task and correlate that activity with how successful the person is at the task. They then applied the Granger causality technique to the fMRI data. Named for Nobel Prize-winning economist Clive Granger, this technique allows scientists to examine how one variable affects another; in this case, how one region of the brain influences another.
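The directed-influence idea behind Granger causality can be sketched in a few lines of NumPy. This is a toy illustration with simulated signals, not the authors’ actual analysis pipeline: a variable x “Granger-causes” y if x’s past improves prediction of y beyond what y’s own past provides.

```python
import numpy as np

def granger_f(effect, cause):
    # Does the lag-1 history of `cause` improve prediction of `effect`
    # beyond `effect`'s own lag-1 history? Larger F = stronger influence.
    y = effect[1:]
    ones = np.ones_like(y)
    restricted = np.column_stack([ones, effect[:-1]])        # effect's past only
    full = np.column_stack([ones, effect[:-1], cause[:-1]])  # plus cause's past
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)
    dof = len(y) - full.shape[1]
    return (rss(restricted) - rss(full)) / (rss(full) / dof)

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)            # driving signal
y = np.zeros(n)
for t in range(1, n):                 # y follows x with a one-step delay
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_x_drives_y = granger_f(effect=y, cause=x)  # large: x's past predicts y
f_y_drives_x = granger_f(effect=x, cause=y)  # small: y's past adds nothing
```

Because the influence is measured separately in each direction, the asymmetry (a large F one way, a small F the other) is exactly what lets researchers untangle which region is driving which.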

“People have hypothesized different functions for signals going in different directions,” Ding said. “We show that when the task control network suppresses the default mode network, the person can do the task better and faster. The better the default mode network is shut down, the better a person performs.”

However, when the default mode network is not sufficiently suppressed, it sends signals to the task control network that effectively distract the person, causing his or her performance to drop. So while the task control network suppresses the default mode network, the default mode network also interferes with the task control network.

“Your brain is a constant seesaw back and forth,” even when trying to concentrate on a task, Ding said.

The Granger causality technique may help researchers learn more about how neurological disorders work. Researchers have found that the default mode network remains unchanged in people with autism whether they are performing a task or interacting with the environment, which could explain symptoms such as difficulty reading social cues or being easily overwhelmed by sensory stimulation. Scientists have made similar findings with depression and mild cognitive impairment. However, until now no one has been able to address what areas of the brain might be regulating the default mode network and which might be interfering with that regulation.

“Now we are able to address these questions,” Ding said.

(Source: news.ufl.edu)

Filed under brain attention emotional cues neurological disorders brain imaging concentration neuroscience science


Despite what you may think, your brain is a mathematical genius

The irony of getting away to a remote place is you usually have to fight traffic to get there. After hours of dodging dangerous drivers, you finally arrive at that quiet mountain retreat, stare at the gentle waters of a pristine lake, and congratulate your tired self on having “turned off your brain.”

"Actually, you’ve just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at the Salk Institute and an expert on how the visual system works. "You may think you’re resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment: what objects are in it, are they moving, and if so, how fast are they moving?"

The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk’s Vision Center Laboratory. “It’s a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?”

Albright, Gepshtein, and Luis A. Lesmes, a specialist in measuring human performance and a former Salk Institute postdoctoral researcher now at the Schepens Eye Research Institute, proposed an answer to the question in a recent issue of Proceedings of the National Academy of Sciences. It may reconcile the puzzling contradictions in many previous studies.

Previously, scientists expected that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on that lake. Yet those who tried to confirm that idea were surprised when their experiments produced contradictory results. “Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus,” says Albright, holder of Salk’s Conrad T. Prebys Chair in Vision Research.

The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from a system’s perspective?

It turns out something’s got to give.

"It’s as if the brain’s on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you’re attuned to high speeds, you’ll be able to see faster moving things that you couldn’t see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."

Summing up, Albright says, “Simply put, it’s a tradeoff: The price of getting better at one thing is getting worse at another.”

Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician’s point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to the method of signal processing known as Gabor transform, which is used to extract features in both the spatial and temporal domains.

Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn’t his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.

"Gabor proved that measurements of two fundamental properties of a signal, its location and frequency content, are not independent of one another," says Gepshtein.

The location of a signal is simply that: where the signal is at a given point in time. The content, the “what” of a signal, is “written” in the language of frequencies and is a measurement of the amount of variation, such as the different shades of gray in a photograph.

The challenge comes when you’re trying to measure both location and frequency, because location is more accurately determined in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).

The obvious answer is that you’re stuck with a compromise: you can get a precise measurement of one or the other, but not both. But how can you be sure you’ve come up with the best possible compromise? Gabor’s answer was what has become known as a “Gabor filter”, which helps obtain the most precise measurements possible for both quantities. Our brains employ a similar strategy, says Gepshtein.

"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."
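The time-frequency tradeoff Gabor formalized is easy to demonstrate numerically. A minimal sketch (illustrative only, not the model used in the PNAS paper): build a 1-D Gabor function, a sinusoid under a Gaussian envelope, and watch how precision in time trades off against precision in frequency as the envelope widens, much like small versus large receptive fields.

```python
import numpy as np

def gabor(t, sigma, f0):
    # A 1-D Gabor function: cosine carrier under a Gaussian envelope.
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)

t = np.linspace(-1.0, 1.0, 4001)
dt = t[1] - t[0]

def spreads(sigma, f0=8.0):
    g = gabor(t, sigma, f0)
    # Spread in time: std of the energy distribution |g|^2 over t.
    p = g**2 / np.sum(g**2)
    t_spread = np.sqrt(np.sum(p * t**2))
    # Spread in frequency: std of the power spectrum over positive freqs.
    power = np.abs(np.fft.rfft(g))**2
    f = np.fft.rfftfreq(len(t), dt)
    q = power / power.sum()
    f_mean = np.sum(q * f)
    f_spread = np.sqrt(np.sum(q * (f - f_mean)**2))
    return t_spread, f_spread

narrow_t, narrow_f = spreads(sigma=0.05)  # small "receptive field"
wide_t, wide_f = spreads(sigma=0.30)      # large "receptive field"
# Narrow envelope: precise in time, blurred in frequency; wide: the reverse.
```

The narrow filter pins down *when* the signal occurred but smears *what* its frequency is; the wide filter does the opposite. That is the budget constraint the visual system is navigating when it reallocates sensitivity.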

In essence, the first stages of the visual process act like a filter. “It describes which stimuli get in, and which do not,” Gepshtein says. “When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in.”

"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces, the gains and losses, add up to a coherent pattern."

From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically: all of this processing happens whether or not you consciously “pay attention” to the change in scene.

Yet, while the adaptation happens automatically, it does not appear to happen instantaneously. Their current experiments take approximately thirty minutes to conduct, but the scientists believe the adaptation may take less time in nature.

(Image: Gary Meader)

Filed under brain visual system visual adaptation signal processing neuroscience science



Restoring paretic hand function via an artificial neural connection bridging spinal cord injury

Functional loss of limb control in individuals with spinal cord injury or stroke can be caused by interruption of the neural pathways between brain and spinal cord, even though the neural circuits located above and below the lesion remain functional. An artificial neural connection that bridges the lost pathway and connects the brain to spinal circuits has the potential to ameliorate this functional loss. Yukio Nishimura, Associate Professor at the National Institute for Physiological Sciences, Japan, together with Eberhard Fetz, Professor, and Steve Perlmutter, Research Associate Professor, at the University of Washington, United States, investigated the effects of introducing a novel artificial neural connection that bridged a spinal cord lesion in a paretic monkey. The connection allowed the monkey to electrically stimulate the spinal cord through volitionally controlled brain activity and thereby restore volitional control of the paretic hand. The study demonstrates that artificial neural connections can compensate for interrupted descending pathways and promote volitional control of upper limb movement after damage to neural pathways, such as spinal cord injury or stroke. The study will be published online in Frontiers in Neural Circuits on April 11.

"The important point is that individuals who are paralyzed want to be able to move their own bodies by their own will. This study was different from what other research groups have done up to now; we didn’t use any prosthetic limbs, like robotic arms, to replace the original arm. What’s new is that we have been able to use this artificial neuronal connection bypassing the lesion site to restore volitional control of the subject’s own paretic arm. I think that for lesions of the corticospinal pathway this might even have a better chance of becoming a real prosthetic treatment than the sort of robotic devices that have been developed recently," said Associate Professor Nishimura.

Filed under spinal cord injury spinal cord neural circuits limb control brain activity neuroscience science



Scientists create phantom sensations in non-amputees

The sensation of having a physical body is not as self-evident as one might think. Almost everyone who has had an arm or leg amputated experiences a phantom limb: a vivid sensation that the missing limb is still present. A new study by neuroscientists at the Karolinska Institutet in Sweden shows that it is possible to evoke the illusion of having a phantom hand in non-amputated individuals.

In an article in the scientific periodical Journal of Cognitive Neuroscience, the researchers describe a perceptual illusion in which healthy volunteers experience having an invisible hand. The experiment involves the participant sitting at a table with their right arm hidden from their view behind a screen. To evoke the illusion, the scientist touches the right hand of the participant with a small paintbrush while imitating the exact movements with another paintbrush in mid-air within full view of the participant.

"We discovered that most participants, within less than a minute, transfer the sensation of touch to the region of empty space where they see the paintbrush move, and experience an invisible hand in that position," says Arvid Guterstam, lead author of the study. "Previous research has shown that non-bodily objects, such as a block of wood, cannot be experienced as one’s own hand, so we were extremely surprised to find that the brain can accept an invisible hand as part of the body."

The study comprises eleven experiments that explore the illusory experience in detail and involve 234 volunteers. To demonstrate that the illusion actually worked, the researchers would make a stabbing motion with a knife towards the empty space ‘occupied’ by the invisible hand and measure the participant’s sweat response to the perceived threat. They found that the participants’ stress responses were elevated while they experienced the illusion but absent when the illusion was broken.

In another experiment, the volunteers were asked to close their eyes and quickly point with their left hand to their right hand (or to where they perceived it to be). After having experienced the illusion for a while, they would point to the location of the invisible hand rather than to their real hand.

The researchers also measured the brain activity of the participants using functional magnetic resonance imaging (fMRI). Perceiving the invisible hand illusion led to increased activity in the same parts of the brain that are normally active when individuals see their real hand being touched or when participants experience a prosthetic hand as their own.

"Taken together, our results show that the sight of a physical hand is remarkably unimportant to the brain for creating the experience of one’s physical self," says Arvid Guterstam.

The researchers hope that the results of their study will offer insight into future research on phantom pain in amputees.

"This illusion suggests that the experience of phantom limbs is not unique to amputated individuals, but can easily be created in non-amputees," says the principal investigator, Dr Henrik Ehrsson, Docent at the Department of Neuroscience. "These results add to our understanding of how phantom sensations are produced by the brain, which can contribute to future research on alleviating phantom pain in amputees."

Filed under phantom limb perceptual illusion sensation sweat response stress response neuroscience science

138 notes

Mutations found in individuals with autism interfere with endocannabinoid signaling in the brain

Mutations found in individuals with autism block the action of molecules made by the brain that act on the same receptors that marijuana’s active chemical acts on, according to new research reported online April 11 in the Cell Press journal Neuron. The findings implicate specific molecules, called endocannabinoids, in the development of some autism cases and point to potential treatment strategies.

"Endocannabinoids are molecules that are critical regulators of normal neuronal activity and are important for many brain functions," says first author Dr. Csaba Földy, of Stanford University Medical School. "By conducting studies in mice, we found that neuroligin-3, a protein that is mutated in some individuals with autism, is important for relaying endocannabinoid signals that tone down communication between neurons."

When the researchers introduced different autism-associated mutations in neuroligin-3 into mice, this signaling was blocked and the overall excitability of the brain was changed.

"These findings point out an unexpected link between a protein implicated in autism and a signaling system that previously had not been considered to be particularly important for autism," says senior author Dr. Thomas Südhof, also of Stanford. "Thus, the findings open up a new area of research and may suggest novel strategies for understanding the underlying causes of complex brain disorders."

The results also indicate that targeting components of the endocannabinoid signaling system may help reverse autism symptoms.

The study’s findings resulted from a research collaboration between the Stanford laboratories of Dr. Südhof and Dr. Robert Malenka, who is also an author on the paper.

Filed under autism endocannabinoids mutations marijuana neural activity neuroscience science

56 notes

Tiny Wireless Device Shines Light on Mouse Brain, Generating Reward

Using a miniature electronic device implanted in the brain, scientists have tapped into the internal reward system of mice, prodding neurons to release dopamine, a chemical associated with pleasure.

image

The researchers, at Washington University School of Medicine in St. Louis and the University of Illinois at Urbana-Champaign, developed tiny devices containing light-emitting diodes (LEDs) the size of individual neurons. The devices activate brain cells with light. The scientists report their findings April 12 in the journal Science.

“This strategy should allow us to identify and map brain circuits involved in complex behaviors related to sleep, depression, addiction and anxiety,” says co-principal investigator Michael R. Bruchas, PhD, assistant professor of anesthesiology at Washington University. “Understanding which populations of neurons are involved in these complex behaviors may allow us to target specific brain cells that malfunction in depression, pain, addiction and other disorders.”

For the study, Washington University neuroscientists teamed with engineers at the University of Illinois to design microscale LED devices thinner than a human hair. This was the first application of the devices in optogenetics, an area of neuroscience that uses light to stimulate targeted pathways in the brain. The scientists implanted them into the brains of mice that had been genetically engineered so that some of their brain cells could be activated and controlled with light.

Although a number of important pathways in the brain can be studied with optogenetics, many neuroscientists have struggled with the engineering challenge of delivering light to precise locations deep in the brain. Most methods have tethered animals to lasers with fiber optic cables, limiting their movement and altering natural behaviors.

But with the new devices, the mice freely moved about and were able to explore a maze or scamper on a wheel. The electronic LEDs are housed in a tiny fiber implanted deep in the brain. That’s important to the device’s ability to activate the proper neurons, according to John A. Rogers, PhD, professor of materials science and engineering at the University of Illinois.

“You want to be able to deliver the light down into the depth of the brain,” Rogers says. “We think we’ve come up with some powerful strategies that involve ultra-miniaturized devices that can deliver light signals deep into the brain and into other organs in the future.”

Using light from the cellular-scale LEDs to stimulate dopamine-producing cells in the brain, the investigators taught the mice to poke their noses through a specific hole in a maze. Each time a mouse would poke its nose through the hole, that would trigger the system to wirelessly activate the LEDs in the implanted device, which then would emit light, causing neurons to release dopamine, a chemical related to the brain’s natural reward system.
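The paragraph above describes a closed loop: a nose poke triggers the wireless LED, the LED drives dopamine release, and that reward reinforces poking at the same hole. A toy simulation of this logic — all names, probabilities, and the learning rule are invented for illustration, not the authors’ actual control software:

```python
# Toy simulation of the closed-loop reward logic: each poke at the
# "active" hole fires the implanted LED, which (via dopamine release)
# reinforces the mouse's tendency to poke there again.
import random

random.seed(1)
poke_prob = {"active": 0.5, "inactive": 0.5}  # start with no preference
LEARNING_RATE = 0.05  # made-up reinforcement step per rewarded poke

def trial():
    """One trial: mouse picks a hole; only the active hole fires the LED."""
    hole = "active" if random.random() < poke_prob["active"] else "inactive"
    if hole == "active":
        # LED pulse -> dopamine release -> strengthen preference
        poke_prob["active"] = min(1.0, poke_prob["active"] + LEARNING_RATE)
        poke_prob["inactive"] = 1.0 - poke_prob["active"]
    return hole

pokes = [trial() for _ in range(200)]
print("active-hole pokes in last 50 trials:",
      sum(1 for p in pokes[-50:] if p == "active"))
```

Because only the active hole is ever rewarded, the simulated preference quickly saturates and late trials are dominated by active-hole pokes, mirroring the learned nose-poking behavior described above.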

“We used the LED devices to activate networks of brain cells that are influenced by the things you would find rewarding in life, like sex or chocolate,” says co-first author Jordan G. McCall, a neuroscience graduate student in Washington University’s Division of Biology and Biomedical Sciences. “When the brain cells were activated to release dopamine, the mice quickly learned to poke their noses through the hole even though they didn’t receive any food as a reward. They also developed an associated preference for the area near the hole, and they tended to hang around that part of the maze.”

The researchers believe the LED implants may be useful in other types of neuroscience studies or may even be applied to different organs. Related devices already are being used to stimulate peripheral nerves for pain management. Other devices with LEDs of multiple colors may be able to activate and control several neural circuits at once. In addition to the tiny LEDs, the devices also carry miniaturized sensors for detecting temperature and electrical activity within the brain.

Bruchas and his colleagues already have begun other studies of mice, using the LED devices to manipulate neural circuits that are involved in social behaviors. This could help scientists better understand what goes on in the brain in disorders such as depression and anxiety.

“We believe these devices will allow us to study complex stress and social interaction behaviors,” Bruchas explains. “This technology enables us to map neural circuits with respect to things like stress and pain much more effectively.”

The wireless, microLED implant devices represent the combined efforts of Bruchas and Rogers. Last year, along with Robert W. Gereau IV, PhD, professor of anesthesiology, they were awarded an NIH Director’s Transformative Research Project award to develop and conduct studies using novel device development and optogenetics, which involves activating or inhibiting brain cells with light.

(Source: newswise.com)

Filed under reward system brain cells optogenetics dopamine brain circuit depression addiction neuroscience science

98 notes

New Findings on the Brain’s Immune Cells during Alzheimer’s Disease Progression

The plaque deposits in the brains of Alzheimer’s patients are surrounded by the brain’s own immune cells, the microglia. Alois Alzheimer recognized this more than one hundred years ago, but it still remains unclear what role microglia play in the disease. Do they help to break down the plaque deposits? A study by researchers of the Max Delbrück Center for Molecular Medicine (MDC) Berlin-Buch and Charité – Universitätsmedizin Berlin has now shed light on these mysterious microglia during the progression of Alzheimer’s disease.

Dr. Grietje Krabbe of the laboratory of Professor Helmut Kettenmann (MDC) and Dr. Annett Halle of the Neuropathology Department of the Charité headed by Professor Frank Heppner demonstrated that the microglial cells around the deposits do not show the classical activation pattern in mouse models of Alzheimer’s disease. On the contrary, in the course of Alzheimer’s disease they lose two of their biological functions: both their ability to remove cell fragments or harmful structures and their directed process motility towards acute lesions are impaired. The impact of the latter loss of function needs further investigation. The plaques consist of protein fragments, the beta-amyloid peptides, which in Alzheimer’s disease are deposited in the brain over the course of years. They are believed to be involved in destroying the nerve cells of the affected patients, resulting in an incurable cognitive decline.

However, just why the microglial cells, which cluster around the deposits, are inactivated or lose their functionality is still not fully understood. The researchers concluded that this process occurs at a very early stage of disease development and is likely triggered by the beta-amyloid. This is supported by the fact that the loss of function of the microglial cells in the mice could be reversed by beta-amyloid antibodies, which decrease the beta-amyloid burden. According to the researchers, the potential to restore microglial function by directed manipulation should be pursued and exploited to develop treatments for Alzheimer’s disease.

Filed under alzheimer's disease microglia cells beta amyloid nerve cells neuroscience science

80 notes

'Strikingly similar' brains of man and fly may aid mental health research

A new study by scientists at King’s College London’s Institute of Psychiatry and the University of Arizona (UA) published in Science reveals the deep similarities in how the brain regulates behaviour in arthropods (such as flies and crabs) and vertebrates (such as fish, mice and humans).

The findings shed new light on the evolution of the brain and behaviour and may aid understanding of disease mechanisms underlying mental health problems.

Based on their own findings and available literature, Dr Frank Hirth (King’s) and Dr Nicholas Strausfeld (UA) compared the development and function of the central brain regions in arthropods (the ‘central complex’) and vertebrates (the ‘basal ganglia’).

Research suggests that both brain structures derive from embryonic stem cells at the base of the developing forebrain and that, despite the major differences between species, their respective constitutions and specifications derive from similar genetic programmes.

The authors describe how nerve cells in the central complex and the basal ganglia become interconnected and communicate with each other in similar ways, facilitating the regulation of adaptive behaviours. In other words, the responses of a fly or a mouse to internal stimuli such as hunger or sleep, and to external stimuli such as light/dark or temperature, are regulated by similar neural mechanisms.

Dr Hirth from the Department of Neuroscience at King’s Institute of Psychiatry says: “Flies, crabs, mice, humans: all experience hunger, need sleep and have a preference for a comfortable temperature so we speculated there must be a similar mechanism regulating these behaviours. We were amazed to find just how deep the similarities go, despite the differences in size and appearance of these species and their brains.”

Dr Strausfeld, a Regents Professor in the UA’s Department of Neuroscience and the Director of the UA’s Center for Insect Science, says: “When you compare the two structures, you find that they are very similar in terms of how they’re organized. Their development is orchestrated by a whole suite of genes that are homologous between flies and mice, and the behavioral deficits resulting from disturbances in the two systems are remarkably similar as well.”

In humans, dysfunction of the basal ganglia can cause severe mental health problems ranging from autism, schizophrenia and psychosis, to neurodegeneration - as seen in Parkinson’s disease, motor neurone disease and dementia - as well as sleep disturbances, attention deficits and memory impairment. Similarly, when parts of the central complex are affected in fruit flies, they display similar impairments.

Dr Hirth (King’s) adds: “The deep similarities we see between how our brains and those of insects regulate behaviour suggest a common evolutionary origin. It means that prototype brain circuits, essential for behavioural choice, originated very early and have been maintained across animal species throughout evolutionary time. As surprising as it may seem, from insects’ dysfunctional brains, we can learn a great deal about how human brain disorders come about.”

The findings suggest that arthropod and vertebrate brain circuitries derive from a common ancestor already possessing a complex neural structure mediating the selection and maintenance of behavioural actions.

Although no fossil remains of the common ancestor exist, trace fossils, in the form of tracks criss-crossing the seafloor hundreds of millions of years ago, reveal purposeful changes in direction.

Dr Strausfeld (UA) says: “If you compare these tracks to the tracks left behind by a foraging fly larva on an agar plate or the tunnels made by a leaf-mining insect, they’re very similar. They all suggest that the animal chose to perform various different actions, and action selection is precisely what the central complex and the basal ganglia do.”

The trace fossils may thus support the early existence of brains complex enough to allow for action selection and a shared ancestry of neural structures between invertebrates and vertebrates.

Filed under fruit flies central complex basal ganglia nerve cells mental health evolution neuroscience science

131 notes

Do drugs for bipolar disorder “normalize” brain gene function?

Every day, millions of people with bipolar disorder take medicines that help keep them from swinging into manic or depressed moods. But just how these drugs produce their effects is still a mystery.

Now, a new University of Michigan Medical School study of brain tissue helps reveal what might actually be happening. And further research using stem cells programmed to act like brain cells is already underway.

Using genetic analysis, the new study suggests that certain medications may help “normalize” the activity of a number of genes involved in communication between brain cells. It is published in the current issue of Bipolar Disorders.

The study involved brain tissue from deceased people with and without bipolar disorder, which the U-M team analyzed to see how often certain genes were activated, or expressed. Funding support came from the National Institutes of Health and the Heinz C. Prechter Bipolar Research Fund.

“We found there are hundreds of genes whose activity is adjusted in individuals taking medication – consistent with the fact that there are a number of genes that are potentially amiss in people with bipolar,” says senior author Melvin McInnis, M.D., the U-M psychiatrist, U-M Depression Center member and principal investigator of the Prechter Fund Projects who helped lead the study. “Taking the medications, specifically ones in a class called antipsychotics, seemed to normalize the gene expression pattern in these individuals so that it approached that of a person without bipolar.”

Digging deeper into bipolar genetics

Scientists already know that bipolar disorder’s roots lie in genetic differences in the brain — though they are still searching for the specific gene combinations involved.  

McInnis and his colleagues have now embarked on research developing several lines of induced pluripotent stem cells (iPSC) derived from volunteers with and without bipolar disorder, which will allow even more in-depth study of the development and genetics of bipolar disorder.

The newly published study looked at the expression, or activity levels, of 2,191 different genes in the brains of 14 people with bipolar disorder, and 12 with no mental health conditions. The brains were all part of a privately funded nonprofit brain bank that collected and stored donated brains, and recorded what medications the individuals were taking at the time of death.

Seven of the brains were from people with bipolar disorder who had been taking one or more antipsychotics when they died. These drugs include clozapine, risperidone, and haloperidol, and are often used to treat bipolar disorder. Most of the 14 brain donors with bipolar disorder were also taking other medications, such as antidepressants, at the time of death.

When the researchers compared the gene activity patterns among the brains of bipolar disorder patients who had been exposed to antipsychotics with patterns among those who weren’t, they saw striking differences.

Then, when they compared the activity patterns of patients who had been taking antipsychotics with those of people without bipolar disorder, they found similar patterns.

The similarities were strongest in the expression of genes involved in the transmission of signals across synapses – the gaps between brain cells that allow cells to ‘talk’ to one another. There were also similarities in the organization of nodes of Ranvier – locations along nerve cells where signals can travel faster.

McInnis, who is the Thomas B. and Nancy Upjohn Woodworth Professor of Bipolar Disorder and Depression in the U-M Department of Psychiatry, worked with U-M scientists Haiming Chen, M.D. and K. Sue O’Shea, Ph.D., of the U-M Department of Cell and Developmental Biology. They also teamed with Johns Hopkins University researcher Christopher Ross, M.D., Ph.D. on the new research; U-M and Johns Hopkins have a long history of collaboration on bipolar disorder research.

The research used brain tissue samples from the Stanley Brain Collection of the Stanley Medical Research Institute in Maryland.

Using “gene chip” analysis to measure the presence of messenger RNA molecules that indicate gene activity, and sophisticated data analysis, they were able to map the expression patterns from the brains and break the results down by bipolar status and medication use. The bipolar and control (non-bipolar) brains were matched by age, gender and other factors.
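The core comparison described above — whether medicated-bipolar expression profiles sit closer to control profiles than unmedicated ones do — can be sketched in a few lines. The gene names are ones the article mentions, but the expression values are synthetic, and the study’s actual microarray pipeline is far more involved:

```python
# Sketch of the "normalization" comparison: per-gene group-mean
# expression, then mean absolute distance of each bipolar group
# from the control profile. All values are made up for illustration.
from statistics import mean

controls      = {"GSK3B": 1.0, "FKBP5": 2.0, "ANK3": 1.5}
bipolar_unmed = {"GSK3B": 1.8, "FKBP5": 1.1, "ANK3": 2.4}
bipolar_med   = {"GSK3B": 1.2, "FKBP5": 1.8, "ANK3": 1.6}

def distance(a, b):
    """Mean absolute difference in expression across shared genes."""
    return mean(abs(a[g] - b[g]) for g in a)

print("unmedicated vs control:", round(distance(bipolar_unmed, controls), 3))
print("medicated   vs control:", round(distance(bipolar_med, controls), 3))
```

In this toy setup the medicated profile is measurably closer to the control profile than the unmedicated one, which is the pattern the study reports for antipsychotic-exposed brains.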

“In bipolar disorder, it’s not just one gene that’s involved – it’s a whole symphony of them,” says McInnis, who has helped lead U-M’s bipolar genetics research for nearly a decade. “Medications appear to nudge them in a direction that aligns more with the normal expression pattern.”

Among those that were “nudged” were genes that have already been shown to be linked to bipolar disorder, including glycogen synthase kinase 3 beta (GSK3β), FK506 binding protein 5 (FKBP5), and Ankyrin 3 (ANK3).

Going forward, says McInnis, cell culture studies will be critical to studying how medications for bipolar disorder work, and to screen new molecules as potential new medications.

Filed under bipolar disorder depression brain tissue brain cells gene expression antipsychotics stem cells neuroscience science
