Posts tagged neuroscience

Diary of becoming an NHS-funded cyborg
From the day I was born, my brain developed according to the stimuli it received. My senses of vision, touch, taste, smell were all slightly heightened in compensation for the lack of input from my ears, helping me to create a world I could understand.
My mother worked full time with me, playing a set of activities she called “the game”. I was a child, and didn’t understand the real reason for playing the game — but it taught me to read, write, lipread, and speak, if not to hear in the traditional sense of the word. What I do hear is filtered through digital hearing aids that amplify what little sound I can hear.
A month ago, for the first time, I made the change from external technology to internal technology. I became a full-time cyborg, free of charge on the NHS.
They cut away a flap of skin behind my left ear, drilled a tiny hole into my skull between the two main nerves of the face that control taste and facial movement, and inserted an electrode into my cochlea, connected to a small magnet and circuit board under the skin.
They’re going to switch me on in a few days — and if it’s all working as it should, my auditory cortex will be bombarded by a range of electronic noises. Over time, I may come to understand these sounds as consonants, music, even the spoken word.
This is what it will sound like, apparently.
Even if I can make sense of those sounds, it won’t be “hearing” in the normal sense of the word. My ears have had the same level of input for the last 30 years of my life — and now I’ve physically rewired one of them to receive a completely different signal.
In all the recent blue sky thinking on Wired.co.uk and elsewhere about the future of the human race — coprocessors for the brain, enhanced spectrum bionic eyes, artificial legs, even the possibility of interfacing with computers directly — people forget one thing. What it feels like, what it’s like to live with it every day, whether it makes you feel more, or less, yourself.
I’m also wary of augmentation and body enhancement becoming the norm. We have a fluid definition of what a disability is, and what isn’t. If certain people with access to this technology start engineering themselves to have greater physical or mental abilities, then where does that leave ordinary people? Differently abled? Disabled? Or in fact more abled? In giving up perfectly usable eyes, the end result of millions of years of evolution, to install digital eyes that can project images onto the retina, are we really putting ourselves at an advantage?
If I’d been born into a deaf family, all of us signing, my brain developing to become fluent in sign language and developing a deaf identity so strong and complete that I saw deafness as “normal” and hearing as “abnormal” — I wouldn’t have had this implant.
The cochlear implant, in crossing the line from external wearable technology to permanent fixture, becomes a technology that is potentially in conflict with human values, rather than a testament to them. Many deaf people see the cochlear implant as a symbol of medical intervention aimed at oppressing and ultimately eradicating the deaf community and deaf culture by fixing deaf people one implant at a time — this includes implanting children at an early age so that they’ll acquire spoken language rather than sign.
Placebo and the Brain: How Does it Work?
Placebo, the positive effect of a drug that lacks any beneficial ingredients, has been researched for centuries but remains a mystery for psychologists and neuroscientists alike. Although there is now a considerable amount of amassed knowledge about how placebo can be induced, through which mechanisms it works, and which individuals are susceptible to the effect, the explicit answer to why and how our brains can ‘cure’ themselves under certain circumstances is yet to be found. Having dived into the literature on the phenomenon, I found a picture emerging in which one of the brain’s greatest tricks can be better understood, along with the fascinating implications it has for how we look at the body-mind distinction.
In research trying to pin down its nature, a placebo is usually defined as a treatment that results in a change in symptom or condition that differs from the natural course of the specific disease. Placebo effects have been shown mainly for pain relief, but also in studies of depression, Parkinson’s disease, and anxiety. While the sugar pill is still in use, we now know that two factors are crucial for a placebo effect to occur: the level of expectancy and the desire to get better (or not get worse) that the patient feels. Both are in turn sensitive to a host of psychosocial variables, such as faith in the medical staff, the emotional tone of the physician-patient interaction (whether it is optimistic or pessimistic, for example), memories of past experiences with the effects of medicine, and so on.
While some individuals show reliable placebo effects, others do not, and the underlying causes have recently been suggested to be tied to our individual genetic makeup. Researchers from the Harvard Program for Placebo Studies found that the magnitude of the placebo effect was tied to genes coding for an enzyme that regulates the levels of dopamine in various regions of the brain. Dopamine plays a key role in the processing of reward, pain, memory, and learning, all areas in which the placebo effect has been demonstrated. The study, led by Kathryn Hall, concluded that persons whose genes promote an upregulation of dopamine levels in the brain also exhibit the greatest placebo effects. Other studies have examined the release of another group of transmitters called opioids, which regulate activity in areas that code for pain; there, the amount of opioids released matched the size of the placebo effect found.
As for where the effect originates, research using brain imaging has found that when a real drug is compared to a placebo, very similar areas show activation, but some areas, such as the lateral and central prefrontal cortex, show a greater response in the placebo condition. This part of the brain is often described as overseeing and exerting control over other processing in the brain, and it acts as a connecting point for the different streams of information that build up our expectations and desires.
So, how can this knowledge about the placebo effect influence the way doctors discuss, promote, and administer their own treatments? Surely, if we know that an encouraging prognosis given together with a sugar pill can be as effective in some cases as a pharmacological product, but without the side effects, we should be using that. However, having doctors treat their patients through deception leads to obvious problems, such as public mistrust in the profession. A finding from the scientists at the very same Harvard Program for Placebo Studies might have the answer: they demonstrated that the placebo effect remained even when participants were told explicitly that the treatment they were given was in effect useless.
Long-Term Anabolic-Androgenic Steroid Use May Severely Impact Visuospatial Memory
The long-term use of anabolic-androgenic steroids (AAS) may severely impact the user’s ability to accurately recall the shapes and spatial relationships of objects, according to a recent study conducted by McLean Hospital and Harvard Medical School investigators.
In the study, published online in the journal Drug and Alcohol Dependence, McLean Hospital Research Psychiatrist Harrison Pope, MD, used a variety of tests to determine whether AAS users developed cognitive defects due to their admitted history of abuse.
"Our work clearly shows that while some areas of brain function appear to be unaffected by the use of AAS, users performed significantly worse on the visuospatial tests that were administered. Those deficits directly corresponded to their length of use of anabolic-androgenic steroids," explained Pope. "Impaired visuospatial memory means that a person might have difficulty, for example, in remembering how to find a location, such as an address on a street or a room in a building… We are worried that with higher doses of AAS and longer periods of lifetime exposure, some people might even eventually develop visuospatial deficits similar to those sometimes seen in elderly people with dementia, who can easily become lost or disoriented."

What mechanism generates our fingers and toes?
Dr. Marie Kmita and her research team at the IRCM contributed to a multidisciplinary research project that identified the mechanism responsible for generating our fingers and toes, and revealed the importance of gene regulation in the transition of fins to limbs during evolution. Their scientific breakthrough is published today in the prestigious scientific journal Science.
By combining genetic studies with mathematical modeling, the scientists provided experimental evidence supporting a theoretical model for pattern formation known as the Turing mechanism. In 1952, mathematician Alan Turing proposed mathematical equations for pattern formation, which describe how two uniformly distributed substances, an activator and a repressor, trigger the formation of complex shapes and structures from initially equivalent cells.
“The Turing model for pattern formation has long remained under debate, mostly due to the lack of experimental data supporting it,” explains Dr. Rushikesh Sheth, postdoctoral fellow in Dr. Kmita’s laboratory and co-first author of the study. “By studying the role of Hox genes during limb development, we were able to show, for the first time, that the patterning process that generates our fingers and toes relies on a Turing-like mechanism.”
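For readers who want a feel for how a Turing mechanism works, here is a minimal numerical sketch. The reaction coefficients below are illustrative, made-up values for a generic activator-repressor pair, not the Hox-based model from the study: linearize the reactions around their uniform steady state and ask which spatial frequencies grow.

```python
import math

# Jacobian of the reaction terms at the uniform steady state (illustrative
# values chosen so the reaction alone is stable): J = [[FA, FH], [GA, GH]].
FA, FH = 1.0, -1.0
GA, GH = 4.0, -2.0
D_ACT, D_INH = 1.0, 40.0  # the repressor must diffuse much faster than the activator

def growth_rate(k2):
    """Largest real part of the eigenvalues of the linearized system at
    spatial frequency k (k2 = k**2); positive means that mode grows into
    a spatial pattern."""
    a = FA - D_ACT * k2          # diffusion damps the activator...
    d = GH - D_INH * k2          # ...and the repressor, at rate D * k^2
    tr, det = a + d, a * d - FH * GA
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0              # complex pair: take the real part

# The uniform state (k = 0) is stable, yet an intermediate band of spatial
# frequencies grows: diffusion itself creates the pattern.
for k in (0.0, 0.7, 3.0):
    print(k, growth_rate(k * k))
```

The hallmark of the Turing mechanism appears directly in the output: the rate is negative at k = 0 (no pattern without diffusion), positive for intermediate k (stripes or digits of a preferred width emerge), and negative again for very fine k.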
In humans, as in other mammals, the embryo’s development is controlled, in part, by “architect” genes known as Hox genes. These genes are essential to the proper positioning of the body’s architecture, and define the nature and function of cells that form organs and skeletal elements.
“Our genetic study suggested that Hox genes act as modulators of a Turing-like mechanism, which was further supported by mathematical tests performed by our collaborators, Dr. James Sharpe and his team,” adds Dr. Marie Kmita, Director of the Genetics and Development research unit at the IRCM. “Moreover, we showed that drastically reducing the dose of Hox genes in mice transforms fingers into structures reminiscent of the extremities of fish fins. These findings further support the key role of Hox genes in the transition of fins to limbs during evolution, one of the most important anatomical innovations associated with the transition from aquatic to terrestrial life.”
The end of a dogma: Bipolar cells generate action potentials
To make information transmission to the brain reliable, the retina first has to “digitize” the image. Until now, it was widely believed that this step takes place in the retinal ganglion cells, the output neurons of the retina. Scientists in the lab of Thomas Euler at the University of Tübingen, the Werner Reichardt Centre for Integrative Neuroscience and the Bernstein Center Tübingen were now able to show that bipolar cells can already generate “digital” signals. At least three types of mouse bipolar cells (BCs) showed clear evidence of fast and stereotypic action potentials, so-called “spikes”. These results show that the retina is by no means as well understood as is commonly believed.
The retina in our eyes is not just a sheet of light sensors that – like a camera chip – faithfully transmits patterns of light to the brain. Rather, it performs complex computations, extracting several features from the visual stimuli: for example, whether the light intensity at a certain place increases or decreases, in which direction a light source moves, or whether there is an edge in the image. To transmit this information reliably across the optic nerve – acting as a kind of cable – to the brain, the retina reformats it into a succession of stereotypic action potentials – it “digitizes” it. Classical textbook knowledge holds that this digital code – similar to the one employed by computers – is applied only in the retina’s ganglion cells, which send the information to the brain. Almost all other cells in the retina were believed to employ graded, analogue signals. But the Tübingen scientists have now shown that, in mammals, the bipolar cells, which sit directly after the photoreceptors within the retinal network, are able to work in a “digital mode” as well.
Using a new experimental technique, Tom Baden and colleagues recorded signals in the synaptic terminals of bipolar cells in the mouse retina. Based on the responses of these cells to simple light stimuli, they were able to separate the neurons into eight different response types. These types closely resembled those expected from physiological and anatomical studies. But surprisingly, the responses of the fastest cell types looked quite different than expected: they were fast, stereotypic and occurred in an all-or-nothing instead of a graded fashion. All these are typical features of action potentials. Such “digital” signals had occasionally been observed in bipolar cells before, but these were believed to be rare exceptional cases. Studies from the past two years on the fish retina had already cast doubt on the long-held belief that BCs do not spike. The new data from Tübingen clearly show that these “digital” signals are systematically generated in certain types of mammalian bipolar cells. Action potentials allow for much faster and temporally more precise signal transmission than graded potentials, thus offering advantages in certain situations. The results from Tübingen call a widely held dogma of neuroscience into question - and open up many new questions.
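The analogue-to-“digital” conversion at the heart of this story can be illustrated with a toy leaky integrate-and-fire neuron. This is a deliberate simplification for intuition, not a biophysical model of bipolar cells: a graded input produces stereotyped, all-or-nothing spikes only once it drives the membrane over threshold.

```python
def spike_train(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron: integrates a graded (analogue)
    input and emits stereotyped all-or-nothing spikes - the 'digital' code.
    Returns a 0/1 spike indicator per time step."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (i - v) / tau      # leaky integration of the input
        if v >= threshold:           # all-or-nothing event
            spikes.append(1)
            v = 0.0                  # reset after each spike
        else:
            spikes.append(0)
    return spikes

weak = spike_train([0.5] * 100)    # sub-threshold: stays analogue, no spikes
strong = spike_train([2.0] * 100)  # supra-threshold: a regular spike train
print(sum(weak), sum(strong))
```

The key property mirrored here is that every emitted spike is identical; the stimulus strength is encoded only in whether and how often spikes occur, which is what makes the code robust over a long, noisy “cable” like the optic nerve.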
Alzheimer’s disease is the most common form of dementia, affecting over 35 million people worldwide. It is generally assumed that the clumping of beta-amyloid (Aβ) protein causes neuronal loss in patients. Medication focuses on reducing Aβ42, one of the most common of these proteins and the most harmful. University of Twente PhD student Annelies Vandersteen is refining the current approach. She explains: “The results of my research provide a broader understanding of the processes that lead to Alzheimer’s disease and in this way may help to bring about new medication.”
The Aβ protein occurs in the body in various lengths, ranging from 33 to 49 amino acids. The shorter varieties are regarded as ‘safe’, unlike the longer ones – Aβ42 and longer – which aggregate strongly. Current therapeutic strategy tries to reduce the clumping of Aβ42, and its harmful effects, by limiting the release of Aβ42. Reducing Aβ42 production at the same time results in a rise in Aβ38 levels. Vandersteen comments: “One of the findings of my research is that small amounts of Aβ38 can in fact either increase or temper the clumping and toxic effects of longer Aβ proteins. The processes that result in Alzheimer’s disease are determined by the whole spectrum of Aβ proteins. So the picture is far less black and white than has been assumed so far, and the less common forms of Aβ are not as harmless as we thought.”
The study
Vandersteen examined the protein mixtures in a laboratory setting. She devised a series of experiments based on a computer-calculated hypothesis. The behaviour of the various Aβ proteins and mixtures was studied in detail and described using various biophysical techniques. The influence of the various Aβ proteins and mixtures on neurons was then studied in cell culture.
(Source: alphagalileo.org)
The split brain: A tale of two halves
In the first months after her surgery, shopping for groceries was infuriating. Standing in the supermarket aisle, Vicki would look at an item on the shelf and know that she wanted to place it in her trolley — but she couldn’t. “I’d reach with my right for the thing I wanted, but the left would come in and they’d kind of fight,” she says. “Almost like repelling magnets.” Picking out food for the week was a two-, sometimes three-hour ordeal. Getting dressed posed a similar challenge: Vicki couldn’t reconcile what she wanted to put on with what her hands were doing. Sometimes she ended up wearing three outfits at once. “I’d have to dump all the clothes on the bed, catch my breath and start again.”
In one crucial way, however, Vicki was better than her pre-surgery self. She was no longer racked by epileptic seizures that were so severe they had made her life close to unbearable. She once collapsed onto the bar of an old-fashioned oven, burning and scarring her back. “I really just couldn’t function,” she says. When, in 1978, her neurologist told her about a radical but dangerous surgery that might help, she barely hesitated. If the worst were to happen, she knew that her parents would take care of her young daughter. “But of course I worried,” she says. “When you get your brain split, it doesn’t grow back together.”
In June 1979, in a procedure that lasted nearly 10 hours, doctors created a firebreak to contain Vicki’s seizures by slicing through her corpus callosum, the bundle of neuronal fibres connecting the two sides of her brain. This drastic procedure, called a corpus callosotomy, disconnects the two sides of the neocortex, the home of language, conscious thought and movement control. Vicki’s supermarket predicament was the consequence of a brain that behaved in some ways as if it were two separate minds.
After about a year, Vicki’s difficulties abated. “I could get things together,” she says. For the most part she was herself: slicing vegetables, tying her shoe laces, playing cards, even waterskiing.
But what Vicki could never have known was that her surgery would turn her into an accidental superstar of neuroscience. She is one of fewer than a dozen ‘split-brain’ patients, whose brains and behaviours have been subject to countless hours of experiments, hundreds of scientific papers, and references in just about every psychology textbook of the past generation. And now their numbers are dwindling.
Through studies of this group, neuroscientists now know that the healthy brain can look like two markedly different machines, cabled together and exchanging a torrent of data. But when the primary cable is severed, information — a word, an object, a picture — presented to one hemisphere goes unnoticed in the other. Michael Gazzaniga, a cognitive neuroscientist at the University of California, Santa Barbara, and the godfather of modern split-brain science, says that even after working with these patients for five decades, he still finds it thrilling to observe the disconnection effects first-hand. “You see a split-brain patient just doing a standard thing — you show him an image and he can’t say what it is. But he can pull that same object out of a grab-bag,” Gazzaniga says. “Your heart just races!”
Countering brain chemical could prevent suicides
Researchers have found the first proof that a chemical in the brain called glutamate is linked to suicidal behavior, offering new hope for efforts to prevent people from taking their own lives.
Writing in the journal Neuropsychopharmacology, Michigan State University’s Lena Brundin and an international team of co-investigators present the first evidence that glutamate is more active in the brains of people who attempt suicide. Glutamate is an amino acid that sends signals between nerve cells and has long been a suspect in the search for chemical causes of depression.
“The findings are important because they show a mechanism of disease in patients,” said Brundin, associate professor of translational science and molecular medicine in MSU’s College of Human Medicine. “There’s been a lot of focus on another neurotransmitter called serotonin for about 40 years now. The conclusion from our paper is that we need to turn some of that focus to glutamate.”
Brundin and colleagues examined glutamate activity by measuring quinolinic acid – which flips a chemical switch that makes glutamate send more signals to nearby cells – in the spinal fluid of 100 patients in Sweden. About two-thirds of the participants were admitted to a hospital after attempting suicide and the rest were healthy.
They found that suicide attempters had more than twice as much quinolinic acid in their spinal fluid as the healthy people, which indicated increased glutamate signaling between nerve cells. Those who reported the strongest desire to kill themselves also had the highest levels of the acid.
The results also showed decreased quinolinic acid levels among a subset of patients who came back six months later, when their suicidal behavior had ended.
The findings explain why earlier research has pointed to inflammation in the brain as a risk factor for suicide. The body produces quinolinic acid as part of the immune response that creates inflammation.
Brundin said anti-glutamate drugs are still in development, but could soon offer a promising tool for preventing suicide. In fact, recent clinical studies have shown the anesthetic ketamine – which inhibits glutamate signaling – to be extremely effective in fighting depression, though its side effects prevent it from being used widely today.
In the meantime, Brundin said physicians should be aware of inflammation as a likely trigger for suicidal behavior. She is partnering with doctors in Grand Rapids, Mich., to design clinical trials using anti-inflammatory drugs.
“In the future, it’s likely that blood samples from suicidal and depressive patients will be screened for inflammation,” Brundin said. “It is important that primary health care physicians and psychiatrists work closely together on this.”
A Key Gene for Brain Development
About one in ten thousand babies is born with an abnormally small head. The cause of this disorder – which is known as microcephaly – is a defect in the development of the embryonic brain. Children with microcephaly are severely cognitively impaired and their life expectancy is low. Certain cases of autism and schizophrenia are also associated with the dysregulation of brain size.
The causes underlying impaired brain development can be environmental stress (such as alcohol abuse or radiation) or viral infections (such as rubella) during pregnancy. In many cases, however, a mutant gene causes the problem.
David Keays, a group leader at the IMP, has now found a new gene that is responsible for microcephaly. Together with his PhD student Martin Breuss, he was able to identify TUBB5 as the culprit. The gene codes for one of the tubulins, the building blocks of the cell’s internal skeleton. Whenever a cell moves or divides, it relies on guidance from this internal structure, which acts like a scaffold.
The IMP researchers, together with collaborators at Monash University (Victoria, Australia), were able to interfere with the function of TUBB5 in the brains of unborn mice. This led to massive disturbances in the stem cell population and impaired the migration of nerve cells. Both the generation of large numbers of neurons from the stem cell reservoir and their correct positioning in the cortex are essential for the development of the mammalian brain.
To determine whether the findings are also relevant in humans, David Keays collaborated with clinicians from the Paris-Sorbonne University. The French team, led by Jamel Chelly, examined 120 patients with pathological brain structures and severe disabilities. Three of the children were found to have a mutated TUBB5 gene.
This information will prove vital to doctors treating children with brain disease. It will allow the development of new genetic tests which will form the basis of genetic counseling, helping parents plan for the future. By understanding how different genes cause brain disorders, it is hoped that one day scientists will be able to create new drugs and therapies to treat them.
The new findings by the IMP researchers are published in the current issue of the journal Cell Reports. For David Keays, understanding the function of TUBB5 is the key to understanding brain development. “Our project shows how research in the lab can help improve lives in the clinic”, he adds.
The paper “Mutations in the β-tubulin Gene TUBB5 Cause Microcephaly with Structural Brain Abnormalities” is published on December 13, 2012, in the online journal Cell Reports.

Follow the Eyes: Head-Mounted Cameras Could Help Robots Understand Social Interactions
What is everyone looking at? It’s a common question in social settings because the answer identifies something of interest, or helps delineate social groupings. Those insights someday will be essential for robots designed to interact with humans, so researchers at Carnegie Mellon University’s Robotics Institute have developed a method for detecting where people’s gazes intersect.
The researchers tested the method using groups of people with head-mounted video cameras. By noting where their gazes converged in three-dimensional space, the researchers could determine if they were listening to a single speaker, interacting as a group, or even following the bouncing ball in a ping-pong game.
The system thus uses crowdsourcing to provide subjective information about social groups that would otherwise be difficult or impossible for a robot to ascertain.
The researchers’ algorithm for determining “social saliency” could ultimately be used to evaluate a variety of social cues, such as the expressions on people’s faces or body movements, or data from other types of visual or audio sensors.
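A common geometric core of this kind of system can be sketched in a few lines. The following is a generic least-squares estimate, not necessarily the method the CMU team used: treat each person’s gaze as a 3D ray (head position plus gaze direction) and find the point that minimizes the summed squared distance to all rays.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def gaze_convergence(origins, directions):
    """Least-squares point closest to all gaze rays.
    Each ray (origin o, unit direction d) contributes (I - d d^T) to the
    normal equations sum(I - d d^T) p = sum((I - d d^T) o)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in zip(origins, map(normalize, directions)):
        for i in range(3):
            for j in range(3):
                P = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += P
                b[i] += P * o[j]
    return solve3(A, b)

# Hypothetical example: three viewers, all looking at the same target point.
target = [1.0, 2.0, 3.0]
origins = [[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
directions = [[t - oc for t, oc in zip(target, o)] for o in origins]
print(gaze_convergence(origins, directions))
```

In practice the rays from head-mounted cameras never intersect exactly, which is precisely why a least-squares formulation like this (rather than a literal intersection test) is the natural fit for detecting where gazes converge.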
"This really is just a first step toward analyzing the social signals of people," said Hyun Soo Park, a Ph.D. student in mechanical engineering, who worked on the project with Yaser Sheikh, assistant research professor of robotics, and Eakta Jain of Texas Instruments, who was awarded a Ph.D. in robotics last spring. "In the future, robots will need to interact organically with people and to do so they must understand their social environment, not just their physical environment."