Posts tagged neuroscience

Chimps solve puzzles for the thrill of it
The apes, which are our closest relatives in the animal kingdom, seem to get the same level of satisfaction out of solving brain teasers as their human evolutionary cousins.
A study published by the Zoological Society of London shows that six chimpanzees given a game that involved moving red dice or Brazil nuts through a maze of pipes enjoyed solving the puzzle whether they got a reward or not.
The researchers claim this suggests they got the same kind of psychological reward as humans get when solving problems.
Most problem solving witnessed in the animal kingdom, whether using tools or navigating mazes, is aimed at reaching food. Hyenas, octopuses and birds such as crows all show the ability to solve problems.
Chimpanzees have also been witnessed in the wild using tools such as a stick to forage for insects or honey in hard to reach places like tree stumps.
But ZSL researcher Fay Clark said their research showed the chimps could be motivated by more than just food.
She said: “We noticed that the chimps were keen to complete the puzzle regardless of whether or not they received a food reward.
"This strongly suggests they get similar feelings of satisfaction to humans who often complete brain games for a feel-good reward.”
People in a vegetative state may feel pain
It is a nightmare situation. A person diagnosed as being in a vegetative state has an operation without anaesthetic because they cannot feel pain. Except, maybe they can.
Alexandra Markl at the Schön clinic in Bad Aibling, Germany, and colleagues studied people with unresponsive wakefulness syndrome (UWS) – also known as vegetative state – and identified activity in brain areas involved in the emotional aspects of pain. People with UWS can make reflex movements but can’t show subjective awareness.
There are two distinct neural networks that work together to create the sensation of pain. The more basic of the two – the sensory-discriminative network – identifies the presence of an unpleasant stimulus. It is the affective network that attaches emotions and subjective feelings to the experience. Crucially, without the activity of the emotional network, your brain detects pain but won’t interpret it as unpleasant.
Using PET scans, previous studies have detected activation in the sensory-discriminative network in people with UWS but their findings were consistent with a lack of subjective awareness, the hallmark of the condition.
Now Markl and her colleagues have found evidence of activation in the affective or emotional network too (Brain and Behavior).
Her team gave moderately painful electric shocks to 30 people with UWS, while scanning their brains using fMRI. Sixteen people had some kind of brain activation – seven only in the sensory network but nine in the affective network as well.
These results raise the question of whether some diagnoses should change from UWS to minimally conscious, a state characterised by some level of awareness.
"I don’t think this paper alone will change the clinical approach to people with diagnoses such as UWS," says Donald Weaver at Dalhousie University in Halifax, Nova Scotia, Canada, who was not involved in the work. But it will encourage future study, he says.
Changing a diagnosis depends on whether neurologists are ready to accept alternative ways of diagnosing disorders of consciousness, says Boris Kotchoubey at the Institute of Medical Psychology and Behavioural Neurobiology in Tübingen, Germany, who worked on the study.
Nonetheless, Kotchoubey is confident that the way people with UWS are cared for will change, even if their diagnoses remain the same. “I know that many doctors working with such patients have been instructed to treat their patients as if they can understand them and perceive at least something in the environment, perhaps pain, pleasure, or emotion,” he says.
But not all people are treated this way. Before the study, one of Markl’s participants was given no anaesthesia for a tracheotomy, an incision in the neck that allows breathing without using the nose or mouth. As people with UWS are clinically considered unable to perceive pain, doctors are not required to give an anaesthetic.
London neuroscience centre to map ‘connectome’ of foetal brain
A state-of-the-art imaging facility at St Thomas’ Hospital in London has been awarded a 15m euro grant to map the development of nerve connections in the brain before and just after birth.
The Centre for the Developing Brain — which is partly funded by King’s College London (KCL) — has built a unique neonatal Magnetic Resonance Imaging Clinical Research Facility based in the intensive care unit of the Evelina Children’s Hospital at St Thomas’. It is one of two centres in the world — the other being at Imperial College — to have such a clinical research facility and associated scanner within a neonatal intensive care unit.
Over the next few years a team headed up by David Edwards, a consultant neonatologist and KCL Professor of Paediatrics and Neonatal Medicine, will build up a diagram of connections in the brain of babies as they develop in the womb and then after they are born. The aim is to understand how the human brain assembles itself from a functional and structural perspective. The resulting map is called a connectome and is the brain equivalent of the human genome. It will be made available to the research community to help improve understanding of neurological disorders.
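A connectome is, at heart, a graph: brain regions as nodes and measured connections as weighted edges. The following is a minimal illustrative sketch of that data structure; the region names and connection strengths are invented, not data from the project.

```python
# Minimal sketch of a connectome as a weighted, undirected graph.
# Region names and connection strengths are invented for illustration.

class Connectome:
    def __init__(self):
        self.edges = {}  # frozenset({a, b}) -> connection strength

    def connect(self, region_a, region_b, strength):
        self.edges[frozenset((region_a, region_b))] = strength

    def strength(self, region_a, region_b):
        # Undirected: the same edge is found regardless of argument order.
        return self.edges.get(frozenset((region_a, region_b)), 0.0)

    def neighbours(self, region):
        return sorted(
            r for pair in self.edges for r in pair
            if region in pair and r != region
        )

brain = Connectome()
brain.connect("thalamus", "visual_cortex", 0.8)
brain.connect("thalamus", "motor_cortex", 0.6)
```

In the real project the nodes come from parcellating MRI scans and the edge weights from measured nerve-fibre connectivity, but the graph abstraction is the same.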
Lessons from cockroaches could inform robotics
Running cockroaches start to recover from being shoved sideways before their dawdling nervous system kicks in to tell their legs what to do, researchers have found. These new insights on how biological systems stabilize could one day help engineers design steadier robots and improve doctors’ understanding of human gait abnormalities.
In experiments, the roaches were able to maintain their footing mechanically, using their momentum and the spring-like architecture of their legs, rather than neurologically, relying on impulses sent from their central nervous system to their muscles.
"The response time we observed is more than three times longer than you’d expect," said Shai Revzen, an assistant professor of electrical engineering and computer science, as well as ecology and evolutionary biology, at the University of Michigan. Revzen is the lead author of a paper on the findings published online in Biological Cybernetics. It will appear in a forthcoming print edition.
"What we see is that the animals’ nervous system is working at a substantial delay," he said. "It could potentially act a lot sooner, within about a thirtieth of a second, but instead, it kicks in after about a step and a half or two steps—about a tenth of a second. For some reason, the nervous system is waiting and seeing how it shapes out."
Revzen said the new findings might imply that the biological brain, at least in cockroaches, adjusts the gait only at whole-step intervals rather than at any point in a step. Periodic, rather than continuous, feedback systems might lead to more stable (not to mention energy-efficient) walking robots—whether they travel on two feet or six.
Robot makers often look to nature for inspiration. As animals move through the world, they have to respond to unexpected disturbances like rocky, uneven ground or damaged limbs. Revzen and his team believe that patterns in how they move as they adjust could give away how their machinery and neurology work together.
"The fundamental question is, ‘What can you do with a mechanical suspension versus one that requires electronic feedback?’" Revzen said. "The animals obviously have much better mechanical designs than anything we know how to build. But if we could learn how they do it, we might be able to reproduce it."
World premiere of muscle and nerve controlled arm prosthesis
For the first time, an operation has been conducted at Sahlgrenska University Hospital in which electrodes have been permanently implanted in the nerves and muscles of an amputee to directly control an arm prosthesis. The result allows natural control of an advanced robotic prosthesis, with motions similar to those of a natural limb.
A surgical team led by Dr Rickard Brånemark, Sahlgrenska University Hospital, has carried out the first operation of its kind, where neuromuscular electrodes have been permanently implanted in an amputee. The operation was possible thanks to new advanced technology developed by Max Ortiz Catalan, supervised by Rickard Brånemark at Sahlgrenska University Hospital and Bo Håkansson at Chalmers University of Technology.
“The new technology is a major breakthrough that has many advantages over current technology, which provides very limited functionality to patients with missing limbs,” says Rickard Brånemark.
Big challenges
There have been two major issues in the advancement of robotic prostheses: 1) how to firmly attach an artificial limb to the human body; 2) how to control the prosthesis intuitively and efficiently enough for it to be truly useful and restore lost functionality.
“This technology solves both these problems by combining a bone anchored prosthesis with implanted electrodes,” said Rickard Brånemark, who along with his team has developed a pioneering implant system called Opra, Osseointegrated Prostheses for the Rehabilitation of Amputees.
A titanium screw, a so-called osseointegrated implant, is used to anchor the prosthesis directly to the stump, which provides many advantages over the traditionally used socket prosthesis.
“It allows a complete range of motion for the patient, fewer skin-related problems and a more natural feeling that the prosthesis is part of the body. Overall, it brings better quality of life to people who are amputees,” says Rickard Brånemark.
How it works
Presently, robotic prostheses rely on electrodes placed over the skin to pick up the muscles’ electrical activity and drive a few actions of the prosthesis. The problem with this approach is that normally only two functions are regained out of the tens of different movements an able-bodied person is capable of. By using implanted electrodes, more signals can be retrieved, making control of more movements possible. Furthermore, it is also possible to provide the patient with natural perception, or “feeling”, through neural stimulation.
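The gain from more electrode channels can be pictured as a pattern-recognition problem: each intended movement produces a characteristic pattern of activity across the channels, and more channels make more patterns separable. The toy decoder below illustrates the idea only; the channel values, movement labels and nearest-template method are invented, not the actual Opra control algorithm.

```python
# Toy nearest-centroid decoder mapping multi-channel muscle activity to a
# movement label. All numbers and labels are invented for illustration;
# this is not the actual prosthesis control system.
import math

# Mean activity pattern per movement, one value per electrode channel.
TEMPLATES = {
    "hand_open":  (0.9, 0.1, 0.2, 0.1),
    "hand_close": (0.1, 0.8, 0.1, 0.2),
    "wrist_turn": (0.2, 0.1, 0.9, 0.7),
}

def decode(sample):
    """Return the movement whose template is closest to the measured sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda move: dist(TEMPLATES[move], sample))
```

With only two surface channels, few such patterns are distinguishable; implanted electrodes add channels, and with them separable movements.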
“We believe that implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement,” says Rickard Brånemark.
The patient
The first patient has recently been treated with this technology, and the first tests gave excellent results. The patient, a previous user of a robotic hand, reported major difficulties in operating that device in cold and hot environments and interference from shoulder muscles. These issues have now disappeared, thanks to the new system, and the patient has now reported that almost no effort is required to generate control signals. Moreover, tests have shown that more movements may be performed in a coordinated way, and that several movements can be performed simultaneously.
“The next step will be to test electrical stimulation of nerves to see if the patient can sense environmental stimuli, that is, get an artificial sensation. The ultimate goal is to make a more natural way to replace a lost limb, to improve the quality of life for people with amputations,” says Rickard Brånemark.

Concepts in our minds – from Luke Skywalker to our grandmother – are represented by their own distinct group of neurons, according to new research involving a University of Leicester neuroscientist.
The research, by neuroscientist Professor Rodrigo Quian Quiroga from the University of Leicester Centre for Systems Neuroscience together with Professor Itzhak Fried, of the UCLA David Geffen School of Medicine, Tel Aviv Sourasky Medical Center and Tel Aviv University, and Professor Christof Koch, of the California Institute of Technology and Allen Institute for Brain Science, Seattle, is featured in a recent article of the prestigious Scientific American magazine.
Recent experiments during brain surgeries have shown that small groups of brain cells are responsible for encoding memories of specific people or objects.
These neurons may also represent different variations of one thing – from the name of a person to their appearance from many different viewpoints.
The researchers believe that single concepts may be represented by mere thousands of neurons or fewer – a tiny fraction of the billion or so neurons contained in the medial temporal lobe, a memory-related structure within the brain.
The group were able to monitor the brain activity of consenting patients undergoing surgery to treat epilepsy. This allowed the team to monitor the activity of single neurons in conscious patients while they looked at images on laptop screens, creating and recalling memories.
In previous experiments, they had found that single neurons would ‘fire’ for specific concepts – such as Luke Skywalker – even when they were viewing images of him from different angles or simply hearing or reading his name.
They have also found that single neurons can also fire to related people and objects – for instance, the neuron that responded to Luke Skywalker also fired to Yoda, another Jedi from Star Wars.
They argue that relatively small groups of neurons hold concepts like Luke Skywalker and that related concepts such as Yoda are held by some but not all of the same neurons. At the same time, a completely separate set of neurons would hold an unrelated concept like Jennifer Aniston.
The group believes this partially overlapping representation of related concepts is the neural underpinning of encoding associations, a key memory function.
Professor Quian Quiroga said: “After the first thrill of finding neurons in the human hippocampus with such remarkable firing characteristics, converging evidence from experiments we have been carrying out in recent years suggests that we may be hitting one of the key mechanisms of memory formation and recall.
“The abstract representation of concepts provided by these neurons is indeed ideal for representing the meaning of the sensory stimuli around us, the internal representation we use to form and retrieve memories. These concept cells, we believe, are the building blocks of memory functions.”
Working with a group from Nagasaki University, a research group at the Center for iPS Cell Research and Application (CiRA) has successfully modeled Alzheimer’s disease (AD) using both familial and sporadic patient-derived induced pluripotent stem cells (iPSCs), and revealed stress phenotypes and differential drug responsiveness associated with intracellular amyloid β oligomers in AD neurons and astrocytes.
In a study published online in Cell Stem Cell, Associate Professor Haruhisa Inoue and his team at CiRA and a research group led by Professor Nobuhisa Iwata of Nagasaki University generated cortical neurons and astrocytes from iPSCs derived from two familial AD patients with mutations in amyloid precursor protein (APP), and two sporadic AD patients. The neural cells from one of the familial and one of the sporadic patients showed endoplasmic reticulum (ER)-stress and oxidative-stress phenotypes associated with intracellular Aβ oligomers. The team also found that these stress phenotypes were attenuated with docosahexaenoic acid (DHA) treatment. These findings may help explain the variable clinical results obtained using DHA treatment, and suggest that DHA may in fact be effective only for a subset of patients.
Using both familial and sporadic AD iPSCs, the researchers discovered that pathogenesis differed between individual AD patients. For example, secreted Aβ42 levels were depressed in familial AD with APP E693Δ mutation, elevated in familial AD with APP V717L mutation, but normal in sporadic AD.
"This shows that patient classification by iPSC technology may contribute to a preemptive therapeutic approach toward AD," said Inoue, a principal investigator at CiRA who is also a research director for the CREST research program funded by the Japan Science and Technology Agency. "Further advances in iPSC technology will be required before large-scale analysis of AD patient-specific iPSCs is possible."
IQ loss linked to schizophrenia genes
People at greater genetic risk of schizophrenia could see a fall in IQ as they age, a study shows.
Scientists at the University of Edinburgh say the IQ decline in those at risk could happen even if they do not develop schizophrenia.
The findings could lead to new research into how different genes for schizophrenia affect brain function over time. Schizophrenia - a severe mental disorder characterised by delusions and by hallucinations - is in part caused by genetic factors.
The researchers used the latest genetic analysis techniques to reach their conclusion on how thinking skills change with age.
Retaining our thinking skills as we grow older is important for living well and independently. If nature has loaded a person’s genes towards schizophrenia, then there is a slight but detectable worsening in cognitive functions between childhood and old age. -Professor Ian Deary (Director of the University of Edinburgh’s Centre for Cognitive Ageing and Cognitive Epidemiology)
Historical data
They compared the IQ scores of more than 1,000 people from Edinburgh.
The people were tested for general cognitive functions in 1947, aged 11, and again when they were around 70 years old.
The researchers were able to examine people’s genes and calculate each subject’s genetic likelihood of developing schizophrenia, even though none of the group had ever developed the illness.
They then compared the IQ scores of people with a high and low risk of developing schizophrenia.
Scientists found that there was no difference at age 11, but people with a greater genetic risk of schizophrenia had slightly lower IQs at age 70.
Those people who had more genes linked to schizophrenia also had a greater estimated fall in IQ over their lifetime than those at lower risk.
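The analysis described above can be pictured in two steps: compute a polygenic score for each person by summing risk-allele counts weighted by per-variant effect sizes, then compare lifetime IQ change between the high- and low-score groups. The sketch below uses entirely made-up numbers and a crude median split; it is not the study's data, variants or statistical method.

```python
# Sketch of a polygenic-score comparison. All counts, weights and IQ values
# are invented for illustration; this is not the study's data or method.

def polygenic_score(allele_counts, weights):
    """Weighted sum of risk-allele counts (0, 1 or 2 per variant)."""
    return sum(c * w for c, w in zip(allele_counts, weights))

weights = [0.3, 0.1, 0.2]  # illustrative per-variant effect sizes

people = [
    # (risk-allele counts, IQ at age 11, IQ at ~70)
    ([2, 1, 2], 102, 96),
    ([0, 1, 0], 101, 100),
    ([2, 2, 1], 99, 94),
    ([0, 0, 1], 100, 101),
]

scores = [polygenic_score(a, weights) for a, _, _ in people]
cutoff = sorted(scores)[len(scores) // 2]  # crude median split

high = [iq70 - iq11 for a, iq11, iq70 in people
        if polygenic_score(a, weights) >= cutoff]
low = [iq70 - iq11 for a, iq11, iq70 in people
       if polygenic_score(a, weights) < cutoff]

mean_change_high = sum(high) / len(high)
mean_change_low = sum(low) / len(low)
```

In these toy numbers the higher-scoring group shows the larger lifetime IQ drop, mirroring the direction of the reported finding.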
Cognitive impact
With further research into how these genes affect the brain, it could become possible to understand how genes linked to schizophrenia affect people’s cognitive functions as they age. -Professor Andrew McIntosh (Centre for Clinical Brain Sciences)
Schizophrenia affects around 1 per cent of the population, often in the teenage or early adult years, and is associated with problems in mental ability and memory.
The study, which was funded by the BBSRC, Age UK, and the Chief Scientist Office, is published in the journal Biological Psychiatry.
Has evolution given humans unique brain structures?
Humans have at least two functional networks in their cerebral cortex not found in rhesus monkeys. This means that new brain networks were likely added in the course of evolution from primate ancestor to human. These findings, based on an analysis of functional brain scans, were published in a study by neurophysiologist Wim Vanduffel (KU Leuven and Harvard Medical School) in collaboration with a team of Italian and American researchers.
Our ancestors evolutionarily split from those of rhesus monkeys about 25 million years ago. Since then, brain areas have been added, have disappeared or have changed in function. This raises the question, ‘Has evolution given humans unique brain structures?’. Scientists have entertained the idea before but conclusive evidence was lacking. By combining different research methods, we now have a first piece of evidence that could prove that humans have unique cortical brain networks.
Professor Vanduffel explains: “We did functional brain scans in humans and rhesus monkeys at rest and while watching a movie to compare both the place and the function of cortical brain networks. Even at rest, the brain is very active. Different brain areas that are active simultaneously during rest form so-called ‘resting state’ networks. For the most part, these resting state networks in humans and monkeys are surprisingly similar, but we found two networks unique to humans and one unique network in the monkey.”
“When watching a movie, the cortex processes an enormous amount of visual and auditory information. The human-specific resting state networks react to this stimulation in a totally different way than any part of the monkey brain. This means that they also have a different function than any of the resting state networks found in the monkey. In other words, brain structures that are unique in humans are anatomically absent in the monkey, and no other brain structures in the monkey have an analogous function. Our unique brain areas are primarily located high at the back and at the front of the cortex and are probably related to specific human cognitive abilities, such as human-specific intelligence.”
The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to a particular task, thus allowing activity to be tracked.
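The "resting state" networks described above are identified by correlating activity time series between brain areas: regions whose signals rise and fall together are grouped into one network. The toy sketch below uses short synthetic signals and a Pearson correlation, not real fMRI data or the study's analysis pipeline.

```python
# Toy example: group brain regions whose activity time series correlate.
# The signals are synthetic; real resting-state analysis uses fMRI data
# and far more sophisticated statistics.
import math

def correlation(xs, ys):
    """Pearson correlation between two equal-length time series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

signals = {
    "region_A": [1.0, 2.0, 3.0, 2.0, 1.0],
    "region_B": [1.1, 2.1, 2.9, 2.0, 1.2],   # tracks region_A closely
    "region_C": [3.0, 2.0, 1.0, 2.0, 3.0],   # moves opposite to region_A
}

# Regions whose signals co-fluctuate strongly belong to one network.
same_network = correlation(signals["region_A"], signals["region_B"]) > 0.8
```

Comparing where such correlated clusters sit in human versus monkey cortex is, in essence, how network correspondences (and human-specific networks) were assessed.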
Linguistics and biology researchers propose a new theory on the deep roots of human speech.

“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”
Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.
“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.
The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.
Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.
“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”
These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.
“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”
A new chapter in the songbook
The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.
To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
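The two layers can be pictured computationally: the lexical layer holds the fixed core elements, while expression-layer operations rearrange them into different surface sentences without changing that core. The representation below is invented purely to illustrate the distinction and carries no linguistic weight.

```python
# Toy illustration of the two-layer idea: one lexical core, two
# expression-layer arrangements. The representation is invented for
# illustration only (tense handling is hard-coded).

core = {"subject": "Todd", "verb": "see", "object": "condor"}

def statement(c):
    # Expression layer: declarative arrangement of the same core.
    return f"{c['subject']} saw a {c['object']}."

def question(c):
    # Expression layer: interrogative rearrangement of the same core.
    return f"When did {c['subject']} {c['verb']} a {c['object']}?"
```

Both surface forms draw on the identical core; only the expression-layer operation differs, which is the distinction the paragraph above describes.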
Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.
By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of foods to their peers; other primates can make a range of sounds, comprising warnings about predators and other messages.
Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite string of words. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.
“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.
As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”
Birds and bees
Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”
Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.
The researchers acknowledge that further empirical studies on the subject would be desirable.
“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”
Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.
MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.
“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”
(Source: web.mit.edu)