Posts tagged brain

IBM Research And LLNL Claim 10^14 Synapse Simulation
Inspired by the function, power, and volume of the organic brain, IBM is reportedly developing TrueNorth, a novel modular, scalable, non-von Neumann, ultra-low-power cognitive computing architecture. The TrueNorth system consists of a scalable network of neurosynaptic cores, with each core containing neurons, dendrites, synapses, and axons. To support computation on TrueNorth, IBM has also developed Compass, a multi-threaded, massively parallel functional simulator, and a parallel compiler that maps a network of long-distance pathways in the macaque monkey brain to TrueNorth.
The research was recently presented at the Supercomputing 2012 (SC12) conference in Salt Lake City. The paper, “Compass: A scalable simulator for an architecture for Cognitive Computing,” is available online.
IBM and Lawrence Livermore National Laboratory (LLNL) demonstrated near-perfect weak scaling on a 16-rack IBM Blue Gene/Q (262,144 processor cores, 256 TB memory), achieving an unprecedented scale of 256 million neurosynaptic cores containing 65 billion neurons and 16 trillion synapses, running only 388× slower than real time with an average spiking rate of 8.1 Hz. Using emerging PGAS communication primitives, IBM also demonstrated 2× better real-time performance over MPI primitives on a 4-rack Blue Gene/P (16,384 processor cores, 16 TB memory).
Since submitting the original paper, the work has continued on 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory Sequoia supercomputer (1,572,864 processor cores, 1.5 PB memory, 98,304 MPI processes, and 6,291,456 threads), where IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 53×10^10 neurons and 1.37×10^14 synapses, running only 1,542× slower than real time. A PDF of the IBM Research Report, RJ 10502, is available.
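The reported totals can be sanity-checked with a short script. This is a sketch using only the figures quoted above; the per-core breakdown is inferred from those totals, not stated explicitly in the paper:

```python
# Figures quoted above for the two Compass runs.
runs = {
    "16-rack Blue Gene/Q": dict(cores=256e6, neurons=65e9, synapses=16e12),
    "96-rack Sequoia": dict(cores=2.084e9, neurons=53e10, synapses=1.37e14),
}

for name, r in runs.items():
    neurons_per_core = r["neurons"] / r["cores"]        # ~254 in both runs
    synapses_per_neuron = r["synapses"] / r["neurons"]  # ~246 and ~258
    print(f"{name}: {neurons_per_core:.0f} neurons/core, "
          f"{synapses_per_neuron:.0f} synapses/neuron")
```

Both runs work out to roughly 254 neurons per neurosynaptic core, suggesting the Sequoia result is the same core design scaled up by about 8× in core count.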
The image above shows a network of neurosynaptic cores derived from long-distance wiring in the monkey brain. Neurosynaptic cores are locally clustered into brain-inspired regions, and each core is represented as an individual point along the ring. Arcs are drawn from a source core to a destination core, with the edge color defined by the color assigned to the source core.

Virtual Reality Could Spot Real-World Impairments
A virtual reality test being developed at UTSC might do a better job than pencil-and-paper tests of predicting whether a cognitive impairment will have real-world consequences.
The test, developed by Konstantine Zakzanis, associate professor of psychology, and colleagues, uses a computer-game-like virtual world and asks volunteers to navigate their way through tasks such as delivering packages or running errands around town.
“If we’re being asked to tell if people could do things like work, houseclean, and take care of their kids, we need to show that our tests predict performance in the real world,” says Zakzanis.
But standard tests don’t do that very well, he says. Tests that ask people to solve math problems, sort cards, remember names, or judge the relative positions of lines in two-dimensional visual space can detect cognitive impairments caused by circumscribed lesions following a stroke or head injury, but they’re not very good at predicting who will be able to function in the real world and who won’t.
That’s a problem for cognitively impaired people who might be denied insurance benefits or workers’ compensation based on tests too insensitive to demonstrate their impairment. It is akin to having a broken arm with no X-ray to prove it.
Optogenetics illuminates pathways of motivation through brain
Whether you are an apple tree or an antelope, survival depends on using your energy efficiently. In a difficult or dangerous situation, the key question is whether exerting effort — sending out roots in search of nutrients in a drought or running at top speed from a predator — will be worth the energy.
In a paper published online Nov. 18 in Nature, Karl Deisseroth, MD, PhD, a professor of bioengineering and of psychiatry and behavioral sciences at Stanford University, and postdoctoral scholar Melissa Warden, PhD, describe how they have isolated the neurons that carry these split-second decisions to act from the higher brain to the brain stem. In doing so, they have provided insight into the causes of severe brain disorders such as depression.
In organisms as complex as humans, the neural mechanisms that help answer the question, “Is it worth my effort?” can fail, leading to debilitating mental illnesses. Major depressive disorder, for instance, which affects nearly 20 percent of people at some point in life, is correlated with underperformance in the parts of the brain involved in motivation. But researchers have struggled to work out the exact cause and effect.
“It’s challenging because we do not have a fundamental understanding of the circuitry that controls this sort of behavioral pattern selection. We don’t understand what the brain is doing wrong when these behaviors become dysfunctional, or even what the brain is supposed to be doing when things are working right,” Deisseroth said. “This is the level of the mystery we face in this field.”
Clinicians refer to this slowing down of motivation in depressed patients as “psychomotor retardation.” According to Deisseroth, who is also a practicing psychiatrist, patients may experience this symptom mentally, finding it hard to envision the positive results of an action, or, he said, they may feel physically heavy, like their limbs just do not want to move.
“This is one of the most debilitating aspects of depression, and motivation to take action is something that we can model in animals. That’s the exciting opportunity for us as researchers,” said Deisseroth, who also holds the D.H. Chen Professorship.
Schizophrenia wrecks the lives of millions worldwide – and has defeated researchers looking for a single cause. Time for complex new thinking.
PAUL is 21. He thinks the voices started a couple of years ago, but it’s hard to remember exactly because they just seemed to fade in. They whisper insistently, commenting on his actions, trying to control his thoughts and feelings. Living with them is a constant battle, causing him to drop out of college and stop seeing friends. He has been treated in hospital and is being prescribed antipsychotic drugs, but he sees all this as part of a conspiracy.
Paul’s world view is informed by psychosis. This mental state disrupts perception and the interpretation of reality, and is characterised by hallucinations and delusions. Doctors recognise psychosis as a marker for many medical conditions ranging from those caused by electrolyte disturbance to epilepsy, dementia and rare autoimmune disorders.
In Paul’s case these conditions are rapidly excluded. After other short-lived, mood or drug-related causes are also excluded, Paul is diagnosed with schizophrenia - one of a group of disorders characterised by psychosis. But schizophrenia also affects Paul’s emotional and verbal responsiveness, motivation and insight. And it is these functional symptoms that are its most disabling features because they erode the ability to interact with others, maintain social contacts and work.
So what is schizophrenia? In the late 19th century German psychiatrist Emil Kraepelin identified the symptoms and presentation of a disease later called schizophrenia by Eugen Bleuler, a Swiss psychiatrist. Bleuler saw it as an umbrella term for a collection of diseases. Despite attempts to define subtypes or identify specific forms, schizophrenia is still treated broadly as a single disease, and it affects around 1 per cent of adults.
So a shorter, more honest answer to the question of what schizophrenia is would be that we won’t really know until we can define its neurobiological basis. For now, psychosis represents a major frontier in neuroscience because it shakes our certainties about the way we see the world - and understand the brain.
Researchers at the University of Copenhagen have found that a protein, known for causing cancer cells to spread around the body, is also one of the molecules that trigger repair processes in the brain. These findings are the subject of a paper, published this week in Nature Communications. They point the way to new avenues of research into degenerative brain diseases like Alzheimer’s.

How to repair brain injuries is a fundamental question facing brain researchers. Scientists have known for some time that the protein S100A4 is a factor in metastasis, or how cancer spreads. However, this is the first time the protein has been shown to play a role in brain protection and repair.
“This protein is not normally in the brain; it appears only when there’s trauma or degeneration. When we deleted the protein in mice, we discovered that their brains were less protected and less able to resist injury. We also discovered that S100A4 works by activating signalling pathways inside neurons,” says postdoc Oksana Dmytriyeva, who worked on the research in a team at the Protein Laboratory in the Department of Neuroscience and Pharmacology at the University of Copenhagen.
The villain turns out to be the hero
This research stands on the shoulders of many years of work on S100A4 in its deadlier role in cancer progression. The discovery represents a significant development for the new Neuro-Oncology Group that moved to the University of Copenhagen’s Protein Laboratory Group from the Danish Cancer Society in October.
“We were surprised to find this protein in this role, as we thought it was purely a cancer protein. We are very excited about it and we’re looking forward to continuing our research in a practical direction. We hope that the findings will eventually benefit people who need treatment for neurodegenerative disorders like Alzheimer’s disease, although obviously we have a long way to go before we get to that point,” says Oksana Dmytriyeva.
(Source: news.ku.dk)
Zooming in on the human brain
A visually compelling tour of the human brain, from anatomy to cells to genes and back.
Could neuroscientists be the next great architects?
In everyday life we rarely consciously try to lip-read. However, in a noisy environment it is often very helpful to be able to see the mouth of the person you are speaking to. Researcher Helen Blank at the MPI in Leipzig explains why this is so: “When our brain is able to combine information from different sensory sources, for example during lip-reading, speech comprehension is improved.” In a recent study, the researchers of the Max Planck Research Group “Neural Mechanisms of Human Communication” investigated this phenomenon in more detail to uncover how visual and auditory brain areas work together during lip-reading.
In the experiment, brain activity was measured using functional magnetic resonance imaging (fMRI) while participants heard short sentences. The participants then watched a short silent video of a person speaking. Using a button press, participants indicated whether the sentence they had heard matched the mouth movements in the video. If the sentence did not match the video, a part of the brain network that combines visual and auditory information showed greater activity, and connectivity increased between the auditory speech region and the superior temporal sulcus (STS).
“It is possible that advanced auditory information generates an expectation about the lip movements that will be seen”, says Blank. “Any contradiction between the prediction of what will be seen and what is actually observed generates an error signal in the STS.”
How strong the activation is depends on the participants’ lip-reading skill: the stronger the activation, the more correct the responses. “People who were the best lip-readers showed an especially strong error signal in the STS,” Blank explains. This effect seems to be specific to the content of speech; it did not occur when the subjects had to decide whether the identity of the voice and face matched.
The results of this study are very important to basic research in this area. A better understanding of how the brain combines auditory and visual information during speech processing could also be applied in clinical settings. “People with hearing impairment are often strongly dependent on lip-reading”, says Blank. The researchers suggest that further studies could examine what happens in the brain after lip-reading training or during a combined use of sign language and lip-reading.

A team of cognitive neuroscientists has identified the areas of the brain responsible for processing specific word meanings, bringing us one step closer to developing multilingual mind-reading machines.
Presenting the findings at the Society for the Neurobiology of Language Conference in San Sebastián, Spain, Joao Correia of Maastricht University explained that his team decided to answer one central question: “how do we represent the meaning of words independent of the language we are listening to?”
Past studies have focused on identifying areas of the brain that generate and hear general terms or feelings. However, if we can locate where the actual concept of a word — which transcends language — is processed, we would be able to read the mind of any individual.

The recent case of 39-year-old Scott Routley letting doctors know he is not in pain, just by thinking, is a prime example of where this could be extremely effective in the future. After not responding to any stimulation for more than a decade, Routley was thought to be in a persistent vegetative state. However, by studying fMRI scans in real time, neurologists could identify that Routley was in fact responding to their questions — they asked him to think about playing tennis or walking around at home to indicate yes or no. These two actions are processed in different areas of the brain, so answers could be extracted by reading scans.

With Correia’s approach, we would need no signifier for yes or no — we could go straight to the source where the processing of the meaning of positive and negative takes place; the “hub”, as he puts it.
"This fMRI study investigates the neural network of speech processing responsible for transforming sound to meaning, by exploring the semantic similarities between bilingual wordpairs," explains an abstract of the study. To achieve this, they needed bilingual volunteers, so worked with eight Dutch candidates all fluent in English. First off, the team monitored the volunteers’ neural activity while saying the words "bull", "horse", "shark" and "duck" in English. All the words chosen had one syllable, were from a similar group and were probably learnt round the same period — this ensured that any differences would specifically relate to meaning. Different brain activity patterns appeared in the left anterior temporal cortex, and each of these were then fed into an algorithm so it would be able to flag up when one of the words was uttered again.
The hypothesis was that if the algorithm could still correctly identify the words when they were spoken in Dutch, these patterns would hold the key to where word concepts are derived. The algorithm did exactly that, demonstrating that word meanings are encoded in the same way in the brain, regardless of language.
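The logic of that decoding step can be illustrated with a toy simulation. This is not the study’s actual analysis (which used real fMRI data and its own classifier); here a simple nearest-centroid classifier is trained on simulated “English” voxel patterns and tested on “Dutch” patterns that share only a concept-specific component, standing in for the language-independent representation:

```python
import numpy as np

rng = np.random.default_rng(0)
concepts = ["bull", "horse", "shark", "duck"]
n_voxels, n_trials, noise = 200, 20, 1.0

# Hypothetical concept signatures: the component shared across languages.
signature = {c: rng.normal(size=n_voxels) for c in concepts}

def trials(concept):
    """Simulated fMRI patterns: concept signature plus per-trial noise."""
    return signature[concept] + noise * rng.normal(size=(n_trials, n_voxels))

# "Train" on English trials: store each concept's mean pattern (centroid).
centroids = {c: trials(c).mean(axis=0) for c in concepts}

# "Test" on Dutch trials: classify each by its most correlated centroid.
correct = total = 0
for c in concepts:
    for pattern in trials(c):
        pred = max(concepts,
                   key=lambda k: np.corrcoef(pattern, centroids[k])[0, 1])
        correct += (pred == c)
        total += 1
print(f"cross-language accuracy: {correct / total:.2f}")
```

Because the concept signature carries over between the training and test sets while the noise does not, the classifier generalises across the simulated “languages” — the same intuition behind the cross-language result reported above.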
There is one pretty major drawback to the process, which quashes any visions of a full-on real-time mind translation machine hitting stores anytime soon — the neural activity patterns differed slightly from person to person. Our neurons learn and identify in unique ways, and understanding these pathway patterns through machine learning would be a long process. “You would have to scan a person as they thought their way through a dictionary,” said Matt Davis of the MRC Cognition and Brain Sciences Unit in Cambridge. It would be difficult to translate a mind now without this concept map. However, we are only at the beginning of this line of study, and an algorithm could potentially be devised to aggregate hundreds of neural activity patterns to help indicate what the brain activity of an individual unable to communicate represents.