Posts tagged science

(Image caption: 3D image of the hippocampus of a rat. Credit: M. Pyka)
People who wish to know how memory works must look into the brain. They can now do so non-invasively: RUB researchers have developed a new method for creating 3D models of memory-relevant brain structures. They published their results in the journal “Frontiers in Neuroanatomy”.
The seahorse gave the hippocampus its name
The way neurons are interconnected in the brain is highly complex. This is especially true for the cells of the hippocampus, one of the oldest brain regions, whose shape resembles a seahorse (hippocampus in Latin). The hippocampus enables us to navigate space reliably and to form personal memories. Until now, gaps in the anatomical knowledge of the networks inside the hippocampus, and of its connections to the rest of the brain, have left scientists guessing which information arrives where and when.
Signals spread through the brain
To address this, Dr Martin Pyka and his colleagues from the Mercator Research Group have developed a method that makes it easier to reconstruct the brain’s anatomical data as a 3D model on the computer. The approach is unique in that it automatically calculates neural interconnections based on the neurons’ positions in space and their projection directions. Biologically plausible network structures can thus be generated more easily than with previously available methods. Using these 3D models, the researchers track how neural signals spread through the network over time. They have, for example, found evidence that the form and size of the hippocampus could explain why neurons in these networks fire at certain frequencies.
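The core idea, deriving connectivity automatically from each neuron’s position and projection direction, can be illustrated with a small sketch. This is a hypothetical toy rule, not the actual algorithm from the paper: a neuron synapses onto any neighbour that lies close enough and roughly along its projection direction.

```python
import math

def connect_by_geometry(neurons, max_dist=1.0, max_angle=math.pi / 4):
    """Toy rule: neuron A synapses onto neuron B if B lies within
    max_dist of A and roughly along A's projection direction.
    Each neuron is (position, unit-length projection direction)."""
    edges = []
    for i, (pos_a, dir_a) in enumerate(neurons):
        for j, (pos_b, _) in enumerate(neurons):
            if i == j:
                continue
            d = [b - a for a, b in zip(pos_a, pos_b)]
            dist = math.sqrt(sum(c * c for c in d))
            if dist == 0 or dist > max_dist:
                continue
            # cosine of the angle between A's direction and the A->B vector
            dot = sum(u * v for u, v in zip(dir_a, d)) / dist
            if math.acos(max(-1.0, min(1.0, dot))) <= max_angle:
                edges.append((i, j))
    return edges

# Three neurons: 0 projects along +x toward 1; neuron 2 sits off-axis.
neurons = [((0, 0, 0), (1, 0, 0)),
           ((0.5, 0, 0), (1, 0, 0)),
           ((0, 0.9, 0), (1, 0, 0))]
print(connect_by_geometry(neurons))  # -> [(0, 1)]
```

Only the 0→1 connection survives both the distance and the direction test, which is the kind of geometric constraint that makes such networks biologically plausible.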
How information becomes memories
In the future, this method may help us understand how animals combine different kinds of information into memories within the hippocampus, for example in order to memorise food sources or dangers and to recall them in certain situations.
(Image caption: The structure determines the function: AMPA receptors in the nerve cells of the brain are composed of a range of more than 30 different proteins. Source: Bernd Fakler)
Understanding the Components of Memory
Dr. Uwe Schulte, Dr. Jochen Schwenk, Prof. Dr. Bernd Fakler, and their team have elucidated the enormous spatial and temporal dynamics in protein composition of the AMPA-type glutamate receptors, the most important excitatory neurotransmitter receptors in the brain. These receptors are located in the synapses, the contact points between two nerve cells, where they are responsible for the rapid signal transduction and information processing. The results illustrate that the receptors are far more diverse than previously anticipated and pave the way for research into their functions in the various regions of the brain. The biologists published their findings in the journal Neuron.
The researchers have thus opened up the possibility to investigate the properties and functions of the AMPA receptors in the various regions of the brain at the level of their protein components. This is of particular significance as the AMPA receptors and their dynamics are regarded as central elements for memory formation. The researchers succeeded in elucidating the subunit structure of the AMPA receptors in various regions of the brain and even in different groups of distinct nerve cells. It became clear that the receptors exhibit an enormous range of variation in structure and molecular architecture and can evidently be precisely adapted to the function of the nerve cells and brain region in which they are located. In addition, the researchers demonstrated that this diversity in protein composition of the receptors is also exploited during the development of the brain.
In 2012, Fakler’s research team used novel proteomic technologies to show that AMPA receptors in the brain are assembled from a pool of more than 30 different proteins, whose primary functions are for the most part still unknown. In another recent study, also published in Neuron, the researchers demonstrated just how significant these unknown components can be: they showed that the cornichon protein dictates the time course of AMPA receptor-mediated synaptic transmission and thus accounts for differences between various types of nerve cells in the brain.
The Nobel Assembly at Karolinska Institutet has decided to award the 2014 Nobel Prize in Physiology or Medicine with one half to John O’Keefe and the other half jointly to May-Britt Moser and Edvard I. Moser for their discoveries of cells that constitute a positioning system in the brain.
How do we know where we are? How can we find the way from one place to another? And how can we store this information in such a way that we can immediately find the way the next time we trace the same path? This year’s Nobel Laureates have discovered a positioning system, an “inner GPS” in the brain that makes it possible to orient ourselves in space, demonstrating a cellular basis for higher cognitive function.
In 1971, John O’Keefe discovered the first component of this positioning system. He found that a type of nerve cell in an area of the brain called the hippocampus was always activated when a rat was at a certain place in a room. Other nerve cells were activated when the rat was at other places. O’Keefe concluded that these “place cells” formed a map of the room.
More than three decades later, in 2005, May-Britt and Edvard Moser discovered another key component of the brain’s positioning system. They identified another type of nerve cell, which they called “grid cells”, that generate a coordinate system and allow for precise positioning and pathfinding. Their subsequent research showed how place and grid cells make it possible to determine position and to navigate.
The discoveries of John O’Keefe, May-Britt Moser and Edvard Moser have solved a problem that has occupied philosophers and scientists for centuries – how does the brain create a map of the space surrounding us and how can we navigate our way through a complex environment?
How do we experience our environment?
The sense of place and the ability to navigate are fundamental to our existence. The sense of place gives a perception of position in the environment. During navigation, it is interlinked with a sense of distance that is based on motion and knowledge of previous positions.
Questions about place and navigation have engaged philosophers and scientists for a long time. More than 200 years ago, the German philosopher Immanuel Kant argued that some mental abilities exist as a priori knowledge, independent of experience. He considered the concept of space as an inbuilt principle of the mind, one through which the world is and must be perceived. With the advent of behavioural psychology in the mid-20th century, these questions could be addressed experimentally. When Edward Tolman examined rats moving through labyrinths, he found that they could learn how to navigate, and proposed that a “cognitive map” formed in the brain allowed them to find their way. But questions still lingered - how would such a map be represented in the brain?
John O’Keefe and the place in space
John O’Keefe was fascinated by the problem of how the brain controls behaviour and decided, in the late 1960s, to attack this question with neurophysiological methods. When recording signals from individual nerve cells in a part of the brain called the hippocampus, in rats moving freely in a room, O’Keefe discovered that certain nerve cells were activated when the animal was at a particular place in the environment (Figure 1). He could demonstrate that these “place cells” were not merely registering visual input, but were building up an inner map of the environment. O’Keefe concluded that the hippocampus generates numerous maps, represented by the collective activity of place cells that are activated in different environments. Therefore, the memory of an environment can be stored as a specific combination of place cell activities in the hippocampus.
May-Britt and Edvard Moser find the coordinates
May-Britt and Edvard Moser were mapping the connections to the hippocampus in rats moving in a room when they discovered an astonishing pattern of activity in a nearby part of the brain called the entorhinal cortex. Here, certain cells were activated when the rat passed multiple locations arranged in a hexagonal grid (Figure 2). Each of these cells was activated in a unique spatial pattern and collectively these “grid cells” constitute a coordinate system that allows for spatial navigation. Together with other cells of the entorhinal cortex that recognize the direction of the head and the border of the room, they form circuits with the place cells in the hippocampus. This circuitry constitutes a comprehensive positioning system, an inner GPS, in the brain (Figure 3).
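The hexagonal firing pattern of a grid cell is often idealized in textbooks as the sum of three cosine gratings whose wave vectors are 60 degrees apart, which places firing fields on a hexagonal lattice. A minimal sketch of that standard model (parameters are illustrative, not the Mosers’ recorded data):

```python
import math

def grid_rate(x, y, spacing=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate: sum of three cosine gratings
    whose wave vectors are 60 degrees apart. The rate peaks (at 3.0)
    on the vertices of a hexagonal lattice with the given spacing."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for this spacing
    rate = 0.0
    for angle in (0.0, math.pi / 3, 2 * math.pi / 3):
        kx, ky = k * math.cos(angle), k * math.sin(angle)
        rate += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return rate

print(grid_rate(0.0, 0.0))  # firing field at the phase origin -> 3.0
print(grid_rate(0.0, 1.0))  # neighbouring field one grid spacing away -> ~3.0
```

Moving the rat through such a rate map would activate the cell at multiple locations arranged in the hexagonal grid the article describes.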
A place for maps in the human brain
Recent investigations with brain imaging techniques, as well as studies of patients undergoing neurosurgery, have provided evidence that place and grid cells exist also in humans. In patients with Alzheimer’s disease, the hippocampus and entorhinal cortex are frequently affected at an early stage, and these individuals often lose their way and cannot recognize the environment. Knowledge about the brain’s positioning system may, therefore, help us understand the mechanism underpinning the devastating spatial memory loss that affects people with this disease.
The discovery of the brain’s positioning system represents a paradigm shift in our understanding of how ensembles of specialized cells work together to execute higher cognitive functions. It has opened new avenues for understanding other cognitive processes, such as memory, thinking and planning.
(Source: nobelprize.org)

Neuroimaging could be the key to a better society
Neuroimaging techniques are a rapidly emerging technology and could bring about a revolution in various areas of society, provided we decide in time which direction we want to steer these developments. This is one of the conclusions from a series of dialogues between neuroscientists and future users, organised for the research project Towards an appropriate societal embedding of neuroimaging. The project is part of the NWO research programme Responsible Innovation.
Ache, agony, distress and pain draw more attention than non-pain-related words among people who suffer from chronic pain, a York University study using state-of-the-art eye-tracking technology has found.

“People suffering from chronic pain pay more frequent and longer attention to pain-related words than individuals who are pain-free,” says Samantha Fashler, a PhD candidate in the Faculty of Health and the lead author of the study. “Our eye movements — the things we look at — generally reflect what we attend to, and knowing how and what people pay attention to can be helpful in determining who develops chronic pain.”
Chronic pain currently affects about 20 per cent of the population in Canada.
The current study, “More than meets the eye: visual attention biases in individuals reporting chronic pain”, published in the Journal of Pain Research, incorporated an eye-tracker, which is a more sophisticated measuring tool to test reaction time than the previously used dot-probe task in similar studies.
“The use of an eye-tracker opens up a number of previously unavailable avenues for research to more directly tap what people with chronic pain attend to and how this attention may influence the presence of pain,” says Professor Joel Katz, Canada Research Chair in Health Psychology, the co-author of the study.
The researchers recorded both reaction times and eye movements of 51 chronic-pain and 62 pain-free participants. Both groups viewed neutral and sensory pain-related words in a dot-probe task. They found that reaction time did not indicate attention, but “the eye-tracking technology captured eye gaze patterns with millimetre precision,” according to Fashler. She points out that this helped the researchers determine how frequently and for how long individuals looked at sensory pain words.
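The kind of summary the eye-tracker enables, fixation counts and total dwell time per word category, can be sketched in a few lines. The words, data format and numbers here are invented for illustration; they are not the study’s actual data:

```python
def attention_bias(fixations, pain_words):
    """Toy summary of eye-tracking data: count and total duration of
    fixations on pain-related vs neutral words. Illustrative only:
    the word lists and record format are hypothetical."""
    stats = {"pain": {"count": 0, "dwell_ms": 0},
             "neutral": {"count": 0, "dwell_ms": 0}}
    for word, duration_ms in fixations:
        group = "pain" if word in pain_words else "neutral"
        stats[group]["count"] += 1
        stats[group]["dwell_ms"] += duration_ms
    return stats

# Each fixation: (word looked at, fixation duration in milliseconds)
fixations = [("ache", 320), ("table", 180), ("agony", 410), ("chair", 150)]
print(attention_bias(fixations, pain_words={"ache", "agony", "distress", "pain"}))
```

In this toy sample the pain words accumulate both more fixations per word and longer total dwell time, which is the pattern the study reports for chronic-pain participants.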
“We now know that people with and without chronic pain differ in terms of how, where and when they attend to pain-related words. This is a first step in identifying whether the attentional bias is involved in making pain more intense or more salient to the person in pain,” says Katz.
(Source: news.yorku.ca)
Scientists studying two genes that are mutated in an early-onset form of Parkinson’s disease have deciphered how normal versions of these genes collaborate to help rid cells of damaged mitochondria. Mitochondria are the cell’s primary energy source, and maintaining their health is critical for cellular function. Mitochondrial dysfunction may underlie multiple neurodegenerative diseases, including Parkinson’s.

(Image caption: PARKIN (green) is localized on damaged mitochondria. Image: Harper Lab)
In their analysis published in Molecular Cell, Harvard Medical School researchers used powerful quantitative mass spectrometry and live-cell imaging approaches to elucidate a multistep mechanism by which the two proteins mutated in Parkinson’s disease—PINK1 and PARKIN—mark mitochondria as damaged by attaching chains of a small protein called ubiquitin. This work paves the way for a deeper understanding of what molecular steps are defective when these proteins are mutated in patients with Parkinson’s disease.
“The PINK1-PARKIN pathway has been studied for many years, yet its mechanisms weren’t clearly defined,” said Wade Harper, Bert and Natalie Vallee Professor of Molecular Pathology in the Department of Cell Biology at HMS and senior author of the paper. “Combining imaging and advanced mass spectrometry approaches has allowed us for the first time to determine with molecular precision the biochemical output of the PINK1-PARKIN pathway in living cells.”
One hypothesis about the origin of Parkinson’s disease suggests that neurons place high energy demands on their mitochondria. When mitochondria become damaged and their energy production falls, they must be cleared away; if not, cell death results when the damaged mitochondria create harmful chemicals called reactive oxygen species.
People who have certain early-onset mutations in the PINK1 or PARKIN genes may live normal lives until they enter their 30s, when movement disorders begin to appear, reflecting the loss of neurons that make the neurotransmitter dopamine. These neurons seem to be the cells most sensitive to an inability to remove damaged mitochondria.
Only in the last few years have scientists understood that the enzymes PARKIN and PINK1 work together to remove damaged mitochondria. The PINK1 kinase, an enzyme that transfers phosphate to other proteins, is activated specifically on damaged mitochondria where it then functions to promote accumulation of PARKIN on the mitochondrial surface. Once there, PARKIN—a ubiquitin ligase— marks numerous proteins on the surface of the mitochondria with chains of ubiquitin, which in turn target the damaged mitochondria for removal from the cell.
In their new work, Harper’s team identifies a multistep “feed-forward” mechanism that involves intertwined ubiquitylation and phosphorylation in a sequence of reactions that successively build on one another. To the authors’ knowledge, this is the first report of a feed-forward mechanism of this type.
The team, led by postdoctoral fellow Alban Ordureau, found that PINK1 actually has two functions in a multistep pathway. First, PINK1 phosphorylates PARKIN, greatly stimulating its ability to attach ubiquitin to mitochondrial substrates. Second, PINK1 phosphorylates ubiquitin chains that PARKIN has just built. Unexpectedly, these phosphorylated ubiquitin chains then bind tightly to activated PARKIN, thereby facilitating its retention on the mitochondrial surface and furthering ubiquitin chain assembly through a feed-forward mechanism. Eventually these chains become so dense that the damaged mitochondria are marked for degradation.
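The feed-forward logic described above can be caricatured as a two-variable loop: PARKIN on the mitochondrion builds phospho-ubiquitin chains, and those chains retain more PARKIN. The following is a toy discrete-time sketch with arbitrary rate constants, not a quantitative model of the pathway:

```python
def feed_forward_ubiquitylation(steps=10, pink1_activity=0.5):
    """Toy sketch of the feed-forward loop: phospho-ubiquitin chains
    retain PARKIN on the mitochondrion, and retained PARKIN builds more
    chains. All rate constants are arbitrary and purely illustrative."""
    parkin_on_mito = 0.1   # fraction of PARKIN retained on the mitochondrion
    phospho_ub = 0.0       # accumulated phospho-ubiquitin chain signal
    history = []
    for _ in range(steps):
        # PINK1-activated PARKIN adds ubiquitin; PINK1 phosphorylates it
        phospho_ub += pink1_activity * parkin_on_mito
        # phospho-ubiquitin retains more PARKIN (saturating at 1.0)
        parkin_on_mito = min(1.0, parkin_on_mito + 0.2 * phospho_ub)
        history.append(phospho_ub)
    return history

trace = feed_forward_ubiquitylation()
print(trace)  # chain density accelerates as PARKIN retention grows
```

The increments grow from step to step, the signature of a feed-forward loop: the chains become denser and denser until, as the article says, the mitochondrion is marked for degradation.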
“Our finding that PARKIN binds phosphorylated-ubiquitin chains as its mechanism of retention on damaged mitochondria was completely unexpected,” Harper said. “Ubiquitin has been studied for almost 40 years, but only recently has regulation of ubiquitin by phosphorylation emerged as a major focus for the field.”
Methods employed in this study have their origins in prior work of Steven Gygi, HMS professor of cell biology and a co-author of the paper, who developed ways to quantify ubiquitin chains more than a decade ago. Harper says there is “enormous potential in the application of these approaches to understand how defects in the ubiquitin system lead to disease.”
The team also included Brenda Schulman, a Howard Hughes Medical Institute investigator, the co-director of the Cancer Genetics, Biochemistry and Cell Biology Program at St. Jude Children’s Research Hospital and a leading expert on ubiquitin biochemistry.
“This is a very intricate pathway,” Ordureau said. “We were surprised by our findings at every step.”
(Source: hms.harvard.edu)
How to tell a missile from a pylon: a tale of two cortices
During the Second World War, analysts pored over stereoscopic aerial reconnaissance photographs, becoming experts at identifying potential targets from camouflaged or visually noisy backgrounds, and then at distinguishing between V-weapons and innocuous electricity pylons.
Now, researchers at the University of Cambridge have identified the two regions of the brain involved in these two tasks – picking out objects from background noise and identifying the specific objects – and have shown why training people to recognise specific objects improves their ability to pick out objects.
In a study funded by the Wellcome Trust, volunteers were given a series of 3D stereoscopic images with varying levels of background noise and asked first to find a target object and then to say whether the object was in the foreground or the background. During the task, researchers applied transcranial magnetic stimulation (TMS) – a technique whereby a magnetic field is applied to the head – to disrupt the performance of two regions of the brain used in object identification: the parietal cortex and the ventral cortex. Their results are published in the journal Current Biology.
The researchers showed that the parietal cortex was involved in selecting potential targets from background noise, while the ventral cortex was involved in object recognition. When TMS was applied to the parietal cortex, volunteers performed less well at selecting objects from the background; when the field was applied to the ventral cortex, they performed less well at identifying the specific objects.
However, the researchers found that after the volunteers had undergone training to discriminate between specific objects, the ventral cortex – which, until then, had only been used for this purpose – also became involved in selecting targets from noise, enhancing their ability to distinguish between objects. The reverse was not true – in other words, the parietal cortex did not become involved in object discrimination.
Dr Welchman, a Wellcome Trust Senior Research Fellow in the Department of Psychology, explains: “The parietal cortex and the ventral cortex appear to be involved in these overlapping tasks to different extents. By analogy to the World War II analysts, the parietal cortex helped them spot suspect objects while the ventral cortex helped them distinguish the weapons from the pylons. But training these operatives to identify the weapons will have improved their ability to spot potential weapons in the first place.”
The research may have implications for therapies to help people with attentional difficulties. For example, people with damage to the parietal cortex, such as through stroke, are known to have difficulty in finding objects in displays, particularly when the display is distracting.
“These results show that training in clear displays modifies the brain areas that underlie performance in distracting situations. This suggests a route for rehabilitative training that helps individuals avoid distracting information by training individuals to make fine judgements,” he adds.
(Fig. 1: Two-photon image of the three types of cells in the visual cortex of a rat. Neuronal activity is measured via changes in fluorescence intensity. Green cells are inhibitory neurons, white cells are excitatory neurons, and red cells are astrocytes.)
The way neurons in the brain respond to a given stimulus depends on whether an organism is asleep, drowsy, awake, paying careful attention or ignoring the stimulus. However, while the properties of neural circuits in the visual cortex are well known, the mechanisms responsible for the different patterns of activity in the awake and drowsy states remain poorly understood. A team of researchers led by Tadaharu Tsumoto from the RIKEN Brain Science Institute has observed the changes in activity that occur in rodents on waking from anesthesia.
The research team used a technique called two-photon functional calcium imaging to observe the activity of cells in the visual cortex of rats while they were anesthetized and exposed to a visual stimulus of an image moving across a screen. Using rats with inhibitory neurons labeled with a green fluorescent protein, the researchers were able to measure the activity separately in populations of inhibitory and excitatory neurons (Fig. 1). The neuronal activity in response to visual stimulation under anesthesia was recorded, and then the rats were allowed to wake and the change in activity of the two populations of neurons was observed.
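As the figure caption notes, activity in calcium imaging is read out from changes in fluorescence intensity, conventionally reported as the relative change ΔF/F = (F − F0)/F0, where F0 is a baseline estimate. A generic sketch of that calculation (not the RIKEN team’s actual analysis pipeline):

```python
def delta_f_over_f(trace, baseline_frames=3):
    """Compute the standard dF/F measure used in calcium imaging:
    (F - F0) / F0, with the baseline F0 estimated as the mean of the
    first few frames. A generic sketch, not any lab's specific pipeline."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# fluorescence rises when the neuron fires during the visual stimulus
trace = [100.0, 100.0, 100.0, 150.0, 125.0]
print(delta_f_over_f(trace))  # -> [0.0, 0.0, 0.0, 0.5, 0.25]
```

A 50% rise over baseline becomes ΔF/F = 0.5; the decay back toward baseline is the “decay time” the article says was shorter for excitatory neurons in the awake state.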
Tsumoto’s team found that inhibitory neurons responded more reliably and with stronger activity to visual stimuli in the awake state than in the anesthetized state. The response of the excitatory neurons had a shorter decay time in the awake state, which means that their activity was more tightly linked to the presentation of the visual stimulus than when the animal was under the influence of anesthesia.
These changes that occur during wakefulness allow neurons in the visual cortex to respond more reliably to visual stimuli in their environment. “If animals are awakened from the drowsy state by howls or footsteps of enemies, the sensitivity or resolution of moving visual stimuli will increase so that they can more effectively judge how fast and from which location the enemies are coming,” explains Tsumoto.
The team then found that the basal forebrain region of the brain, which is known to play a role in state-dependent changes in cortical activity through its acetylcholine neurons, is responsible for these shifts in responses of neurons in the visual cortex of mice during wakefulness. They found that stimulating the basal forebrain of anesthetized animals could make visual cortical neurons take on the firing properties of the awake state. These findings highlight the role of the basal forebrain in modulating the responses of visual cortical neurons during wakefulness.
Johnny Depp has an unforgettable face. Tony Angelotti, his stunt double in “Pirates of the Caribbean,” does not. So why is it that when they’re swashbuckling on screen, audiences worldwide see them both as the same person? UC Berkeley scientists have cracked that mystery.

Researchers have pinpointed the brain mechanism by which we latch on to a particular face even when it changes. While it may seem as though our brain is tricking us into morphing, say, an actor with his stunt double, this “perceptual pull” is actually a survival mechanism, giving us a sense of stability, familiarity and continuity in what would otherwise be a visually chaotic world, researchers point out.
“If we didn’t have this bias of seeing a face as the same from one moment to the next, our perception of people would be very confusing. For example, a friend or relative would look like a completely different person with each turn of the head or change in light and shade,” said Alina Liberman, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Thursday, Oct. 2 in the online edition of the journal, Current Biology.
In searching for an exact match to a “target” face on a computer screen, study participants consistently identified a face that was not the target face, but a composite of the faces they had seen over the past few seconds. Moreover, participants judged the match to be more similar to the target face than it really was. The results help explain how humans process visual information from moment to moment to stabilize their environment.
“Our visual system loses sensitivity to stunt doubles in movies, but that’s a small price to pay for perceiving our spouse’s identity as stable,” said David Whitney, a professor of psychology at UC Berkeley and senior author of the study.
Previous research in Whitney’s lab established the existence of a “Continuity Field” in which we visually meld similar objects seen within a 15-second time frame. For example, that study helped explain why we miss movie-mistake jump cuts, such as Harry Potter’s T-shirt abruptly changing from a crewneck into a henley shirt in the “Order of the Phoenix.”
This latest study builds on that by testing how a Continuity Field applies to our observation and recognition of faces, arguably one of the most important human social and perceptual functions, researchers said.
“Without the extraordinary ability to recognize faces, many social functions would be lost. Imagine picking up your child at school and not being able to recognize which kid is yours,” Whitney said. “Fortunately, this type of face blindness is rare. What is common, however, are changes in viewpoint, noise, blur, and lighting changes that could cause faces to appear very different from moment to moment. Our results suggest that the visual system is biased against such wavering perception in favor of continuity.”
To test this phenomenon, study participants viewed dozens of faces that varied in similarity. Every six seconds, a “target face” flashed on the computer screen for less than a second, followed by a series of faces that morphed with each click of an arrow key from one to the next. Participants clicked through the faces until they found the one that most closely matched the “target face.” Time and again, the face they picked was a combination of the two most recently seen target faces.
“Regardless of whether study participants cycled through many faces until they found a match or quickly named which face they saw, perception of a face was always pulled towards face identities they saw within the last 10 seconds,” Liberman said. “Importantly, if the faces that participants recently saw all looked very distinct, the visual system did not merge these identities together, indicating that this perceptual pull does depend on the similarity of recently seen faces.”
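The behaviour Liberman describes, a pull toward the average of recently seen faces that vanishes when those faces are too distinct, can be sketched as a toy one-dimensional model. All names and parameters here are invented for illustration; this is not the study’s model:

```python
def perceived_face(current, recent, pull=0.3, similarity_cutoff=2.0):
    """Toy model of 'perceptual pull': the perceived face is drawn
    toward the average of recently seen faces, but only when they are
    similar enough to merge. Faces are represented as single feature
    values; the pull strength and cutoff are arbitrary."""
    if not recent:
        return current
    mean_recent = sum(recent) / len(recent)
    if abs(current - mean_recent) > similarity_cutoff:
        return current  # very distinct faces are not merged together
    return (1 - pull) * current + pull * mean_recent

print(perceived_face(1.0, recent=[0.0, 0.5]))  # pulled toward recent faces
print(perceived_face(5.0, recent=[0.0, 0.5]))  # too distinct: no pull -> 5.0
```

The first call is biased toward the recent average, mirroring the Continuity Field; the second reproduces the finding that very distinct identities are not merged.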
In a follow-up experiment, the faces were viewed from different angles instead of frontal views to ensure that study participants were not latching on to a particular feature, say, bushy eyebrows or a distinct shadow across a cheekbone, but actually recognizing the entire visage.
“Sequential faces that are somewhat similar will display a much more striking family resemblance than is actually present, simply because of this Continuity Field for faces,” Liberman said.
Neuroscientists use snail research to help explain “chemo brain”
It is estimated that as many as half of patients taking cancer drugs experience a decrease in mental sharpness. While there have been many theories, what causes “chemo brain” has eluded scientists.
In an effort to solve this mystery, neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) conducted an experiment in an animal memory model and their results point to a possible explanation. Findings appeared in The Journal of Neuroscience.
In the study involving a sea snail that shares many of the same memory mechanisms as humans and a drug used to treat a variety of cancers, the scientists identified memory mechanisms blocked by the drug. Then, they were able to counteract or unblock the mechanisms by administering another agent.
“Our research has implications for the care of people prone to cognitive deficits following drug treatment for cancer,” said John H. “Jack” Byrne, Ph.D., senior author, holder of the June and Virgil Waggoner Chair and chairman of the Department of Neurobiology and Anatomy at the UTHealth Medical School. “There is no satisfactory treatment at this time.”
While much work remains, Byrne, who runs the university’s Neuroscience Research Center, said understanding how these drugs impact the brain is an important first step in alleviating this condition characterized by forgetfulness, trouble concentrating and difficulty multitasking.
Byrne’s laboratory is known for its use of a large snail called Aplysia californica to further the understanding of the biochemical signaling among nerve cells (neurons). The snails have large neurons that relay information much like those in humans.
When Byrne’s team compared cell cultures taken from normal snails to those administered a dose of a cancer drug called doxorubicin, the investigators pinpointed a neuronal pathway that was no longer passing along information properly.
With the aid of an experimental drug, the scientists were able to reopen the pathway. Unfortunately, this drug would not be appropriate for humans, Byrne said. “We want to identify other drugs that can rescue these memory mechanisms,” he added.
The scientists confirmed their findings in tests on the nerve cells of rats.
“The big picture is to determine if this cancer drug acts in the same way in humans,” Byrne said.