Posts tagged brain

According to psychological lore, when it comes to items of information the mind can cope with before confusion sets in, the “magic” number is seven. But a new analysis by a leading Australian psychiatrist challenges this long-held view, suggesting the number might actually be four.
In 1956, American psychologist George Miller published a paper in the influential journal Psychological Review arguing the mind could cope with a maximum of only seven chunks of information. The paper, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”, has since become one of the most highly cited psychology articles and has been judged by the Psychological Review as its most influential paper of all time.
But UNSW professor of psychiatry Gordon Parker says a re-analysis of the experiments used by Miller shows he missed the correct number by a wide margin. Writing in the journal Acta Psychiatrica Scandinavica, Scientia Professor Parker says a closer look at the evidence shows the human mind copes with a maximum of four ‘chunks’ of information, not seven.
“So to remember a seven-digit phone number, say 6458937, we need to break it into four chunks: 64. 58. 93. 7. Basically four is the limit to our perception.
“That’s a big difference for a paper that is one of the most highly referenced psychology articles ever – nearly a 100 percent discrepancy,” he suggests.
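The chunking rule Parker describes can be sketched in a few lines of code. This is a minimal illustration only; the function name and the equal-size grouping rule are assumptions for the example, not taken from the paper:

```python
def chunk_digits(number: str, max_chunks: int = 4) -> list[str]:
    """Split a digit string into at most `max_chunks` groups,
    mirroring how 6458937 becomes 64-58-93-7."""
    size = -(-len(number) // max_chunks)  # ceiling division: digits per chunk
    return [number[i:i + size] for i in range(0, len(number), size)]

print(chunk_digits("6458937"))  # ['64', '58', '93', '7'] — four chunks
```

With seven digits and a four-chunk limit, each chunk holds at most two digits, reproducing Parker’s 64. 58. 93. 7 grouping.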
Professor Parker says the success of the original paper lies “more in its multilayered title and Miller’s evocative use of the word ‘magic’,” than in the science.
Professor Parker says 50 years after Miller there is still uncertainty about the nature of the brain’s storage capacity limits: “There may be no limit in storage capacity per se but only a limit to the duration in which items can remain active in short-term memory”. “Regardless, the consensus now is that humans can best store only four chunks in short-term memory tasks,” he says.
Moral evaluations of harm are instant and emotional
People are able to detect, within a split second, if a hurtful action they are witnessing is intentional or accidental, new research on the brain at the University of Chicago shows.
The study is the first to explain how the brain is hard-wired to recognize when another person is being intentionally harmed. It also provides new insights into how such recognition is connected with emotion and morality, said lead author Jean Decety, the Irving B. Harris Professor of Psychology and Psychiatry at UChicago.
“Our data strongly support the notion that determining intentionality is the first step in moral computations,” said Decety, who conducted research on the topic with Stephanie Cacioppo, a research associate (assistant professor) in psychology at UChicago. They published the results in a paper, “The Speed of Morality: A High-Density Electrical Neurological Study,” to be published Dec. 1 and now on early preview in the Journal of Neurophysiology.
The researchers studied adults who watched videos of people suffering accidental harm (such as being hit with a golf club) or intentional harm (such as being struck with a baseball bat). While the participants watched the videos, their brain activity was recorded with equipment that accurately maps responses in different regions of the brain and, importantly, the timing between those regions. The technique is known as high-density, event-related potentials technology.
The intentional harm sequence produced a response in the brain almost instantly. The study showed that within 60 milliseconds, the right posterior superior temporal sulcus (also known as TPJ area), located in the back of the brain, was first activated, with different activity depending on whether the harm was intentional or accidental. It was followed in quick succession by the amygdala, often linked with emotion, and the ventromedial prefrontal cortex (180 milliseconds), the portion of the brain that plays a critical role in moral decision-making.
There was no such response in the amygdala and ventromedial prefrontal cortex when the harm was accidental.
Alcohol Drinking Behavior Reduced By Inhibiting Brain Protein in Rodents
Decreasing the level of a key brain protein led to significantly less drinking and alcohol-seeking behavior in rats and mice that had been trained to drink, according to a study by researchers at the Ernest Gallo Clinic and Research Center at UCSF.
The scientists identified the protein, known as H-Ras, as a promising target for development of new medications to treat alcohol abuse disorders in humans.
The study, which was published on Nov. 7 in the Journal of Neuroscience, was recommended as being of special significance in its field by the Faculty of 1000, an online service that identifies great peer-reviewed biomedical research.
The researchers, led by Gallo investigator Dorit Ron, PhD, first demonstrated that alcohol intake significantly increased H-Ras activity in the animals’ nucleus accumbens, a brain region that in both rodents and humans is part of the reward system that affects craving for alcohol and other addictive substances.
They then showed that suppressing H-Ras levels in the nucleus accumbens with a targeted virus reduced alcohol consumption among mice that had been trained to seek out and drink alcohol in an animal model of binge drinking.
The researchers then administered FTI-276, an experimental compound that has been shown to inhibit H-Ras activity, to binge-drinking rats. They observed a significant reduction in alcohol consumption after the compound was given.
The scientists also found that H-Ras inhibition reduced alcohol-seeking behavior among rats that had been trained to receive a drink of alcohol when they pressed a lever. When alcohol was withheld, rats that had received FTI-276 discontinued pressing the lever significantly sooner than rats that did not receive the compound.
Importantly, the rodents’ consumption of water, sugar solution, saccharin solution and quinine was not reduced when H-Ras was inhibited, indicating that the effect of H-Ras inhibition is specific to alcohol.

Researchers Identify Physiological Evidence of ‘Chemo Brain’
Chemotherapy can induce changes in the brain that may affect concentration and memory, according to a study presented at the annual meeting of the Radiological Society of North America (RSNA). Using positron emission tomography combined with computed tomography (PET/CT), researchers were able to detect physiological evidence of chemo brain, a common side effect in patients undergoing chemotherapy for cancer treatment.
"The chemo brain phenomenon is described as ‘mental fog’ and ‘loss of coping skills’ by patients who receive chemotherapy," said Rachel A. Lagos, D.O., diagnostic radiology resident at the West Virginia University School of Medicine and West Virginia University Hospitals in Morgantown, W.V. "Because this is such a common patient complaint, healthcare providers have generically referred to its occurrence as ‘chemo brain’ for more than two decades."
While the complaint may be common, the cause of chemo brain phenomenon has been difficult to pinpoint. Some prior studies using magnetic resonance imaging (MRI) have found small changes in brain volume after chemotherapy, but no definitive link has been found.
Instead of studying chemotherapy’s effect on the brain’s appearance, Dr. Lagos and colleagues set out to identify its effect on brain function. By using PET/CT, they were able to assess changes to the brain’s metabolism after chemotherapy.
"When we looked at the results, we were surprised at how obvious the changes were," Dr. Lagos said. "Chemo brain phenomenon is more than a feeling. It is not depression. It is a change in brain function observable on PET/CT brain imaging."
PET/CT results demonstrated statistically significant decreases in regional brain metabolism that were closely associated with symptoms of chemo brain phenomenon.
"The study shows that there are specific areas of the brain that use less energy following chemotherapy," Dr. Lagos said. "These brain areas are the ones known to be responsible for planning and prioritizing."
Dr. Lagos believes that PET/CT could be used to help facilitate clinical diagnosis and allow for earlier intervention.

Researchers find reading uses the same brain regions regardless of language
A team of French and Taiwanese researchers has found evidence to indicate that people use the same regions of the brain when reading, regardless of which language is being read. In their paper published in the Proceedings of the National Academy of Sciences, they describe how fMRI brain scans made while people were reading revealed that there are very few differences in how the brain works as reading occurs.
The researchers note that previous research has suggested that different neural networks might be involved when people read text written in very different types of languages. French, for example, is an alphabetic language, whereas Chinese is logographic: letters of Roman origin are abstract shapes that stand for sounds, while Chinese characters are visually richer symbols built from depictions of handwriting strokes.
To learn more, the researchers ran fMRI scans on volunteers reading either Chinese or French material in their native language. The material was presented in various forms, e.g. normal, static, backwards or distorted. The researchers also employed priming, in which words are flashed on a screen too briefly to be consciously perceived by the reader. Priming has been found to influence how quickly readers recognize words that are shown afterwards for a normal duration. The material written in French was presented in cursive rather than block-printed letters.
In analyzing the results, the researchers found the differences in brain activity between the two groups as they read to be minimal. The differences that were found centered on a slight increase in activity in brain regions associated with processing the physical movements involved in producing the characters, which the brain handles as motor skills.
The researchers suggest that because reading is a relatively recent invention for the human brain, it likely recycles pre-existing neural circuitry, which would explain why the brain appears to work in roughly the same way when reading, regardless of language.
Auditory test predicts coma awakening
A coma patient’s chances of surviving and waking up could be predicted by changes in the brain’s ability to discriminate sounds, new research suggests.
Recovery from coma has been linked to auditory function before, but it wasn’t clear whether function depended on the time of assessment. Whereas previous studies tested patients several days or weeks after comas set in, a new study looks at the critical phase during the first 48 hours. At early stages, comatose brains can still distinguish between different sound patterns. How this ability progresses over time can predict whether a coma patient will survive and ultimately awaken, researchers report.
“It’s a very promising tool for prognosis,” says neurologist Mélanie Boly of the Belgian National Fund for Scientific Research, who was not involved with the study. “For the family, it’s very important to know if someone will recover or not.”
A team led by neuroscientist Marzia De Lucia of the University of Lausanne in Switzerland studied 30 coma patients who had experienced heart attacks that deprived their brains of oxygen. All the patients underwent therapeutic hypothermia, a standard treatment to minimize brain damage, in which their bodies were cooled to 33° Celsius for 24 hours.
De Lucia and colleagues played sounds for the patients and recorded their brain activity using scalp electrodes — once in hypothermic conditions during the first 24 hours of coma, and again a day later at normal body temperature. The sounds were a series of pure tones interspersed with sounds of different pitch, duration or location. The brain signals revealed how well patients could discriminate the sounds, compared with five healthy subjects.
After three months, the coma patients had either died or awoken. All the patients whose discrimination improved by the second day of testing survived and awoke from their comas. By contrast, many of those whose sound discrimination deteriorated by the second day did not survive. The results were reported online November 12 in Brain.
Using a genetic model, researchers at the University of Montreal have identified and “switched off” a chemical chain that causes neurodegenerative diseases such as Huntington’s disease, amyotrophic lateral sclerosis and dementia. The findings could one day be of particular therapeutic benefit to Huntington’s disease patients. “We’ve identified a new way to protect neurons that express mutant huntingtin proteins,” explained Dr. Alex Parker of the University of Montreal’s Department of Pathology and Cell Biology and its affiliated CRCHUM Research Centre.
A cardinal feature of Huntington’s disease – a fatal genetic disease that typically strikes in midlife and causes the progressive death of specific areas of the brain – is the aggregation of mutant huntingtin protein in cells. “Our model revealed that increasing another cell chemical called progranulin reduced the death of neurons by combating the accumulation of the mutant proteins. Furthermore, this approach may protect against neurodegenerative diseases other than Huntington’s disease.”
There is no cure for Huntington’s disease, current strategies show only modest benefits, and components of the protein aggregates involved are also present in other degenerative diseases. “My team and I wondered whether the proteins in question, TDP-43 and FUS, were just innocent bystanders or whether they affected the toxicity caused by mutant huntingtin,” Dr. Parker said. To answer this question, Dr. Parker and University of Montreal doctoral student Arnaud Tauffenberger turned to a simple genetic model based on the expression of mutant huntingtin in the nervous system of the transparent roundworm C. elegans. A large number of human disease genes are conserved in worms, and C. elegans in particular enables researchers to rapidly conduct genetic analyses that would not be possible in mammals.
Dr. Parker’s team found that deleting the TDP-43 and FUS genes, which produce the proteins of the same name, reduced neurodegeneration caused by mutant huntingtin. They then confirmed their findings in mammalian cell models. The next step was to determine how this neuroprotection works. TDP-43 targets a chemical called progranulin, a protein linked to dementia. “We demonstrated that removing progranulin from either worms or cells enhanced huntingtin toxicity, but increasing progranulin reduced cell death in mammalian neurons. This points towards progranulin as a potent neuroprotective agent against mutant huntingtin neurodegeneration,” Dr. Parker said. The researchers will need to test this further in more complex biological models to determine whether the same chemical switches work in all mammals. If they do, progranulin treatment may slow disease onset or progression in Huntington’s disease patients.
(Source: eurekalert.org)
Metabolic Protein Launches Sugar Feast That Nurtures Brain Tumors
Researchers at The University of Texas MD Anderson Cancer Center have tracked down a cancer-promoting protein’s pathway into the cell nucleus and discovered how, once there, it fires up a glucose metabolism pathway on which brain tumors thrive.
They also found a vital spot along the protein’s journey that can be attacked with a type of drug not yet deployed against glioblastoma multiforme, the most common and lethal form of brain cancer. Published online by Nature Cell Biology, the paper further illuminates the importance of pyruvate kinase M2 (PKM2) in cancer development and progression.
"PKM2 is very active during infancy, when you want rapid cell growth, and eventually it turns off. Tumor cells turn PKM2 back on - it’s overexpressed in many types of cancer," said Zhimin Lu, M.D., Ph.D., the paper’s senior author and an associate professor in MD Anderson’s Department of Neuro-Oncology.
Lu and colleagues showed earlier this year that PKM2 in the nucleus also activates a variety of genes involved in cell division. The latest paper shows how it triggers aerobic glycolysis, processing glucose into energy, also known as the Warburg effect, upon which many types of solid tumors rely to survive and grow.
"PKM2 must get to the nucleus to activate genes involved in cell proliferation and the Warburg effect," Lu said. "If we can keep it out of the nucleus, we can block both of those cancer-promoting pathways. PKM2 could be an Achilles’ heel for cancer."
By pinpointing the complicated steps necessary for PKM2 to penetrate the nucleus, Lu and colleagues found a potentially druggable target that could keep the protein locked in the cell’s cytoplasm.
Some people live their lives by the motto “no risk, no fun!” and shy away from hardly any risk. Others are clearly more cautious and put safety first when investing and in other business dealings. Scientists from the University of Bonn, in cooperation with colleagues from the University of Zurich, studied the attitudes towards risk of a group of 56 subjects. They found that in people who prefer safety, certain regions of the brain show a higher level of activation when they are confronted with largely unpredictable situations. In addition, such people do not distinguish as clearly as risk takers between situations that are more or less risky than expected. The results have just been published in the renowned “Journal of Neuroscience.”
"We were especially interested in the link between risk preferences and the brain regions processing this information," says Prof. Dr. Bernd Weber from the Center for Economics and Neuroscience (CENs) at the University of Bonn. First, the researchers tested a total of 56 subjects for their willingness to take risks. "In an economic game, the test subjects had a choice between a guaranteed payout and a lottery," reports Sarah Rudorf from CENs, the study’s principal author. Those who showed a strong preference for the lottery in this test were categorized as risk takers. Others preferred the guaranteed payout even if the lottery’s odds of winning were clearly better. They were put in the risk-averse group.
In risk-averse individuals, certain regions of the brain are activated more strongly
Then the test subjects played a card game in a brain scanner to study their risk perception. Cards carrying numbers from one to ten were shown on video glasses in front of their eyes. Each time, two cards were randomly drawn. Before the subjects were shown the cards, they were asked to place bets on whether the second card would have a higher or a lower number than the first one. “The statistical probability for either case to occur is always the same: fifty-fifty,” says Prof. Weber. “This is important so that all subjects, whether they are risk takers or not, experience risky situations inside the scanner.” The subjects were not able to assess their probability of winning their bet until they saw the first card. Here, the researchers found that in the subjects who tended to avoid risks, two specific regions of the brain were activated more strongly than in those who were willing to take risks: the ventral striatum and the insular cortex. The ventral striatum reacts both to the probability of winning and to how well an individual can predict the outcome of the bet. The insular cortex is particularly sensitive to the risk a situation carries and to whether that risk is higher or lower than anticipated.
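The fifty-fifty claim is easy to check by enumeration. This sketch assumes, as the game implies, that the two cards are drawn without replacement from ten distinct numbers, so ties are impossible and “higher” and “lower” are symmetric:

```python
from itertools import permutations

# Enumerate every ordered pair of distinct cards numbered 1..10.
pairs = list(permutations(range(1, 11), 2))
higher = sum(second > first for first, second in pairs)

print(len(pairs))           # 90 possible ordered draws
print(higher / len(pairs))  # 0.5 — "higher" and "lower" are equally likely
```

Before the first card is revealed, every one of the 90 ordered draws is equally likely, so either bet wins in exactly 45 of them.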
Risk seekers adjust their strategy after lucky streaks
Sarah Rudorf summarized the results: “Individuals in whom these regions of the brain are activated at a higher level seem to perceive risks more clearly and assess them as more negative than those who are willing to take risks.” Risk-averse individuals seem to overestimate the consequences of risk, and they did not distinguish as clearly between situations that turned out to be more or less risky than expected. In contrast, the test subjects who tended to take greater risks oriented their behavior more toward wins and losses, and more clearly changed their strategy after negative situations.
Study is first to show the neurobiological mechanisms
"This study is the first to show the neurobiological mechanisms of how individual risk preferences determine risk perception," says Prof. Weber. "This also has effects on behavior in the areas of finance and health."
In a next step, the researchers want to study the consequences these results have for economic decisions, such as those made in the stock market. “This might even make it possible to improve the advising process for investors with regard to their individual risk behavior,” says Prof. Weber. He considers health another important area: smokers know that what they do is very dangerous, and yet they smoke. “If we learned more about smokers’ attitudes towards risk, we might be able to provide information for developing better anti-smoking campaigns.”
(Source: www3.uni-bonn.de)
Study pinpoints brain area’s role in learning
An area of the brain called the orbitofrontal cortex is responsible for decisions made on the spur of the moment, but not those made based on prior experience or habit, according to a new basic science study from substance abuse researchers at the University of Maryland School of Medicine and the National Institute on Drug Abuse (NIDA). Scientists had previously believed that the area of the brain was responsible for both types of behavior and decision-making. The distinction is critical to understanding the neurobiology of decision-making, particularly with regard to substance abuse. The study was published online in the journal Science.
Scientists have long assumed that the orbitofrontal cortex plays a role in “value-based” decision-making, in which a person compares options and weighs consequences and rewards to choose the best alternative. The Science study shows that this area of the brain is involved in decision-making only when the value must be inferred or computed on the spot. If the value has been “cached” or pre-computed, like a habit, the orbitofrontal cortex is not necessary.
The same is true for learning — if a person infers an outcome but it does not happen, the resulting error can drive learning. The study shows that the orbitofrontal cortex is necessary for the inferred value that is used for this type of learning.
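The cached-versus-inferred distinction the study draws can be illustrated with a toy model (in reinforcement-learning terms, model-free versus model-based choice). The levers, outcomes and values below are invented purely for illustration, not taken from the study:

```python
# Habit-like "cached" values, pre-computed from past experience.
cached_value = {"lever_A": 1.0, "lever_B": 0.5}

# A one-step world model: which outcome each choice leads to, and its worth.
outcome_of = {"lever_A": "pellet", "lever_B": "water"}
worth = {"pellet": 1.0, "water": 0.5}

def habitual_choice():
    # Choose from cached values alone; no on-the-fly inference needed.
    return max(cached_value, key=cached_value.get)

def inferred_choice():
    # Compute each option's value from the outcome model at decision time.
    return max(outcome_of, key=lambda a: worth[outcome_of[a]])

worth["pellet"] = 0.0     # the pellet is suddenly devalued
print(habitual_choice())  # "lever_A" — the stale cached value still wins
print(inferred_choice())  # "lever_B" — on-the-fly inference tracks the change
```

In this caricature, only the inference step notices that the world has changed, mirroring the study’s claim that the orbitofrontal cortex matters when value must be computed on the spot but not when a cached, habitual value suffices.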
"Our research showed that damage to the orbitofrontal cortex may decrease a person’s ability to use prior experience to make good decisions on the fly," says lead author Joshua Jones, Ph.D., a postdoctoral researcher at the University of Maryland School of Medicine and a research scientist at NIDA, part of the National Institutes of Health. "The person isn’t able to consider the whole continuum of the decision — the mind’s map of how choices play out further down the road. Instead, the person is going to regress to habitual behavior, gravitating toward the choice that provides the most value in its immediate reward."
The study enhances scientists’ understanding of how the brain works in healthy and unhealthy individuals, according to the researchers.
"This discovery has general implications in understanding how the brain processes information to help us make good decisions and to learn from our mistakes," says senior author Geoffrey Schoenbaum, M.D., Ph.D., adjunct professor at the University of Maryland School of Medicine and senior investigator and chief of the Cellular Neurobiology Research Branch at NIDA. "Understanding more about the orbitofrontal cortex also is important for understanding disorders such as addiction that seem to involve maladaptive decision-making and learning. Cocaine in particular seems to have long-lasting effects on the orbitofrontal cortex. One aspect of this work, which we are pursuing, is that perhaps some of the problems that characterize addiction are the result of drug-induced changes in this area of the brain."