Neuroscience

Articles and news from the latest research reports.

Posts tagged science

59 notes

Professor finds neuroscience provides insights into brains of complex and adaptive leaders

“This study represents a fusion of the leadership and neuroscience fields, and this fusion can revolutionize approaches to assessing and developing leaders,” says Hannah, the Tylee Wilson Chair in business ethics and professor of management at the Wake Forest University School of Business. Hannah is lead author of the paper in the May 2013 Journal of Applied Psychology titled, “The Psychological and Neurological Bases of Leader Self-Complexity and Effects on Adaptive Decision-Making.”

Hannah and four colleagues tested 103 young military leaders between the ranks of officer cadet and major at a U.S. Army base on the east coast. They administered psychological exams to assess the complexity of the leaders’ identities, and neurological exams to assess the complexity of the soldiers’ brain activity. For the brain tests, the researchers attached quantitative electroencephalogram (qEEG) electrodes to 19 areas of each soldier’s scalp.

Hannah and his fellow researchers wanted to know whether great leaders have more complex brains, as measured by the electrodes, which reported which parts of the brain were firing together at the same time. A less complex brain shows more areas operating at the same time, at the same electrical amplitude and frequency, which suggests those areas converge to process a single task, leaving fewer brain resources for other tasks and processes. This coupling is called “phase lock.”

But in more complex brains, the activity patterns are far more varied and differentiated, which suggests more of the brain’s resources are available at any one time to handle other situations or tasks.

“Think of it as a single core versus a multicore computer’s central processing unit (CPU),” Hannah says. “A multicore CPU can multitask because one core can process a task while the other CPU cores remain free to process new tasks. More complex brains are also more efficient in locking together only the brain resources needed to process a task and then efficiently releasing them when no longer needed.”
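Hannah’s CPU analogy maps onto a standard way of quantifying “phase lock” in EEG research: the phase-locking value (PLV), which is 1.0 when two channels hold a constant phase relationship and falls toward 0 as their phases drift independently. The sketch below is an illustrative computation on synthetic signals using NumPy, not the study’s actual qEEG pipeline:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def phase_locking_value(x, y):
    """1.0 when two signals keep a constant phase relation; near 0 when they drift."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# 1 second of synthetic data at 1000 Hz
t = np.linspace(0, 1, 1000, endpoint=False)
alpha_a = np.sin(2 * np.pi * 10 * t)        # a 10 Hz rhythm
alpha_b = np.sin(2 * np.pi * 10 * t + 0.8)  # same rhythm, fixed lag: "phase locked"
noise = np.random.default_rng(0).standard_normal(1000)  # independent activity

print(phase_locking_value(alpha_a, alpha_b))  # near 1.0
print(phase_locking_value(alpha_a, noise))    # much lower
```

In these terms, a “low complex” brain would show high PLV across many electrode pairs at once, while a “high complex” brain would show more pairs with independent, differentiated activity.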

The study showed the high complex brains of the great leaders had a different “landscape.” The scans showed more differentiated activation patterns in the frontal and prefrontal lobes of leaders who demonstrated greater decisiveness, adaptive thinking and positive action orientation in the experiment.

“Further, individuals who have developed richer and more elaborate self-concepts as leaders were found to be more complex and adaptable,” Hannah says. “These findings have important implications for identifying and developing leaders who can lead effectively in today’s changing, dynamic, and often volatile organizational contexts.”

The research team suggests that once they validate neurological profiles of leaders with highly complex brains, they will be able to use established techniques such as neurofeedback to enhance these leadership skills in others. Neurofeedback has been used successfully in the training of elite athletes, concert musicians and financial traders. These profiles can also be used to assess leaders and track their development over time.

These findings have relevance to the WFU School of Business’s new student development framework, which focuses on developing practical wisdom, strategic thinking and critical thinking skills, along with the ability to embrace complexity and ambiguity.

Hannah’s co-authors include Pierre Balthazard, dean of the School of Business at Saint Bonaventure University; David A. Waldman, professor of business at Arizona State University; Peter L. Jennings, of the Center for the Army Profession and Ethic at West Point; and Robert W. Thatcher of the University of South Florida.

This research team is at the forefront of applying neuroscience to study effective leadership. The team previously published a 2012 paper in the Leadership Quarterly, which identified unique brain functioning in leaders who are seen by their followers as highly inspirational and charismatic.

(Source: healthmedicinet.com)

Filed under brain activity leadership decision-making prefrontal cortex neuroscience science

91 notes

Musical memory deficits start in auditory cortex

Congenital amusia is a disorder characterized by impaired musical skills, which can extend to an inability to recognize very familiar tunes. The neural bases of this deficit are now being deciphered. According to a study conducted by researchers from CNRS and Inserm at the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1), amusics exhibit altered processing of musical information in two regions of the brain: the auditory cortex and the frontal cortex, particularly in the right cerebral hemisphere. These alterations seem to be linked to anatomical anomalies in these same cortices. This work, published in May in the journal Brain, adds invaluable information to our understanding of amusia and, more generally, of the “musical brain”, in other words the cerebral networks involved in the processing of music.

Congenital amusia, which affects between 2 and 4% of the population, can manifest itself in various ways: by difficulty in hearing a “wrong note”, by singing “out of tune” and sometimes by an aversion to music. For some of these individuals, music is like a foreign language or a simple noise. Amusia is not due to any auditory or psychological problem and does not seem to be linked to other neurological disorders. Research on the neural bases of this impairment only began a decade ago with the work of the Canadian neuropsychologist Isabelle Peretz.

Two teams from the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1) have studied the encoding of musical information and the short-term memorization of notes. According to previous work, amusical individuals experience particular difficulty in hearing the pitch of notes (low or high) and, although they remember sequences of words normally, they have difficulty in memorizing sequences of notes.

In a bid to determine the regions of the brain involved in these memorization difficulties, the researchers performed magnetoencephalography (a technique that measures, at the surface of the head, the very weak magnetic fields produced by neural activity) on a group of amusics while they performed a musical task. The task consisted of listening to two tunes separated by a two-second gap. The volunteers were asked to determine whether the tunes were identical or different.
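As a toy illustration of that same/different judgment (purely an assumption about representation: the real stimuli were auditory, not symbolic), the tunes can be modeled as short pitch sequences and the task as a comparison between them:

```python
# Toy sketch of the same/different tune task used in the experiment.
def same_tune(tune_a, tune_b):
    """True if two pitch sequences are note-for-note identical."""
    return list(tune_a) == list(tune_b)

original = [60, 62, 64, 65, 67]  # C D E F G as MIDI note numbers
changed = [60, 62, 63, 65, 67]   # third note lowered by a semitone

print(same_tune(original, original))  # True
print(same_tune(original, changed))   # False
```

For amusics, detecting the changed note requires both hearing the pitch difference and holding the first tune in memory across the two-second gap, which is exactly where the study locates the deficit.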

The scientists observed that, when hearing and memorizing notes, amusics exhibited altered sound processing in two regions of the brain: the auditory cortex and the frontal cortex, essentially in the right hemisphere. Compared to non-amusics, their neural activity was delayed and impaired in these specific areas when encoding musical notes. These anomalies occurred 100 milliseconds after the start of a note.

These results agree with an anatomical observation that the researchers have confirmed using MRI: amusical individuals have an excess of grey matter in the inferior frontal cortex, accompanied by a deficit in white matter, one of whose essential constituents is myelin. Myelin surrounds and protects the axons of neurons, helping nerve signals to propagate rapidly. The researchers also observed anatomical anomalies in the auditory cortex. These data lend weight to the hypothesis that amusia could be due to insufficient communication between the auditory cortex and the frontal cortex.

Amusia thus stems from impaired neural processing from the very first steps of sound processing in the auditory nervous system. This work makes it possible to envisage a program to remedy these musical difficulties, by targeting the early steps of the processing of sounds and their memorization.

(Source: www2.cnrs.fr)

Filed under congenital amusia auditory cortex pitch perception memory music neuroscience science

63 notes

Tiny worm sheds light on giant mystery about neurons

Scientists have identified a gene that keeps our nerve fibers from clogging up. Researchers in Ken Miller’s laboratory at the Oklahoma Medical Research Foundation (OMRF) found that the unc-16 gene of the roundworm Caenorhabditis elegans encodes a gatekeeper that restricts flow of cellular organelles from the cell body to the axon, a long, narrow extension that neurons use for signaling. Organelles clogging the axon could interfere with neuronal signaling or cause the axon to degenerate, leading to neurodegenerative disorders. This research, published in the May 2013 Genetics Society of America’s journal GENETICS, adds an unexpected twist to our understanding of trafficking within neurons.

Proteins equivalent to UNC-16 are present in the neurons of all animals, including humans, and are known to interact with proteins associated with neurodegenerative disorders in humans (Hereditary Spastic Paraplegia) and mice (Legs at Odd Angles). However, the underlying cause of these disorders is not well understood.

"Our UNC-16 study provides the first insights into a previously unrecognized trafficking system that protects axons from invasion by organelles from the cell soma," Dr. Miller said. "A breakdown in this gatekeeper may be the underlying cause of this group of disorders," he added.

The use of the model organism C. elegans, a tiny, translucent roundworm with only 302 neurons, enabled the discovery because the researchers were able to apply complex genetic techniques and imaging methods in living organisms, which would be impossible in larger animals. Dr. Miller’s team tagged organelles with fluorescent proteins and then used time-lapse imaging to follow the movements of the organelles. In normal axons, organelles exited the cell body and entered the initial segment of the axon, but did not move beyond that. In axons of unc-16 mutants, the organelles hitched a ride on tiny motors that carried them deep into the axon, where they accumulated.

Dr. Miller acknowledges there are still a lot of unanswered questions. His lab is currently investigating how UNC-16 performs its crucial gatekeeper function by looking for other mutant worms with similar phenotypes. A Commentary on the article, also published in this issue of GENETICS, calls the work “provocative”, and highlights several important questions prompted by this pioneering study.

"This research once again shows how studies of simple model organisms can bring insight into complex neurodegenerative diseases in humans," said Mark Johnston, Editor-in-Chief of the journal GENETICS. “This kind of basic research is necessary if we are to understand diseases that can’t easily be studied in more complex animals.”

(Source: eurekalert.org)

Filed under C. elegans organelles neurodegenerative diseases neurons proteins neuroscience science

264 notes

Paralyzed Patient Moves Prosthetic Arm With Her Mind

It sounds like science fiction, but researchers are gaining ground in developing mind-controlled robotic arms that could give people with paralysis or amputated limbs more independence.

The technology, known as brain-computer (or brain-machine) interface, is in its infancy as far as human use — though scientists have been studying the concept for years. But experts say that people with paralysis or amputations could be using the technology at home within the next decade.

It basically boils down to people using their thoughts to control a robot arm that then performs a desired task, like grasping and moving a cup. That’s done via tiny electrode “grids” implanted in the brain that read the movement signals firing from individual nerve cells, then translate them to the robot arm.
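A common textbook approach to that translation step, sketched below with entirely synthetic data and assumed dimensions (96 channels, 3-D velocity), is a regularized linear mapping from channel firing rates to arm velocity, calibrated on example movements. This is a generic illustration, not the Pittsburgh team’s actual decoder:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 96 recording channels, 500 calibration samples.
n_channels, n_samples = 96, 500

# Synthetic "ground truth": each channel's firing rate is linearly tuned
# to 3-D hand velocity plus noise (a cosine-tuning-style assumption).
true_weights = rng.standard_normal((n_channels, 3))
velocity = rng.standard_normal((n_samples, 3))
rates = velocity @ true_weights.T + 0.1 * rng.standard_normal((n_samples, n_channels))

# Calibration: fit a ridge-regularized linear decoder from rates to velocity.
lam = 1.0
A = rates.T @ rates + lam * np.eye(n_channels)
decoder = np.linalg.solve(A, rates.T @ velocity)  # shape (n_channels, 3)

# Decoding a new intended movement from fresh firing rates:
new_rates = velocity[:10] @ true_weights.T
decoded = new_rates @ decoder
print(np.corrcoef(decoded[:, 0], velocity[:10, 0])[0, 1])  # correlation close to 1
```

In a real system the calibration data come from the patient imagining or watching movements while the electrodes record, after which the fitted mapping runs continuously to drive the arm.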

"We have the ability to capture information from the brain and use it to control the robotic arm," said Dr. Elizabeth Tyler-Kabara, who presented her team’s latest findings on the technology Tuesday, at the annual meeting of the American Association of Neurological Surgeons, in New Orleans.

However, she stressed, “we still have a ton to learn.”

Right now, the robot arm is confined to the lab. After getting their electrodes implanted, study patients come to the lab to work with the robotic limb under the researchers’ supervision. So far, Tyler-Kabara and her colleagues at the University of Pittsburgh School of Medicine have tested the approach in one patient. Researchers at Brown University in Providence, R.I., have done it in a handful of others.

One of the big questions, Tyler-Kabara said, is “how much control is enough?” That is, how well does the mind-controlled arm need to work to bring real everyday benefits to people?

At the meeting on Tuesday, Tyler-Kabara presented an update on how her team’s patient is faring. The 53-year-old woman had long-standing quadriplegia due to a disease called spinocerebellar degeneration — where, for unknown reasons, the connections between the brain and muscles slowly deteriorate.

Tyler-Kabara performed the surgery, where two tiny electrode grids were placed in the area of the brain that would normally control the movement of the right hand and arm. The electrode points penetrate the brain’s surface by about one-sixteenth of an inch.

"The idea is pretty scary," Tyler-Kabara acknowledged. But her team’s patient had no complications from the surgery and left the hospital the next day. There’ve been no longer-term problems either, she said — though, in theory, there would be concerns about infection or bleeding over the long haul.

The surgery left the patient with two terminals that protrude through her skull. The researchers used those to connect the implanted electrodes to a computer, where they could see brain cells firing when the patient thought about moving her hand.

She was quickly able to master simple movements with the robotic arm, like high-fiving the researchers. And after six months, she was performing “10-degrees-of-freedom” movements, Tyler-Kabara reported at the meeting.

That includes not only moving the arm, but also flexing and rotating the wrist, grasping objects and affecting several different hand “postures.” She has accomplished feats like feeding herself chocolate.

The researchers initially used a computer in training sessions with the patient, but the robot arm is now directly linked to the electrodes, so there is no need for “computer assistance,” according to Tyler-Kabara.

Still, before the technology can ultimately be used at home, she said, researchers have to devise a “fully implanted” wireless system for controlling the robot arm.

Another expert talked about the new technology.

"This is one more encouraging step toward developing something practical that people can use in their daily lives," said Dr. Robert Grossman, a neurosurgeon at Methodist Neurological Institute in Houston, who was not involved in the research.

It’s hard to put a time line on it all, Grossman said, since technological advances could change things. He also noted that several research groups are looking at different approaches to brain-computer interfaces.

One, Grossman said, is to do it noninvasively, through electrodes placed on the scalp.

Study author Tyler-Kabara said that noninvasive approach has met with success in helping people perform simple tasks, like moving a cursor on a computer screen. “But I don’t think it will ever be good enough for performing complicated tasks,” she said, noting that it can’t work as precisely as the implanted electrodes.

A next step, Tyler-Kabara said, is to develop a “two-way” electrode system that stimulates the brain to generate sensation — with the aim of helping people adjust the robot’s grip strength.

She said there is also much to learn about which people will ultimately be good candidates for the technology. There may, for example, be some brain injuries that prevent people from benefiting.

Because this study was presented at a medical meeting, the data and conclusions should be viewed as preliminary until published in a peer-reviewed journal.

(Source: health.usnews.com)

Filed under BCI robots robotics prosthetic limbs prosthetic arm neuroscience science

171 notes

Researchers Successfully Treat Autism in Infants

Most infants respond to a game of peek-a-boo with smiles at the very least, and, for those who find the activity particularly entertaining, gales of laughter. For infants with autism spectrum disorders (ASD), however, the game can be distressing rather than pleasant, and they’ll do their best to tune out all aspects of it –– and that includes the people playing with them.

That disengagement is a hallmark of ASD, and one of the characteristics that amplifies the disorder as infants develop into children and then adults.

A study conducted by researchers at the Koegel Autism Center at UC Santa Barbara has found that replacing such games with those the infant prefers can actually lessen the severity of the infants’ ASD symptoms and, perhaps, alleviate the condition altogether. Their work is highlighted in the current issue of the Journal of Positive Behavior Interventions.

Lynn Koegel, clinical director of the center and the study’s lead author, described the game-playing protocol as a modified Pivotal Response Treatment (PRT). Developed at UCSB, PRT is based on principles of positive motivation. The researchers identified the activities that seemed to be more enjoyable to the infants and taught the respective parents to focus on those rather than on the typical games they might otherwise choose. “We had them play with their infants for short periods, and then give them some kind of social reward,” Koegel said. “Over time, we conditioned the infants to enjoy all the activities that were presented by pairing the less desired activities with the highly desired ones.” The social reward is preferable to, say, a toy, Koegel noted, because it maintains the ever-crucial personal interaction.

"The idea is to get them more interested in people," she continued, "to focus on their socialization. If they’re avoiding people and avoiding interacting, that creates a whole host of other issues. They don’t form friendships, and then they don’t get the social feedback that comes from interacting with friends."

According to Koegel, by the end of the relatively short one- to three-month intervention period, which included teaching the parents how to implement the procedures, all the infants in the study had normal reactions to stimuli. “Two of the three have no disabilities at all, and the third is very social,” she said. “The third does have a language delay, but that’s more manageable than some of the other issues.”

On a large scale, Koegel hopes to establish some benchmark for identifying social deficits in infants so parents and health care providers can intervene sooner rather than later. “We have a grant from the Autism Science Foundation to look at lots of babies and try to really figure out which signs are red flags, and which aren’t,” she said. “A number of the infants who show signs of autism will turn out to be perfectly fine; but we’re saying, let’s not take the risk if we can put an intervention in play that really works. Then we don’t have to worry about whether or not these kids would develop the full-blown symptoms of autism.”

Historically, ASD is diagnosed in children 18 months or older, and treatment generally begins around 4 years. “You can pretty reliably diagnose kids at 18 months, especially the more severe cases,” said Koegel. “The mild cases might be a little harder, especially if the child has some verbal communication. There are a few measures –– like the ones we used in our study –– that can diagnose kids pre-language, even as young as six months. But ours was the first that worked with children under 12 months and found an effective intervention.”

Given the increasing number of children being diagnosed with ASD, Koegel’s findings could be life altering –– literally. “When you consider that the recommended intervention for preschoolers with autism is 30 to 40 hours per week of one-on-one therapy, this is a fairly easy fix,” she said. “We did a single one-hour session per week for four to 12 weeks until the symptoms improved, and some of these infants were only a few months old. We saw a lot of positive change.”

(Source: ia.ucsb.edu)

Filed under ASD autism infants socialization social interaction psychology neuroscience science

165 notes

Decoding ‘noisy’ language in daily life

Suppose you hear someone say, “The man gave the ice cream the child.” Does that sentence seem plausible? Or do you assume it is missing a word? Such as: “The man gave the ice cream to the child.”

A new study by MIT researchers indicates that when we process language, we often make these kinds of mental edits. Moreover, it suggests that we seem to use specific strategies for making sense of confusing information — the “noise” interfering with the signal conveyed in language, as researchers think of it.

“Even at the sentence level of language, there is a potential loss of information over a noisy channel,” says Edward Gibson, a professor in MIT’s Department of Brain and Cognitive Sciences (BCS) and Department of Linguistics and Philosophy.

Gibson and two co-authors detail the strategies at work in a new paper, “Rational integration of noisy evidence and prior semantic expectations in sentence interpretation,” published today in the Proceedings of the National Academy of Sciences.

“As people are perceiving language in everyday life, they’re proofreading, or proof-hearing, what they’re getting,” says Leon Bergen, a PhD student in BCS and a co-author of the study. “What we’re getting is quantitative evidence about how exactly people are doing this proofreading. It’s a well-calibrated process.”

Asymmetrical strategies

The paper is based on a series of experiments the researchers conducted, using the Amazon Mechanical Turk survey system, in which subjects were presented with a series of sentences — some evidently sensible, and others less so — and asked to judge what those sentences meant.

A key finding is that given a sentence with only one apparent problem, people are more likely to think something is amiss than when presented with a sentence where two edits may be needed. In the latter case, people seem to assume instead that the sentence is not more thoroughly flawed, but has an alternate meaning entirely.

“The more deletions and the more insertions you make, the less likely it will be you infer that they meant something else,” Gibson says. When readers have to make one such change to a sentence, as in the ice cream example above, they think the original version was correct about 50 percent of the time. But when people have to make two changes, they think the sentence is correct even more often, about 97 percent of the time.

Thus the sentence, “Onto the cat jumped a table,” which might seem to make no sense, can be made plausible with two changes — one deletion and one insertion — so that it reads, “The cat jumped onto a table.” And yet, almost all the time, people will not infer that those changes are needed, and assume the literal, surreal meaning is the one intended.
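The “number of changes” a listener must posit in these examples can be approximated with a word-level edit distance that counts only insertions and deletions. This is a rough proxy for illustration, not the paper’s actual Bayesian noisy-channel model:

```python
def edit_distance(sent_a, sent_b):
    """Minimum insertions + deletions to turn sent_a into sent_b,
    counted over whole words (no substitutions)."""
    a, b = sent_a.lower().split(), sent_b.lower().split()
    m, n = len(a), len(b)
    # Classic DP for longest common subsequence (LCS);
    # insert/delete distance = m + n - 2 * LCS(a, b).
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return m + n - 2 * dp[m][n]

print(edit_distance("The man gave the ice cream the child",
                    "The man gave the ice cream to the child"))  # 1 (one insertion)
print(edit_distance("Onto the cat jumped a table",
                    "The cat jumped onto a table"))              # 2 (one deletion + one insertion)
```

The study’s asymmetry would then amount to listeners tolerating the distance-1 correction about half the time, but almost never positing the distance-2 correction.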

This finding interacts with another one from the study, that there is a systematic asymmetry between insertions and deletions on the part of listeners.

“People are much more likely to infer an alternative meaning based on a possible deletion than on a possible insertion,” Gibson says.

Suppose you hear or read a sentence that says, “The businessman benefitted the tax law.” Most people, it seems, will assume that sentence has a word missing from it — “from,” in this case — and fix the sentence so that it now reads, “The businessman benefitted from the tax law.” But people will less often think sentences containing an extra word, such as “The tax law benefitted from the businessman,” are incorrect, implausible as they may seem.

Another strategy people use, the researchers found, is that when presented with an increasing proportion of seemingly nonsensical sentences, they actually infer lower amounts of “noise” in the language. That means people adapt when processing language: If every sentence in a longer sequence seems silly, people are reluctant to think all the statements must be wrong, and hunt for a meaning in those sentences. By contrast, they perceive greater amounts of noise when only the occasional sentence seems obviously wrong, because the mistakes so clearly stand out.

“People seem to be taking into account statistical information about the input that they’re receiving to figure out what kinds of mistakes are most likely in different environments,” Bergen says.

Reverse-engineering the message

Other scholars say the work helps illuminate the strategies people may use when they interpret language.

“I’m excited about the paper,” says Roger Levy, a professor of linguistics at the University of California at San Diego who has done his own studies in the area of noise and language.

According to Levy, the paper posits “an elegant set of principles” explaining how humans edit the language they receive. “People are trying to reverse-engineer what the message is, to make sense of what they’ve heard or read,” Levy says.

“Our sentence-comprehension mechanism is always involved in error correction, and most of the time we don’t even notice it,” he adds. “Otherwise, we wouldn’t be able to operate effectively in the world. We’d get messed up every time anybody makes a mistake.”

Filed under language speech speech perception language processing linguistics psychology neuroscience science

78 notes

Size, wiring of brain structures in kids predict benefit from math tutoring

Why do some children learn math more easily than others? Research from the Stanford University School of Medicine has yielded an unexpected new answer.

In a study of third-graders’ responses to math tutoring, Stanford scientists found that the size and wiring of specific brain structures predicted how much an individual child would benefit from math tutoring. However, traditional intelligence measures, such as children’s IQs and their scores on tests of mathematical ability, did not predict improvements from tutoring.

The research is the first to use brain scans to look for a link between math-learning abilities and brain structure or function, and also the first to compare neural and cognitive predictors of kids’ responses to tutoring. In addition, it provides information on the differences between how children and adults learn math, and could help researchers understand the origins of math-learning disabilities.

The study was published online April 29 in Proceedings of the National Academy of Sciences.

"What was really surprising was that intrinsic brain measures can predict change - we can actually predict how much a child is going to learn during eight weeks of math tutoring based on measures of brain structure and connectivity," said Vinod Menon, PhD, the study’s senior author and a professor of psychiatry and behavioral sciences. Menon is also a member of the Child Health Research Institute at Lucile Packard Children’s Hospital.

"The results are a significant step toward the development of targeted learning programs based on a child’s current as well as predicted learning trajectory," said the study’s lead author, Kaustubh Supekar, PhD, postdoctoral scholar in psychiatry and behavioral sciences.

Menon’s team focused on third-grade students ages 8 and 9 because these children are at a critical stage for acquiring basic arithmetic skills. The study included 24 third-graders who participated in a well-validated program of 15 to 20 hours of individualized math tutoring over eight weeks. The tutors explained new concepts to children and also got them to practice math skills with an emphasis on speed, and the sessions were tailored to each child’s level of understanding.

Before tutoring began, the children were given several standard neuropsychological assessments, including tests of IQ, working memory, reading and math-problem-solving abilities. Both before and after the eight-week tutoring period, children’s arithmetic performance was tested, and all children had structural and functional magnetic resonance imaging scans performed on their brains. To control for the effects of math instruction the children received at school (rather than during tutoring), a comparison group of 16 third-grade children who did not receive tutoring, but who had the same testing and brain scans before and after an eight-week interval, was also included in the study.

All 24 children receiving tutoring improved their arithmetic performance. Their performance efficiency, a composite measure of accuracy and speed of problem solving, improved an average of 67 percent after tutoring. But individual gains varied widely, ranging from 8 percent to 198 percent improvement. The children who did not receive tutoring did not show any change in arithmetic performance during the study.

When the researchers analyzed the children’s structural brain scans, they found that larger gray matter volume in three brain structures predicted greater ability to benefit from math tutoring. (The predictions were generated with a machine-learning algorithm, the same type of data-analysis tool used to generate movie recommendations on websites such as Netflix.) Of the three structures, the best predictor of improvement with tutoring was a larger hippocampus, a structure traditionally considered one of the brain’s most important memory centers. Functional connections between the hippocampus and several other brain regions, especially the prefrontal cortex and basal ganglia, also predicted ability to benefit from tutoring. These regions are important for forming long-term memories.
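The prediction step described above can be sketched as a leave-one-out regression on simulated data. Everything below is a hypothetical stand-in — the feature values, the weights, and the choice of plain least squares — since the study's actual features and algorithm are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: z-scored gray matter volume in three structures
# ([hippocampus, basal ganglia, prefrontal cortex]) for 24 children,
# plus their tutoring gains. All values are made up for illustration.
n = 24
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, 0.3, 0.2])   # hippocampus weighted most heavily
y = X @ true_w + rng.normal(scale=0.5, size=n)  # performance gains

# Leave-one-out cross-validation: train on 23 children, predict the 24th.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    preds[i] = X[i] @ w

# Correlation between predicted and observed gains measures how well
# "brain structure" predicts individual response to tutoring.
r = np.corrcoef(preds, y)[0, 1]
print(f"predicted-vs-observed r = {r:.2f}")
```

Leave-one-out is the natural choice at this sample size (24 children), since it uses all but one subject for each training fold.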

"The part of the brain that is recruited in memories for places and events also plays a pivotal role in determining how much and how well a child learns math," Supekar said.

None of the neuropsychological assessment scores, such as IQ or tests of working memory, could predict how much an individual child would benefit from tutoring.

The brain systems highlighted by this study - including the hippocampus, basal ganglia and prefrontal cortex - are different from those previously implicated in math learning in adults, the researchers noted. When solving math problems, adults rely on brain regions that are specialized for representing complex visual objects and processing spatial information.

And the findings suggest that the tutoring approach used, which was tailored to each child’s level of understanding and included lots of repetitive, high-speed arithmetic practice to help cement facts in children’s heads, works because it is compatible with the way their brains encode facts. “Memory resources provided by the hippocampal system create a scaffold for learning math in the developing brain,” Menon said. “Our findings suggest that, while conceptual knowledge about numbers is necessary for math learning, repeated, speeded practice and testing of simple number combinations is also needed to encode facts and encourage children’s reliance on retrieval - the most efficient strategy for answering simple arithmetic problems.” Once kids are able to pull up answers to basic arithmetic problems automatically from memory, their brains can tackle more complex problems.

The researchers’ next steps will include comparing brain structure and wiring in children with and without math learning disabilities, analyzing how the wiring of the brain changes in response to tutoring and examining whether lower-performing children’s brains can be exercised to help them learn math. “We’re pushing a very ecologically relevant model of learning,” Menon said. “Academic instruction should rely on validated instructional principles while incorporating individualized training to provide feedback on whether students are right or wrong, how they’re wrong and how they can improve their math skills.”

(Source: med.stanford.edu)

Filed under children math tutoring brain connections brain scans psychology neuroscience science

69 notes

Ear-witness precision: Congenitally blind people have more accurate memories

Distortions and illusions within human memory are well documented in scientific and forensic work and appear to be a basic feature of memory functioning.


Yet several studies suggest that blind individuals, especially those without any visual experience, possess superior verbal and memory skills.

The researchers from the Department of Psychology, in collaboration with a research assistant at Queen Mary, University of London, ran memory tests on three groups: congenitally blind people, people with late-onset blindness, and sighted people.

Each participant was asked to listen to a series of word lists and then recall the words they heard. Past research has found that such word lists normally cause people to falsely “remember” words that are related to those heard, but that were never actually presented. For example, hearing ‘chimney’, ‘cigar’, and ‘fire’ can prompt some people to produce a false memory of the word ‘smoke’ when asked to remember the list of words.
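The scoring logic of such a word-list test can be sketched in a few lines. The list and critical lure below are the illustrative examples from the paragraph above, not the study's actual materials:

```python
# Minimal scorer for a DRM-style false-memory test (illustrative only).
studied = {"chimney", "cigar", "fire", "ash", "tobacco"}
critical_lure = "smoke"   # related word that was never presented

def score_recall(recalled):
    """Return (number of correct recalls, whether the lure was falsely recalled)."""
    recalled = {w.lower() for w in recalled}
    correct = len(recalled & studied)       # studied words correctly recalled
    false_memory = critical_lure in recalled  # "remembered" a word never heard
    return correct, false_memory

print(score_recall(["fire", "smoke", "cigar"]))  # → (2, True)
```

Comparing the two numbers across groups is what lets the researchers say congenitally blind participants recalled more studied words while producing fewer false memories.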

The researchers found that the congenitally blind participants not only remembered more words but were also less likely to create false memories of the related words. In contrast, the sighted and late-blind participants remembered fewer words and were much more likely to falsely remember related words that had never been read to them.

Dr Achille Pasqualotto, postdoctoral researcher and first author of the study, said: “We found that congenitally blind participants reported significantly more correct words than both late-onset blind and sighted people. Most of the congenitally blind participants avoided the unrelated words; therefore, congenitally blind participants can store more items, and with a higher fidelity.”

Dr Michael Proulx who led the study added: “Our results show that visual experience has a significant negative impact on both the number of items remembered and the accuracy of semantic memory and also demonstrate the importance of adaptive neural plasticity in the congenitally blind brain for enhanced memory retrieval mechanisms.

“There is an old Hebrew proverb that holds the blind to be the most trustworthy sources for quotations, and that certainly seems true in this case. It will be interesting to see whether congenitally blind individuals would also be better witnesses in forensic studies.”

The research is reported in the paper Congenital blindness improves semantic and episodic memory, published in the journal Behavioural Brain Research.

(Source: bath.ac.uk)

Filed under congenital blindness false memories memory visual experience psychology neuroscience science

115 notes

Sniffing Out Schizophrenia

Neurons in the nose could be the key to early, fast, and accurate diagnosis, says a TAU researcher


A debilitating mental illness, schizophrenia can be difficult to diagnose. Because physiological evidence confirming the disease can only be gathered from the brain during an autopsy, mental health professionals have had to rely on a battery of psychological evaluations to diagnose their patients.

Now, Dr. Noam Shomron and Prof. Ruth Navon of Tel Aviv University’s Sackler Faculty of Medicine, together with PhD student Eyal Mor from Dr. Shomron’s lab and Prof. Akira Sawa of Johns Hopkins Hospital in Baltimore, Maryland, have discovered a method for physical diagnosis — by collecting tissue from the nose through a simple biopsy. Surprisingly, collecting and sequencing neurons from the nose may lead to “more sure-fire” diagnostic capabilities than ever before, Dr. Shomron says.

This finding, which was reported in the journal Neurobiology of Disease, could not only lead to a more accurate diagnosis, it may also permit the crucial, early detection of the disease, giving rise to vastly improved treatment overall.

From the nose to diagnosis

Until now, biomarkers for schizophrenia had only been found in neurons of the brain, which can’t be collected before death. By that point it’s obviously too late to do the patient any good, says Dr. Shomron. Instead, psychiatrists depend on psychological evaluations for diagnosis, including interviews with the patient and reports by family and friends.

For a solution to this diagnostic dilemma, the researchers turned to the olfactory system, which includes neurons located on the upper part of the inner nose. Researchers at Johns Hopkins University collected samples of olfactory neurons from patients diagnosed with schizophrenia and a control group of non-affected individuals, then sent them to Dr. Shomron’s TAU lab.

Dr. Shomron and his fellow researchers applied a high-throughput technology to these samples, studying the microRNA of the olfactory neurons. Within these molecules, which help to regulate our genetic code, they were able to identify a microRNA which is highly elevated in those with schizophrenia, compared to individuals who do not have the disease.

"We were able to narrow down the microRNA to a differentially expressed set, and from there down to a specific microRNA which is elevated in individuals with the disease compared to healthy individuals," explains Dr. Shomron. Further research revealed that this particular microRNA controls genes associated with the generation of neurons.
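Identifying an elevated microRNA amounts to comparing expression between the patient and control groups. Here is a minimal sketch on simulated data, assuming normalized expression values and using a fold change plus a Welch t-statistic; the study's actual pipeline, sample sizes, and miRNA identity are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expression levels (arbitrary units) of one microRNA in
# olfactory-neuron samples; all numbers are simulated for illustration.
patients = rng.normal(loc=12.0, scale=1.0, size=10)
controls = rng.normal(loc=8.0, scale=1.0, size=10)

# Fold change: how elevated the microRNA is in patients vs. controls.
fold_change = patients.mean() / controls.mean()

# Welch's t-statistic (allows unequal variances between the groups).
def welch_t(a, b):
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t = welch_t(patients, controls)
print(f"fold change = {fold_change:.2f}, t = {t:.2f}")
```

In a real differential-expression screen this comparison would be run across many microRNAs at once, with the resulting p-values corrected for multiple testing before any single miRNA is singled out.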

In practice, material for biopsy could be collected through a quick and easy outpatient procedure, using a local anesthetic, says Dr. Shomron. And with microRNA profiling results ready in a matter of hours, this method could evolve into a relatively simple and accurate test to diagnose a very complicated illness.

Early detection, early intervention

Though there is much more to investigate, Dr. Shomron has high hopes for this diagnostic method. It’s important to determine whether this alteration in microRNA expression appears before schizophrenia symptoms emerge, or only after the disease fully develops, he says. If the change comes near the beginning of the timeline, it could be invaluable for early diagnostics. This would mean early intervention, better treatment, and possibly even the postponement of symptoms.

If, for example, a person has a family history of schizophrenia, this test could reveal whether they too suffer from the disease. And while such advanced warning doesn’t mean a cure is on the horizon, it will help both patient and doctor identify and prepare for the challenges ahead.

(Source: aftau.org)

Filed under schizophrenia olfactory system diagnosis neurons microRNA neuroscience science

43 notes

New subtype of ataxia identified

The finding opens the door for presymptomatic diagnostics and genetic counselling for patients and it is the first step in identifying the cause and developing therapies


(Image: Antony Gormley)

Researchers from the Germans Trias i Pujol Health Sciences Research Institute Foundation (IGTP), the Bellvitge Biomedical Research Institute (IDIBELL), and the Sant Joan de Déu de Martorell Hospital have identified a new subtype of ataxia, a rare, untreatable disease that causes atrophy of the cerebellum and affects around 1.5 million people worldwide. The results were published online on April 29 in the journal JAMA Neurology.

Ataxia can be caused by a variety of genetic alterations, and for this reason it is classified into subtypes. The new subtype described by the researchers has been named SCA37. The study found this subtype in members of the same family living in Barcelona, Huelva, Madrid and Salamanca (Spain). In the medium term, the finding will allow these families, and anyone who carries the identified genetic alteration, to receive personalized therapies and to be diagnosed before the disease develops. The study was funded by the 2009 edition of La Marató de TV3 (the Catalan public television telethon), which was dedicated to rare diseases.

The cerebellum is a part of the brain, located at its rear, that, among other functions, coordinates the movements of the human body. When it atrophies, movement disorders appear, and as the ataxia progresses patients suffer frequent falls and swallowing problems. Progressively, they end up needing a wheelchair. More than 30 different subtypes of ataxia have been identified to date; the first was described in 1993 by Dr. Antoni Matill, head of the Neurogenetics Unit at IGTP, and Dr. Victor Volpini, head of the Center for Molecular Genetic Diagnosis at IDIBELL.

The publication of this paper has been possible thanks to the collaboration of the Hospital de Sant Pau, Universitat Pompeu Fabra and the Pitie-Salpêtrière Hospital in Paris.

Particular eye movements

The first symptoms of ataxia may develop during childhood or adulthood, depending on the subtype. The SCA37 subtype, the first cases of which were identified by Carme Serrano, a neurologist at the Sant Joan de Déu Hospital in Martorell (Barcelona), first appears at an average age of 48. One of the features of the SCA37 subtype is difficulty with vertical eye movements. Besides the patients identified in Spain by Dr. Serrano and the Germans Trias and IDIBELL researchers, there is evidence of more people affected by this subtype of ataxia in France, the Netherlands and Britain, which suggests it is a fairly prevalent subtype of ataxia in Europe.

All SCA37 patients share a genetic alteration in band 32 of the short arm of chromosome 1, a region containing around a hundred genes. Researchers are currently sequencing this region with next-generation technologies to find the specific mutation that causes the ataxia. Once it is found, it will be possible to make an accurate diagnosis in family members who have not yet developed symptoms, and to investigate the biological mechanisms that cause the ataxia in order to develop and apply personalized therapies, whether with drugs or with stem cell therapy.

(Source: eurekalert.org)

Filed under ataxia cerebellum genetic alteration SCA37 subtype eye movements neuroscience science
