Posts tagged science
Children of Blind Mothers Learn New Modes of Communication
A loving gaze helps firm up the bond between parent and child, building social skills that last a lifetime. But what happens when mom is blind? A new study shows that the children of sightless mothers develop healthy communication skills and can even outstrip the children of parents with normal vision.
Eye contact is one of the most important aspects of communication, according to Atsushi Senju, a developmental cognitive neuroscientist at Birkbeck, University of London. Autistic people don’t naturally make eye contact, however, and they can become anxious when urged to do so. Children for whom face-to-face contact is drastically reduced—babies severely neglected in orphanages or children who are born blind—are more likely to have traits of autism, such as the inability to form attachments, hyperactivity, and cognitive impairment.
To determine whether eye contact is essential for developing normal communication skills, Senju and colleagues chose a less extreme example: babies whose primary caregivers (their mothers) were blind. These children had other forms of loving interaction, such as touching and talking. But the mothers were unable to follow the babies’ gaze or teach the babies to follow theirs, which normally helps children learn the importance of the eyes in communication.
Apparently, the children don’t need the help. Senju and colleagues studied five babies born to blind mothers, checking the children’s proficiency at 6 to 10 months, 12 to 15 months, and 24 to 47 months on several measures of age-appropriate communication skills. At the first two visits, babies watched videos in which a woman shifted her gaze or moved different parts of her face while corresponding changes in the baby’s face were recorded. Babies also followed the gaze of a woman sitting at a table and looking at various objects.
The babies also played with unfamiliar adults in a test that checked for autistic traits, such as the inability to maintain eye contact, not smiling in response to the adult’s smile, and being unable to switch attention from one toy to a new one. At each age, the researchers assessed the children’s visual, motor, and language skills.
When the results were compared with the scores of children of sighted parents, the five children of blind mothers did just as well on the tests, the researchers report today in the Proceedings of the Royal Society B. Learning to communicate with their blind mothers also seemed to give the babies some advantages. For example, even at the youngest age tested, the babies directed fewer gazes toward their mothers than toward adults with normal vision, suggesting that they were already learning that strangers would communicate differently than their mothers did. When they were between 12 and 15 months old, the babies of blind mothers were also more verbal than other children of the same age. And the youngest babies of blind mothers outscored their peers in developmental tests—especially visual tasks such as remembering the location of a hidden toy or switching their attention from one toy to a new one presented by the experimenter.
Senju likens their skills to those of children who grow up bilingual; the need to shift between modes of communication may boost the development of their social skills, he says. “Our results suggest that the babies aren’t passively copying the expressions of adults, but that they are actively learning and changing the way to best communicate with others.”
"The use of sighted babies of blind mothers is a clever and important idea," says developmental scientist Andrew Meltzoff of the University of Washington’s Institute for Learning and Brain Sciences in Seattle. "The mother’s blindness may teach a child at an early age that certain people turn to look at things and others don’t. Apparently these little babies can learn that not everyone reacts the same way."
Meltzoff adds that there are many ways to pay attention to a child. “Doubtless, the blind mothers use touch, sounds, tugs on the arm, and tender pats on the back. Our babies want communication, love, and attention. The fact that these can come through any route is a remarkable demonstration of the adaptability of the human child.”
New research has questioned the reliability of neuroscience studies, saying that conclusions could be misleading due to small sample sizes.

A team led by academics from the University of Bristol reviewed 48 neuroscience meta-analyses published in 2011 and concluded that the studies they covered had an average statistical power of around 20 per cent, meaning the average study had only a one-in-five chance of detecting the effect it was investigating, even when that effect was real.
The paper, published in Nature Reviews Neuroscience, reveals that small, low-powered studies are ‘endemic’ in neuroscience, producing unreliable research which is inefficient and wasteful.
It focuses on how low statistical power – caused by low sample size of studies, small effects being investigated, or both – can be misleading and produce more false scientific claims than high-powered studies.
It also illustrates how low power reduces a study’s ability to detect any effects and shows that when discoveries are claimed, they are more likely to be false or misleading.
The paper claims there is substantial evidence that a large proportion of research published in scientific literature may be unreliable as a consequence.
Another consequence is that effect sizes are overestimated: when a low-powered study does detect an effect, it tends to report a larger effect than the true one, because smaller studies consistently give more positive results than larger studies. This was found to be the case for studies using a diverse range of methods, including brain imaging, genetics and animal studies.
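This inflation can be demonstrated with a short simulation. The sketch below is a toy one-sample design with known variance, purely illustrative and not modelled on any study from the review: only the runs that happen to cross the significance threshold are kept, and their average estimate overshoots the true effect.

```python
import random

def significant_effect_estimates(true_d, n, trials=2000, z_crit=1.96):
    """Simulate `trials` small studies of a true effect `true_d` (in SD units)
    with n subjects each; return only the effect estimates that reached p < .05."""
    estimates = []
    for _ in range(trials):
        sample = [random.gauss(true_d, 1.0) for _ in range(n)]
        mean = sum(sample) / n
        z = mean * n ** 0.5  # known SD = 1, so a simple z-test
        if abs(z) > z_crit:
            estimates.append(mean)
    return estimates

random.seed(1)
sig = significant_effect_estimates(true_d=0.3, n=15)
print(sum(sig) / len(sig))  # noticeably larger than the true 0.3
```

Because a study of 15 subjects can only reach significance when its sample mean lands well above the true effect, the published (significant) estimates are biased upward, which is exactly the pattern the review describes.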
Kate Button, from the School of Social and Community Medicine, and Marcus Munafò, from the School of Experimental Psychology, led a team of researchers from Stanford University, the University of Virginia and the University of Oxford.
Dr Button said: “There’s a lot of interest at the moment in improving the reliability of science. We looked at the neuroscience literature and found that, on average, studies had only around a 20 per cent chance of detecting the effects they were investigating, even if the effects are real. This has two important implications: many studies lack the ability to give definitive answers to the questions they are testing, and many claimed findings are likely to be incorrect or unreliable.”
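The one-in-five figure can be checked with a back-of-the-envelope power calculation. The sketch below uses the standard normal approximation for a two-sided, two-group comparison; the effect size and group size are illustrative choices, not figures taken from the paper.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided, two-group comparison at alpha = .05:
    the probability of detecting a true standardized effect d."""
    z_crit = 1.96
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

# A "medium" standardized effect (d = 0.5) with only 10 subjects per group:
print(round(power_two_sample(0.5, 10), 2))  # prints 0.2
```

With this effect size, a study of 10 subjects per group has roughly a one-in-five chance of a significant result, while raising the groups to 100 subjects each pushes power above 90 per cent.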
The study concludes that improving the standard of results in neuroscience, and enabling them to be more easily reproduced, is a key priority and requires attention to well-established methodological principles.
It recommends small changes or additions to existing scientific practices, such as acknowledging any limitations in the interpretation of results; disclosing methods and findings transparently; and working collaboratively to increase the total sample size and power.
(Source: bristol.ac.uk)
The eyes sometimes have it, beating out the tongue, nose and brain in the emotional and biochemical balloting that determines the taste and allure of food, a scientist said today. Speaking at the 245th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, he described how people sometimes “see” flavors in foods and beverages before actually tasting them.
“There have been important new insights into how people perceive food flavors,” said Terry E. Acree, Ph.D. “Years ago, taste was a table with two legs — taste and odor. Now we are beginning to understand that flavor depends on parts of the brain that involve taste, odor, touch and vision. The sum total of these signals, plus our emotions and past experiences, result in perception of flavors, and determine whether we like or dislike specific foods.”

Acree said that people actually can see the flavor of foods, and the eyes have such a powerful role that they can trump the tongue and the nose. The popular Sauvignon Blanc white wine, for instance, gets its flavor from scores of natural chemicals, including some with the flavor of banana, passion fruit, bell pepper and boxwood. But when served a glass of Sauvignon Blanc tinted the deep red of a merlot or cabernet, people report tasting the flavors characteristic of those red wines instead.
The sense of smell likewise can trump the taste buds in determining how things taste, said Acree, who is with Cornell University. In a test that people can do at home, psychologists have asked volunteers to smell caramel, strawberry or other sweet foods and then take a sip of plain water; the water will taste sweet. But smell bread, meat, fish or other non-sweet foods, and water will not taste sweet.
While the appearance of foods probably is important, other factors can override it. Acree pointed out that hashes, chilies, stews and cooked sausages have an unpleasant look, like vomit or feces. However, people savor these dishes based on the memory of eating and enjoying them in the past. The human desire for novelty and new experiences is also a factor in the tendency to ignore what the eyes may be “tasting” and listen to the tongue and nose instead, he added.
Acree said understanding the interactions among smell, vision and taste, as well as between different odorants, will open the door to developing healthful foods that look and smell more appealing to finicky kids or adults.
(Source: portal.acs.org)
New therapy device enables stroke victims to recover further
Scientists from Nanyang Technological University (NTU) have developed a new stroke rehabilitation device which greatly improves recovery in stroke patients.
Thanks to this invention, stroke patients who had undergone conventional rehabilitation for a year or more and had hit a plateau in their recovery, managed to make significant progress in their ability to carry out everyday tasks.
Some of these long-term stroke sufferers improved their clinical motor-function scores by up to 70 per cent in just a month during the trial.
The new stroke therapy system, known as Synergistic Physio-Neuro Platform (SynPhNe), is currently undergoing thorough clinical investigations and more feasibility trials at local hospitals.
Over 150 therapy hours of use, no side effects have been observed so far. Patients who tried SynPhNe also said they experienced little fatigue while using the easy-to-use system.
Developed by Dr John Heng, a senior research fellow at NTU’s School of Mechanical and Aerospace Engineering and his PhD student, Mr Banerji Subhasis, this system gives hope to frustrated patients who want to see more progress after completing conventional rehabilitation therapies.
The NTU research team of four has published over 11 scientific papers since 2008 on the principles of the system, its effectiveness and ease of use.
“While current rehabilitation systems do benefit many patients, there are also other patients who still have difficulties performing everyday activities like holding a fork or drinking from a cup, despite the usual rehab sessions,” said Dr Heng.
“SynPhNe works by giving real-time feedback to the patients on what is happening in their mind and in their muscles. Patients using SynPhNe know where their problems lie and can slowly work towards overcoming each problem, instead of feeling frustrated and going through a painful, expensive and prolonged trial-and-error process when their improvements are not visible.”
How it works
SynPhNe consists of patented computer software connected to a specially designed headset with neural sensors and a sensor arm glove. The device is designed to be worn easily by stroke patients who usually have control of only one arm.
These sensors provide feedback on the stress, attention, and relaxation levels of the mind and which muscles are being activated or inhibited by the patient. The software contains instructional videos for limb movements which the patient can mimic to improve his/her performance of various tasks.
Sensor information is displayed in real time via the computer screen so that the patient is aware of what is happening in his mind and body while undergoing the rehabilitation exercises.
Dr Heng said that while multi-modal associative learning is known to be useful in infant development and in education, this is the first time his research team has adapted it for stroke therapy. Tested on 10 patients so far, it has proved very effective in accelerating recovery in stroke patients.
In associative learning, a patient will find out the link between cause and effect, or intent and physical result. The patient learns what he/she wants to do and what is actually happening with their limbs. This helps the patient to self-correct movements to match intended actions.
“For example, if a patient wants to move his wrist, but his wrist is not moving, SynPhNe will be able to show him that his mind had sent out a signal, his muscles have received it, but because supporting and opposing muscles are clenched, he will need to relax the opposing muscle in order to move his wrist,” Mr Subhasis explained.
“Another common problem is that the patient may feel stressed while undergoing therapy, which affects his muscle control. So by showing the stress level on the screen, SynPhNe will teach the patient how to control his breathing and posture to regain his balance and composure so that he can continue with the exercises.
“In short, SynPhNe makes patients aware of what is happening with their bodies so they learn how to relax their mind and muscles. This helps them to re-learn simple actions like holding a pen or a cup which may be arduous tasks for stroke victims.”
Ramping up patient trials
Patient trials are still ongoing: 10 patients have each undergone 12 sessions, lasting 90 minutes apiece. Over a four-week period, all have shown some improvement on the clinical scales. Patients with hand control and hand weakness problems improved the most, in several cases by up to 70 per cent.
The scientists started the patient trials in October 2012 at Tan Tock Seng Hospital and are embarking on another similar trial at the National University Hospital. Talks are underway to start another trial at Singapore General Hospital and in India.
SynPhNe, which took over five years to develop, has also won successive grants from the National Medical Research Council, the National Research Foundation’s Proof-of-Concept grant and the Singapore-MIT Alliance for Research and Technology (SMART)’s Innovation Grant.
Start-up to look into commercialisation
Apart from conducting further trials involving 50 more patients, the next step for the scientists is to form a start-up company to turn the SynPhNe prototype into a portable stroke therapy kit for home use. This kit is expected to be cheaper than most robotic rehabilitation systems on the market, which can cost tens of thousands of dollars.
“This reduction in cost will allow for perhaps a rental or subsidy scheme for patients who wish to practise in the convenience of their own home instead of having to go to rehabilitation centres. It has the added advantage of providing constant updates of instructional videos and exercises to match the patient’s improvement and can even send their reports to their therapists via the device’s Wi-Fi capabilities,” Dr Heng added.
The idea to develop SynPhNe was inspired by the mind-and-body-as-one philosophy preached in traditional practices such as Taichi, Aikido and Yoga, and the health benefits they bring.
Mr Subhasis, a martial arts and yoga practitioner for more than 30 years, had sought to bring these health benefits to people through modern yet simple, affordable technology. In the latest study, the patients who synergised their minds and bodies best (based on the brain and muscle signals recorded by SynPhNe) made the most dramatic improvements.
“Training the patients to self-regulate their mind and body increases their confidence to make positive changes in their lives. It also helps therapists better customize rehabilitation routines based on the individual patient’s capabilities and perceptions,” Mr Subhasis added.
The Singapore-MIT Alliance (SMART) and Technology Transfer Office at NTU (NIEO) are assisting the research group with the commercialisation process.
Why do some memories last a lifetime while others disappear quickly?

(Image: Tim Vernon, LTH NHS TRUST/SCIENCE PHOTO LIBRARY)
A new study suggests that rehearsing memories, during either sleep or waking, can influence memory consolidation and what is remembered later.
The new Northwestern University study shows that when the information that makes up a memory has a high value (associated with, for example, making more money), the memory is more likely to be rehearsed and consolidated during sleep and, thus, be remembered later.
Also, through the use of a direct manipulation of sleep, the research demonstrated a way to encourage the reactivation of low-value memories so they too were remembered later.
Delphine Oudiette, a postdoctoral fellow in the department of psychology at Northwestern and lead author of the study, designed the experiment to study how participants remembered locations of objects on a computer screen. A value assigned to each object informed participants how much money they could make if they remembered it later on the test.
"The pay-off was much higher for some of the objects than for others," explained Ken Paller, professor of psychology at Northwestern and co-author of the study. "In other words, we manipulated the value of the memories — some were valuable memories and others not so much, just as the things we experience each day vary in the extent to which we’d like to be able to remember them later."
When each object was shown, it was accompanied by a characteristic sound. For example, a tea kettle would appear with a whistling sound. During both states of wakefulness and sleep, some of the sounds were played alone, quite softly, essentially reminding participants of the low-value items.
Participants remembered the low-value associations better when the sound presentations occurred during sleep.
"We think that what’s happening during sleep is basically the reactivation of that information," Oudiette said. "We can provoke the reactivation by presenting those sounds, therefore energizing the low-value memories so they get stored better."
"The research has provocative implications for the role memory reactivation during sleep could play in improving memory storage," said Paller, director of the Cognitive Neuroscience Program at Northwestern. "Whatever makes you rehearse during sleep is going to determine what you remember later, and conversely, what you’re going to forget."
Many memories that are stored during the day are not remembered.
"We think one of the reasons for that is that we have to rehearse memories in order to keep them. When you practice and rehearse, you increase the likelihood of later remembering," Oudiette said. "And a lot of our rehearsal happens when we don’t even realize it — while we’re asleep."
Paller said selectivity of memory consolidation is not well understood. Most efforts in memory research have focused on what happens when you first form a memory and on what happens when you retrieve a memory.
"The in-between time is what we want to learn more about, because a fascinating aspect of memory storage is that it is not static," Paller said. "Memories in our brain are changing all of the time. Sometimes you improve memory storage by rehearsing all the details, so maybe later you remember better — or maybe worse if you’ve embellished too much.
"The fact that this critical memory reactivation transpires during sleep has mostly been hidden from us, from humanity, because we don’t realize so much of what’s happening while we’re asleep," he said.
(Source: eurekalert.org)
A Chimp’s Point Of View: Goggles simultaneously monitor a chimpanzee’s eyes and field of view
Chimps with camera goggles on their heads are helping scientists learn how the apes literally see the world.
From a scientific perspective, the eyes are windows to the mind. What people watch is one key sign of what they might be thinking, so monitoring their gazes can help researchers learn about what is going on inside people’s heads.
Scientists have conducted eye-tracking studies on people for more than 100 years. However, comparably little work has been conducted with other primates. Such work promises to shed light on humanity’s closest living relatives, and how they might perceive the world differently.
"If we know the differences between chimpanzees and humans, we will have an insight into how human perception has evolved," said comparative psychologist Fumihiro Kano at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
Until recently, eye-tracking research involved desk-sized machines confined to labs. Investigators now have access to portable, wearable eye-trackers, enabling scientists to learn how people look at and interact with the world in a more natural way. This enables them to research topics such as how experts look at the world differently from novices. Now Kano and his colleagues are using these devices to study chimps.
"Everybody wants to see the world through chimpanzee eyes, right?" Kano said. "That’s one of my childhood dreams. How do chimpanzees, the closest relatives of humans, see the world?"
The researchers fitted a 27-year-old female chimpanzee named Pan with lightweight goggles that had one camera monitoring her right eye and another aimed at her field of view, both of which sent data to a portable recorder. The mobile setup allowed the chimp to move and behave freely.
"We modified the eye-tracker goggle shape so that the chimpanzee could wear it and like it," Kano said. "If the chimpanzee felt uncomfortable wearing the goggles, she wouldn’t care about throwing it away!"
When Pan wore the eye-tracker, the scientists ran a two-minute gestural task with her that she had practiced for several years. The researchers performed one of three gestures — touching their noses, touching their palms, or clapping their hands — and gave Pan pieces of apple from a transparent box as a reward whenever she copied the gesture. The goggles also captured the greetings Pan often gave people before tasks, such as pant-grunting or swaying.
"No researcher has been successful in recording the natural gaze of chimpanzees before," Kano said.
The researchers found out how Pan looked at the world differently depending on what she was doing. For instance, when greeting experimenters, the chimpanzee focused on their faces and feet — the latter presumably to see where they were going — but during the gestural task, she gazed at the experimenters’ faces and hands. In addition, while Pan mostly ignored the fruit reward before the gestural task, she looked at it 30 times more during the task. Kano indicated that this focus on the fruit reveals that Pan was thinking ahead to anticipate the future.
"This work builds toward an understanding not just of how chimpanzees learn about the world, but how they want to influence it," said neuroethologist Stephen Shepherd at Rockefeller University in New York, who did not take part in this research. "We can use gaze as a readout of what chimpanzees think is important to attend and affect."
Moreover, past research with desk-mounted eye-trackers hinted that chimps did not look at familiar faces any longer than unfamiliar ones, but these new findings suggest otherwise: Pan looked at unfamiliar experimenters longer than at familiar ones.
The researchers think one reason for the difference may be that the previous studies used pictures of faces, shown for a shorter amount of time. In the new experiment, Pan also looked longer at familiar people if they were not in rooms where she was accustomed to seeing them.
The researchers plan on testing more chimpanzees with these wearable eye-trackers. They also want to compare the apes with people and other primates.
"It will be very interesting to see how humans, chimpanzees and other primates use gaze while performing the same real-world tasks," Shepherd said. "I would love to know if chimpanzees are intermediate between humans and monkeys, or if they’re just like humans."
In addition, future research will analyze how chimpanzees predict the actions of people and other chimpanzees. How the apes predict the actions of others in real-time, “that is, within a fraction of a second, is largely unknown,” Kano said.
Kano and his colleague Masaki Tomonaga detailed their findings online March 27 in the journal PLOS ONE.
People often think that other people are staring at them even when they aren’t, vision scientists have found.
In a new article in Current Biology, researchers at The Vision Centre reveal that, when in doubt, the human brain is more likely to tell its owner that they’re under the gaze of another person.
“Gaze perception – the ability to tell what a person is looking at – is a social cue that people often take for granted,” says Professor Colin Clifford of The Vision Centre and The University of Sydney.
“Judging if others are looking at us may come naturally, but it’s actually not that simple – our brains have to do a lot of work behind the scenes.”
To tell if they’re under someone’s gaze, people look at the position of the other person’s eyes and the direction of their heads, Prof. Clifford explains. These visual cues are then sent to the brain where there are specific areas that compute this information.
However, the brain doesn’t just passively receive information from the eyes, Prof. Clifford says. The new study shows that when people have limited visual cues, such as in dark conditions or when the other person is wearing sunglasses, the brain takes over with what it ‘knows’.
In their study, the Vision Centre researchers created images of faces and asked people to observe where the faces were looking.
“We made it difficult for the observers to see where the eyes were pointed so they would have to rely on their prior knowledge to judge the faces’ direction of gaze,” Prof. Clifford explains. “It turns out that we’re hard-wired to believe that others are staring at us, especially when we’re uncertain.
“So gaze perception doesn’t only involve visual cues – our brains generate assumptions from our experiences and match them with what we see at a particular moment.”
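One way to picture this "prior plus evidence" account, purely as an illustration and not as the authors’ actual model, is a two-hypothesis Bayes calculation: when the visual evidence is uninformative (a dark room, sunglasses), a prior belief that one is being looked at dominates the judgement.

```python
def posterior_looking(prior_looking, p_evidence_if_looking, p_evidence_if_not):
    """Two-hypothesis Bayes rule: is the other person looking at me?
    All three arguments are probabilities; the numbers below are made up."""
    num = prior_looking * p_evidence_if_looking
    den = num + (1 - prior_looking) * p_evidence_if_not
    return num / den

# Clear view: the evidence strongly favours "not looking", and it wins.
print(posterior_looking(0.6, 0.1, 0.9))  # ≈ 0.14

# Poor viewing conditions: the evidence is uninformative (equally likely
# either way), so the prior toward being looked at carries the decision.
print(posterior_looking(0.6, 0.5, 0.5))  # = 0.6
```

The point of the toy calculation is only that an assumed prior toward direct gaze produces exactly the bias described: uncertain observers default to "they’re looking at me".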
There are several possible explanations for why humans have this bias, Prof. Clifford says. “Direct gaze can signal dominance or a threat, and if you perceive something as a threat, you would not want to miss it. So assuming that the other person is looking at you may simply be a safer strategy.”
“Also, direct gaze is often a social cue that the other person wants to communicate with us, so it’s a signal for an upcoming interaction.”
There is also evidence that babies have a preference for direct gaze, which suggests that this bias is innate, Prof. Clifford says. “It’s important that we find out whether it’s innate or learned – and how this might affect people with certain mental conditions.
“Research has shown, for example, that people who have autism are less able to tell whether someone is looking at them. People with social anxiety, on the other hand, have a higher tendency to think that they are under the stare of others.
“So if it is a learned behaviour, we could help them practice this task – one possibility is letting them observe a lot of faces with different eyes and head directions, and giving them feedback on whether their observations are accurate.”
New learning and memory neurons uncovered
A University of Queensland study has identified precisely when new neurons become important for learning.
Lead researcher Dr Jana Vukovic from UQ’s Queensland Brain Institute (QBI) said the study highlighted the importance of new neuron development.
“New neurons are continually produced in the brain, passing through a number of developmental stages before becoming fully mature,” Dr Vukovic said.
“Using a genetic technique to delete immature neurons in animal models, we found they had great difficulty learning a new spatial task.
“There are ways to encourage the production of new neurons – including physical exercise – to improve learning.
“The new neurons appear particularly important for the brain to detect subtle but critical differences in the environment that can impact on the individual.”
The study, performed in QBI Director Professor Perry Bartlett’s laboratory, also demonstrates that immature neurons, born in a region of the brain known as the hippocampus, are required for learning but not for the retrieval of past memories.
“On the other hand, if the animals needed to remember a task they had already mastered in the past, before these immature neurons were deleted, their ability to perform the task was the same – so, they’ve remembered the task they learned earlier,” Dr Vukovic said.
This research allows for better understanding of the processes underlying learning and memory formation.
(Image Caption: Newly generated doublecortin-positive neurons in the dentate gyrus of a degenerating hippocampus in mutant mice lacking the transcription factor TIF-IA. Credit: Rosanna Parlato, AG Schütz, DKFZ-ZMBH Alliance)
Protein spheres in the nucleus give wrong signal for cell division

RUB researchers develop new hypothesis for the degeneration of nerve cells
Researchers in Bochum have developed a new hypothesis on how Alzheimer’s disease could arise. They analysed the interaction of the proteins FE65 and BLM, which regulate cell division. In a cell culture model, they discovered spherical structures in the nucleus that contained FE65 and BLM. The interaction of the proteins triggered an erroneous signal for cell division, which may explain the degeneration and death of nerve cells in Alzheimer’s patients. The team led by Dr. Thorsten Müller and Prof. Dr. Katrin Marcus from the Department of Functional Proteomics, in cooperation with the RUB’s Medical Proteome Centre headed by Prof. Helmut E. Meyer, reported the results in the “Journal of Cell Science”.
Components of spherical structures in the nucleus identified
The so-called amyloid precursor protein APP is central to Alzheimer’s disease. It spans the cell membrane, and its cleavage products are linked to protein deposits that form in Alzheimer patients outside the nerve cells. APP anchors the protein FE65 to the membrane, which was the focus of the current study. FE65 can migrate into the nucleus, where it plays a role in DNA replication and repair. Based on cells grown in the laboratory, the team led by Dr. Müller established that FE65 can unite with other proteins in the cell nucleus to form spherical structures, so-called “nuclear spheres”. Video microscopy showed that these ring-like structures merge with each other and can thus grow. “By using a special cell culture model, we were able to identify additional components of these spheres”, says Andreas Schrötter, PhD student in the working group Morbus Alzheimer at the Institute for Functional Proteomics. Among other things, the scientists found the protein BLM, which is known from Bloom’s syndrome – an extremely rare hereditary disease, which is associated with dwarfism, immunodeficiency, and an increased risk of cancer. BLM is involved in DNA replication and repair in the nucleus.
The amount of FE65 determines the amount of BLM in the cell nucleus
Müller’s team took a closer look at the function of FE65. By means of genetic manipulation, the researchers generated cell cultures in which FE65 production was reduced. A smaller amount of FE65 led to a smaller amount of the protein BLM in the nucleus; instead, BLM collected in another part of the cell, the endoplasmic reticulum. The researchers also found a lower rate of DNA replication in the genetically modified cells. In this way, FE65 influences the replication of the genetic material via the BLM protein. When the researchers ramped FE65 production back up, the amount of BLM in the nucleus increased again.
FE65 as a possible trigger for Alzheimer’s
In patients with Alzheimer’s disease, the protein APP, an interaction partner of FE65, is altered. The interaction of the two molecules is important for the transport of FE65 into the nucleus, where it regulates cell division together with BLM. Müller’s team assumes that the altered APP-FE65 interaction mistakenly sends cells the signal to divide. Since nerve cells normally cannot divide, they degenerate instead and die. “This hypothesis, which we are pursuing in the Morbus Alzheimer working group, also delivers new starting points for potential therapies, which are urgently needed for Alzheimer’s disease,” says Dr. Müller. In future work, the team will also investigate whether and how the amount of BLM differs between Alzheimer’s patients and healthy subjects.
(Source: alphagalileo.org)
We’ve all been there: You’re at work deeply immersed in a project when suddenly you start thinking about your weekend plans. It happens because behind the scenes, parts of your brain are battling for control.

Now, University of Florida researchers and their colleagues are using a new technique that allows them to examine how parts of the brain battle for dominance when a person tries to concentrate on a task. Addressing these fluctuations in attention may help scientists better understand many neurological disorders such as autism, depression and mild cognitive impairment.
Mingzhou Ding, a professor of biomedical engineering, and Xiaotong Wen, an assistant research scientist of biomedical engineering, both of the University of Florida; Yijun Liu of the McKnight Brain Institute of the University of Florida and Peking University, Beijing; and Li Yao of Beijing Normal University, report their findings in the current issue of The Journal of Neuroscience.
Scientists know that different networks within the brain have distinct functions. Ding, Wen and their colleagues used a brain imaging technique called functional magnetic resonance imaging (fMRI), together with biostatistical methods, to examine interactions between a set of areas they call the task control network and another set of areas known as the default mode network.
The task control network regulates attention to surroundings, controlling concentration on a task such as doing homework, or listening for emotional cues during a conversation. The default mode network is thought to regulate self-reflection and emotion, and often becomes active when a person seems to be doing nothing else.
“We knew that the default mode network decreases in activity when a task is being performed, but we didn’t know why or how,” said Ding, a professor of biomedical engineering in the J. Crayton Pruitt department of biomedical engineering. “We also wanted to know what is driving that activity decrease.
“For a long time, the questions we are asking could not be answered.”
In the past, researchers could not distinguish the direction of interactions between regions of the brain; they could compute only a single number representing an average of the back-and-forth interactions. Ding and his colleagues used a new technique to untangle the interactions in each direction and show how the different brain regions influence one another.
In their study, the researchers used fMRI to examine the brains of people performing a task that required concentration. The scans reveal which parts of the brain are active and which are not while a person performs a given task, and this activity can be correlated with how successful the person is at the task. The researchers then applied the Granger causality technique to the fMRI data. Named for Nobel Prize-winning economist Clive Granger, the technique allows scientists to examine how one variable affects another; in this case, how one region of the brain influences another.
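The core idea behind Granger causality is simple: one time series “Granger-causes” another if its past values improve prediction of the other series beyond what that series’ own past already provides. The following minimal sketch illustrates this on synthetic data with lag-1 linear models; it is an illustration of the general principle, not the authors’ actual fMRI analysis pipeline, and all names in it are hypothetical.

```python
import math
import random

def granger_measure(src, dst):
    """Log-ratio of residual sums of squares: how much the past of `src`
    improves one-step prediction of `dst` beyond dst's own past.
    Lag-1 models without intercept, so inputs should be roughly zero-mean."""
    d_prev, s_prev, d_now = dst[:-1], src[:-1], dst[1:]
    # Restricted model: predict d_now from d_prev alone.
    Sdd = sum(v * v for v in d_prev)
    a_r = sum(p * c for p, c in zip(d_prev, d_now)) / Sdd
    rss_r = sum((c - a_r * p) ** 2 for p, c in zip(d_prev, d_now))
    # Full model: d_now ~ a*d_prev + b*s_prev (2x2 normal equations).
    Sss = sum(v * v for v in s_prev)
    Sds = sum(p * q for p, q in zip(d_prev, s_prev))
    c1 = sum(p * c for p, c in zip(d_prev, d_now))
    c2 = sum(q * c for q, c in zip(s_prev, d_now))
    det = Sdd * Sss - Sds * Sds
    a_f = (Sss * c1 - Sds * c2) / det
    b_f = (Sdd * c2 - Sds * c1) / det
    rss_f = sum((c - a_f * p - b_f * q) ** 2
                for p, q, c in zip(d_prev, s_prev, d_now))
    return math.log(rss_r / rss_f)

# Synthetic example: x drives y, but not the other way round.
random.seed(42)
x, y = [0.0], [0.0]
for _ in range(2000):
    x.append(0.7 * x[-1] + random.gauss(0, 1))
    y.append(0.4 * y[-1] + 0.6 * x[-2] + random.gauss(0, 1))

print("x -> y:", round(granger_measure(x, y), 3))  # clearly positive: x's past helps predict y
print("y -> x:", round(granger_measure(y, x), 3))  # near zero: y's past adds nothing for x
```

The asymmetry between the two directions is exactly the extra information the averaged measures of earlier studies could not capture: here the influence x → y is strong while y → x is negligible, even though a simple correlation between the two series would be the same in both directions.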
“People have hypothesized different functions for signals going in different directions,” Ding said. “We show that when the task control network suppresses the default mode network, the person can do the task better and faster. The better the default mode network is shut down, the better a person performs.”
However, when the default mode network is not sufficiently suppressed, it sends signals to the task control network that effectively distract the person, causing his or her performance to drop. So while the task control network suppresses the default mode network, the default mode network also interferes with the task control network.
“Your brain is a constant seesaw back and forth,” even when trying to concentrate on a task, Ding said.
The Granger causality technique may help researchers learn more about how neurological disorders work. Researchers have found that the default mode network remains unchanged in people with autism whether they are performing a task or interacting with the environment, which could explain symptoms such as difficulty reading social cues or being easily overwhelmed by sensory stimulation. Scientists have made similar findings with depression and mild cognitive impairment. However, until now no one has been able to address what areas of the brain might be regulating the default mode network and which might be interfering with that regulation.
“Now we are able to address these questions,” Ding said.
(Source: news.ufl.edu)