Neuroscience

Articles and news from the latest research reports.

Posts tagged science

252 notes

Antidepressant use in pregnancy may be associated with structural changes in the infant brain

A new study by University of North Carolina at Chapel Hill researchers found that children of depressed mothers treated with a group of antidepressants called selective serotonin reuptake inhibitors (SSRIs) during pregnancy were more likely to develop Chiari type 1 malformations than were children of mothers with no history of depression.

However, the researchers cautioned, doctors treating pregnant women for depression should not change their prescribing practices based on the results of this study.

“Our results can be interpreted two ways,” said Rebecca Knickmeyer, PhD, assistant professor of psychiatry in the UNC School of Medicine and lead author of the study published May 19 in the journal Neuropsychopharmacology. “Either SSRIs increase risk for Chiari type 1 malformations, or other factors associated with SSRI treatment during pregnancy, such as severity of depression itself, increase risk. Additional research into the effects of depression during pregnancy, with and without antidepressant treatment, is urgently needed.”

A Chiari type 1 malformation is a condition in which brain tissue in the cerebellum (a part of the brain that controls balance, motor systems, and some cognitive functions) extends into the spinal canal. About 5 percent of children have a Chiari type 1 malformation. Most do not have any problems because of it, but some develop symptoms such as headache and balance problems. In severe cases surgery may be necessary.

The study’s results are based on an analysis of magnetic resonance imaging (MRI) brain scans done on four groups of children at UNC Hospitals. Thirty-three children whose mothers were diagnosed with depression and took SSRI antidepressant medications, such as sertraline and fluoxetine, were compared to 66 children whose mothers had no history of depression. In addition, 30 children whose mothers were diagnosed with depression but did not take SSRIs were compared to 60 children whose mothers had no history of depression.

Eighteen percent of the children whose mothers took SSRIs during pregnancy had Chiari type 1 malformations, compared to 3 percent among children whose mothers had no history of depression. The rate of Chiari type 1 malformations was highest in children whose mothers reported a family history of depression in addition to treatment with SSRIs during pregnancy, suggesting an important role for genes as well as environment. Duration of SSRI exposure and SSRI exposure at conception also appeared to increase risk.
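The size of the reported difference can be illustrated with an odds ratio. This is a sketch only: the counts below are reconstructed from the reported rates (18 percent of 33 SSRI-exposed children, 3 percent of 66 comparison children) and are not taken from the paper itself.

```python
# Illustrative only: approximate counts reconstructed from the reported
# rates, not the study's actual data.
def odds_ratio(exposed_cases, exposed_n, control_cases, control_n):
    """Odds ratio for a 2x2 exposure/outcome table."""
    a = exposed_cases                 # exposed with the outcome
    b = exposed_n - exposed_cases     # exposed without the outcome
    c = control_cases                 # unexposed with the outcome
    d = control_n - control_cases     # unexposed without the outcome
    return (a / b) / (c / d)

ssri_cases = round(0.18 * 33)         # ~6 children with Chiari type 1
control_cases = round(0.03 * 66)      # ~2 children

print(round(odds_ratio(ssri_cases, 33, control_cases, 66), 1))
```

With these reconstructed counts the odds ratio comes out around 7, which conveys why the association drew attention despite the small sample.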

“These results raise many interesting questions, and there are many things we still don’t know,” said study co-author Samantha Meltzer-Brody, MD, MPH, associate professor of psychiatry in the UNC School of Medicine and director of UNC’s Perinatal Psychiatry Program. “For example, we do not know how many of these children will go on to develop symptoms of Chiari type 1 malformations. What we do know is that untreated depression can be very harmful for women and their babies, and so we strongly encourage pregnant women who are being treated for depression to continue with their treatment,” she said.

Knickmeyer said that a decision to use antidepressants during pregnancy must be based on the balance between risks and benefits and that it is critical that health care providers and the public get accurate information on this topic. She also noted that a diagnosis of Chiari type 1 is often delayed due to the non-specific nature of the symptoms. Thus, it may be valuable for families in this situation to know about the results of this study.

In addition, “Chiari type 1 malformations are somewhat common, but very little is known about what causes them,” said study co-author J. Keith Smith, MD, PhD, professor and vice chair of clinical research in UNC’s Department of Radiology. “Studies like this could give us new insight into that question.”

Filed under antidepressants SSRIs chiari I malformations pregnancy depression neuroscience science

124 notes

Cognitive test can differentiate between Alzheimer’s and normal aging

Researchers have developed a new cognitive test that can better determine whether memory impairments are due to very mild Alzheimer’s disease or the normal aging process.

Their study appears in the journal Neuropsychologia.

The Alzheimer’s Association estimates that the number of Americans living with Alzheimer’s disease will increase from 5 million in 2014 to as many as 16 million by 2050. Memory impairments and other early symptoms of Alzheimer’s are often difficult to differentiate from the effects of normal aging, making it hard for doctors to recommend treatment for those affected until the disease has progressed substantially.

Previous studies have shown that a part of the brain called the hippocampus is important to relational memory – the “ability to bind together various items of an event,” said Jim Monti, a University of Illinois postdoctoral research associate who led the work with psychology professor Neal Cohen, who is affiliated with the Beckman Institute at Illinois. Being able to connect a person’s name with his or her face is one example of relational memory. These two pieces of information are stored in different parts of the brain, but the hippocampus “binds” them so that the next time you see that person, you remember his or her name, Monti said.

Previous research has shown that people with Alzheimer’s disease often have impairments in hippocampal function. So the team designed a task that tested participants’ relational memory abilities.

Participants were shown a circle divided into three parts, each having a unique design. Similar to the process of name-and-face binding, the hippocampus works to bind these three pieces of the circle together. After the participants studied a circle, they would pick its exact match from a series of 10 circles, presented one at a time.
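The task structure described above can be sketched in a few lines. This is a hypothetical illustration of the design, not the researchers' actual stimulus code: a studied "circle" is modelled as a tuple of three segment designs, and lures differ from it in at least one segment.

```python
import random

# Hypothetical sketch of the three-segment circle task: the participant
# studies a circle (three design IDs), then must pick its exact match
# from a series of options shown one at a time.
DESIGNS = list(range(20))  # pool of unique segment designs

def make_lure(target):
    """Return a circle differing from the target in one random segment."""
    lure = list(target)
    i = random.randrange(3)
    lure[i] = random.choice([d for d in DESIGNS if d != lure[i]])
    return tuple(lure)

def make_trial(rng_seed=0):
    """Build one trial: a target circle plus 10 response options."""
    random.seed(rng_seed)
    target = tuple(random.sample(DESIGNS, 3))
    options = [make_lure(target) for _ in range(9)] + [target]
    random.shuffle(options)   # presented one at a time, in random order
    return target, options

target, options = make_trial()
assert target in options      # exactly-matching option is always present
```

Because every lure shares most of its segments with the target, picking the exact match requires binding the three segments together, which is the relational-memory demand the hippocampus is thought to support.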

People with very mild Alzheimer’s disease did worse overall on the task than those in the healthy aging group, who, in turn, did worse than a group of young adults. The task also revealed an additional memory impairment unique to those with very mild Alzheimer’s disease, indicating that the changes in cognition that result from Alzheimer’s are qualitatively different from those of healthy aging. This unique impairment allowed researchers to statistically differentiate those who did and those who did not have Alzheimer’s more accurately than some of the classical tests used for Alzheimer’s diagnosis, Monti said.

“That was illuminating and will serve to inform future work aimed at understanding and detecting the earliest cognitive manifestations of Alzheimer’s disease,” Monti said.

Although this new tool could eventually be used in clinical practice, more studies need to be done to refine the test, he said.

“We’d like to eventually study populations with fewer impairments and bring in neuroimaging techniques to better understand the initial changes in brain and cognition that are due to Alzheimer’s disease,” Monti said.

Filed under aging alzheimer's disease hippocampus psychology neuroscience science

103 notes

Compound Reverses Symptoms of Alzheimer’s Disease in Mice

A molecular compound developed by Saint Louis University scientists restored learning, memory and appropriate behavior in a mouse model of Alzheimer’s disease, according to findings in the May issue of the Journal of Alzheimer’s Disease. The molecule also reduced inflammation in the part of the brain responsible for learning and memory.

The paper, authored by a team of scientists led by Susan Farr, Ph.D., research professor of geriatrics at Saint Louis University, is the second mouse study that supports the potential therapeutic value of an antisense compound in treating Alzheimer’s disease in humans.

"It reversed learning and memory deficits and brain inflammation in mice that are genetically engineered to model Alzheimer’s disease," Farr said. "Our current findings suggest that the compound, which is called antisense oligonucleotide (OL-1), is a potential treatment for Alzheimer’s disease."

Farr cautioned that the experiment was conducted in a mouse model. As with any drug, toxicity tests would need to be completed before an antisense compound could be tested in human clinical trials.

Antisense is a short strand of nucleotides that binds to messenger RNA, launching a cascade of cellular events that turns off a specific gene.

In this case, OL-1 blocks the translation of RNA, which triggers a process that keeps excess amyloid beta protein from being produced. The specific antisense significantly decreased the overexpression of a substance called amyloid beta protein precursor, which normalized the amount of amyloid beta protein in the body. Excess amyloid beta protein is believed to be partially responsible for the formation of plaque in the brain of patients who have Alzheimer’s disease.
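The binding step can be sketched with a toy complementarity check. This is an illustration of the general antisense principle only: the sequence below is made up and is not the OL-1 sequence, and real hybridization involves more than exact string complementarity.

```python
# Illustrative sketch: an antisense oligonucleotide is the reverse
# complement of its mRNA target, so the two strands can pair up and
# block translation of the message.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna):
    """Reverse complement of an RNA sequence (read 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna))

def hybridizes(oligo, mrna):
    """True if the oligo is exactly complementary to the mRNA fragment."""
    return oligo == antisense(mrna)

fragment = "AUGGCCUUC"          # hypothetical mRNA fragment, not OL-1
oligo = antisense(fragment)     # "GAAGGCCAU"
assert hybridizes(oligo, fragment)
```

When the oligo pairs with its target message, translation of that message is blocked, which is how less of the precursor protein, and hence less amyloid beta, ends up being produced.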

Scientists tested OL-1 in a type of mouse that overexpresses a mutant form of the human amyloid beta precursor gene. Previously they had tested the substance in a mouse model that has a natural mutation causing it to overproduce mouse amyloid beta. Like people who have Alzheimer’s disease, both types of mice have age-related impairments in learning and memory, elevated levels of amyloid beta protein that stay in the brain and increased inflammation and oxidative damage to the hippocampus — the part of the brain responsible for learning and memory.

"To be effective in humans, OL-1 would need to be effective at suppressing production of human amyloid beta protein," Farr said.

Scientists compared the mice that were genetically engineered to overproduce human amyloid beta protein with a wild strain, which served as the control. All of the wild-strain mice received random antisense, while about half of the genetically engineered mice received random antisense and half received OL-1.

The mice were given a series of tests designed to measure memory, learning and appropriate behavior, such as going through a maze, exploring an unfamiliar location and recognizing an object.

Scientists found that learning and memory improved in the genetically engineered mice that received OL-1 compared to the genetically engineered mice that received random antisense. Learning and memory were the same among genetically engineered mice that received OL-1 and wild mice that received random antisense.

They also tested the effect of administering the drug through the central nervous system, so it crossed the blood-brain barrier to enter the brain directly, and of giving it through a vein in the tail, so it circulated through the bloodstream. They found that where the drug was injected had little effect on learning and memory.

"Our findings reinforced the importance of amyloid beta protein in the Alzheimer’s disease process. They suggest that an antisense that targets the precursor to amyloid beta protein is a potential therapy to explore for reversing symptoms of Alzheimer’s disease," Farr said.

(Source: slu.edu)

Filed under alzheimer's disease antisense oligonucleotide memory inflammation oxidative stress neuroscience science

177 notes

Altruism/egoism: a question of points of view

Different brain structures are at the basis of these behaviours

Sociality, cooperation and “prosocial” behaviours are the foundation of human society (and of the extraordinary development of our brain) and yet, taken individually, people often show huge variation in terms of altruism/egoism, both among individuals and in the same individual at different moments in time. What causes these differences in behaviour? An answer may be found by observing the activity of the brain, as was done by a group of researchers from SISSA in Trieste (in collaboration with the Human-Computer Interaction Lab, HCI lab, of the University of Udine). The brain circuits that are activated suggest that each of the two behaviour types corresponds to a cognitive analysis that emphasizes different aspects of the same situation.

It depends on how we experience the situation, or rather, on how our brain decides to experience it: when in a situation of need, will we adopt an altruistic behaviour, at the cost of putting our lives at risk, or will we behave selfishly? People make extremely variable decisions in such cases: some have a tendency to be always altruistic or always selfish, and some change their behaviour depending on the situation. What happens in a person’s mind when he/she decides to adopt one style rather than the other? This is the question that Giorgia Silani, a neuroscientist at SISSA, and colleagues addressed in a study just published in NeuroImage: “Even though prosocial behaviours are crucial to human society, and most probably helped to mould our cognitive system, we don’t always behave altruistically,” explains Silani. “We wanted to see what changes occur in our brain between one type of behaviour and the other”.

Silani and colleagues used a brain imaging technique which allows investigators to isolate the most active brain structures during a task. “In our experiments the participants were immersed in a virtual reality scenario in which they had to decide whether to help someone, and potentially put their own lives in danger, or save themselves without considering the other person” explains Silani. One innovative feature of the study is in fact the possibility of creating “ecological” experimental conditions, that is, as close as possible to a real situation.

“Traditionally, studies in this field used “games” in which participants had to allocate monetary gains, but many researchers including ourselves believe that these conditions are too artificial and tell us very little about altruism and egoism in daily life. However, obvious ethical constraints make it impossible to design realistic field experiments. Virtual reality has proved to be a good compromise that preserves the authenticity of the situation without putting anyone in danger”.

Silani and colleagues observed that significantly different brain circuits were activated in the brains of the tested subjects during the two types of behaviour (selfish/altruistic). In the first case the most active area was the “salience network” (anterior insula, anterior cingulate cortex), whereas the most intensely involved structures in altruistic behaviour were the prefrontal cortex and the temporo-parietal junction.

“The salience network, which serves to increase the “conspicuity” of stimuli for the cognitive system, could make the dangers of the situation more apparent to the subject, leading the individual to behave in a selfish manner. Conversely, the areas that are most active when a subject decides to behave altruistically are the ones that the scientific literature commonly associates with the ability to take another person’s point of view, which would therefore make the subject more empathic and willing to act for the benefit of others”.

“Ours is the first study to measure neurophysiological data during decision-making in life-threatening situations,” concludes Silani. In addition to Silani, who coordinated the study, the SISSA team also includes Marco Zanon, first author, and Giovanni Novembre, whereas the HCI Lab investigators are Nicola Zangrando and Luca Chittaro.

Filed under prosocial behavior brain activity virtual reality salience network prefrontal cortex neuroscience science

122 notes

Alpha waves organize a to-do list for the brain

In his search to understand the role and function of brain waves, neuroscientist Ole Jensen (Radboud University) postulates a new theory on how the alpha wave controls attention to visual signals. His theory is published in Trends in Neurosciences on May 20. Alpha waves appear to be even more active and important than previously thought.


Our brain cells ‘spark’ all the time. From this electrical activity, brain waves emerge: oscillations at different bandwidths. And just as a radio station uses different frequencies to carry specific information far from the emitting source, so does the brain. And just as radio listeners with a certain musical preference tune in to the frequency that carries the music they prefer, brain areas tune in to the wavelength relevant to their functioning.

Alpha waves aren’t boring
Ole Jensen, professor of Neuronal Oscillations at Radboud University’s Donders Institute for Brain, Cognition and Behaviour, tries to figure out how this network of sending and receiving information through oscillations works in detail. Earlier he discovered a novel role for the alpha wave, which was long thought to be a boring wave that emerges when the brain runs idle and a person is dozing off. Jensen shifted this interpretation by showing the importance of the alpha frequency: it helps to shut down brain areas that are irrelevant for a given task. It helps us concentrate on what is really important at that moment.

To-do list
In the Trends in Neurosciences paper that appeared today, Jensen postulates a new theory for how this actually works in a visual task. ‘We think that different phases of the alpha wave encode different parts of a visual scene. It helps break the visual information down into small jobs and then perform those tasks in a specific order. A to-do list for your visual attention system: focus on the face, focus on the hand, focus on the glass, look around. And then all over again.’

Jensen is now planning to test this new interpretation of the alpha wave in both animals and humans.

(Source: ru.nl)

Filed under brainwaves alpha oscillations visual attention visual processing neuroscience science

219 notes

Scientists discover how to restore ability to grasp in paralysed hand

Pioneering research by scientists at a North East university could help people who have been paralysed to regain the use of their hands.

The researchers at Newcastle University have been able to restore the ability to grab objects with a paralysed hand using spinal cord stimulation.

The work, which has been funded by the Wellcome Trust, could help stroke and spinal injury victims as the research has shown that by connecting the brain to a computer and then the computer to the spinal cord, it is possible to restore movement.

The discovery opens up the possibility of new treatments within the next few years which could help stroke victims or those with spinal cord injuries regain some movement in their arms and hands as currently there is no cure for upper limb paralysis.

The work, led by Dr Andrew Jackson, Research Fellow at Newcastle University and Dr Jonas Zimmermann, now at Brown University in America, is published in the journal Frontiers in Neuroscience.

Read more

Filed under spinal cord stimulation spinal cord injury BCI paralysis motor cortex motor movement neuroscience science

194 notes

Optical brain scanner goes where other brain scanners can’t

Scientists have advanced a brain-scanning technology that tracks what the brain is doing by shining dozens of tiny LED lights on the head. This new generation of neuroimaging compares favorably to other approaches but avoids the radiation exposure and bulky magnets the others require, according to new research at Washington University School of Medicine in St. Louis.

The new optical approach to brain scanning is ideally suited for children and for patients with electronic implants, such as pacemakers, cochlear implants and deep brain stimulators (used to treat Parkinson’s disease). The magnetic fields in magnetic resonance imaging (MRI) often disrupt either the function or safety of implanted electrical devices, whereas there is no interference with the optical technique.

The new technology is called diffuse optical tomography (DOT). While researchers have been developing it for more than 10 years, the method had been limited to small regions of the brain. The new DOT instrument covers two-thirds of the head and for the first time can image brain processes taking place in multiple regions and brain networks such as those involved in language processing and self-reflection (daydreaming).

The results are now available online in Nature Photonics.

“When the neuronal activity of a region in the brain increases, highly oxygenated blood flows to the parts of the brain doing more work, and we can detect that,” said senior author Joseph Culver, PhD, associate professor of radiology. “It’s roughly akin to spotting the rush of blood to someone’s cheeks when they blush.”

The technique works by detecting light transmitted through the head and capturing the dynamic changes in the colors of the brain tissue.

Although DOT technology now is used in research settings, it has the potential to be helpful in many medical scenarios as a surrogate for functional MRI, the most commonly used imaging method for mapping human brain function. Functional MRI also tracks activity in the brain via changes in blood flow. In addition to greatly adding to our understanding of the human brain, fMRI is used to diagnose and monitor brain disease and therapy.

Another commonly used method for mapping brain function is positron emission tomography (PET), which involves radiation exposure. Because DOT technology does not use radiation, multiple scans performed over time could be used to monitor the progress of patients treated for brain injuries, developmental disorders such as autism, neurodegenerative disorders such as Parkinson’s, and other diseases.

Unlike fMRI and PET, DOT technology is designed to be portable, so it could be used at a patient’s bedside or in the operating room.

“With the new improvements in image quality, DOT is moving significantly closer to the resolution and positional accuracy of fMRI,” said first author Adam T. Eggebrecht, PhD, a postdoctoral research fellow. “That means DOT can be used as a stronger surrogate in situations where fMRI cannot be used.”

The researchers have many ideas for applying DOT, including learning more about how deep brain stimulation helps Parkinson’s patients, imaging the brain during social interactions, and studying what happens to the brain during general anesthesia and when the heart is temporarily stopped during cardiac surgery.

For the current study, the researchers validated the performance of DOT by comparing its results to fMRI scans. Data was collected using the same subjects, and the DOT and fMRI images were aligned. They looked for Broca’s area, a key area of the frontal lobe used for language and speech production. The overlap between the brain region identified as Broca’s area by DOT data and by fMRI scans was about 75 percent.

In a second set of tests, researchers used DOT and fMRI to detect brain networks that are active when subjects are resting or daydreaming. Researchers’ interests in these networks have grown enormously over the past decade as the networks have been tied to many different aspects of brain health and sickness, such as schizophrenia, autism and Alzheimer’s disease. In these studies, the DOT data also showed remarkable similarity to fMRI — picking out the same cluster of three regions in both hemispheres.

“With the improved image quality of the new DOT system, we are getting much closer to the accuracy of fMRI,” Culver said. “We’ve achieved a level of detail that, going forward, could make optical neuroimaging much more useful in research and the clinic.”

While DOT doesn’t let scientists peer very deeply into the brain, researchers can get reliable data to a depth of about one centimeter of tissue. That centimeter contains some of the brain’s most important and interesting areas, with many higher brain functions, such as memory, language and self-awareness, represented.

During DOT scans, the subject wears a cap composed of many light sources and sensors connected to cables. The full-scale DOT unit takes up an area slightly larger than an old-fashioned phone booth, but Culver and his colleagues have built versions of the scanner mounted on wheeled carts. They continue to work to make the technology more portable.
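The kind of region overlap reported for Broca’s area can be illustrated with a Dice-style coefficient between two binary voxel masks. This is a sketch under assumptions: the paper's exact overlap metric isn't stated here, and the voxel sets below are toy data, not the study's.

```python
# Illustrative only: quantifying agreement between two imaging methods
# as a Dice coefficient over the sets of voxels each method labels.
def dice(mask_a, mask_b):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy masks: 8 shared voxels, 21 labelled voxels in total.
fmri_voxels = set(range(10))      # voxels fMRI labels as Broca's area
dot_voxels = set(range(2, 13))    # voxels DOT labels as Broca's area

print(round(dice(fmri_voxels, dot_voxels), 2))
```

With these toy masks the coefficient comes out near 0.76, roughly the level of DOT/fMRI agreement the article describes.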

Optical brain scanner goes where other brain scanners can’t

Scientists have advanced a brain-scanning technology that tracks what the brain is doing by shining dozens of tiny LED lights on the head. This new generation of neuroimaging compares favorably to other approaches but avoids the radiation exposure and bulky magnets the others require, according to new research at Washington University School of Medicine in St. Louis.

The new optical approach to brain scanning is ideally suited for children and for patients with electronic implants, such as pacemakers, cochlear implants and deep brain stimulators (used to treat Parkinson’s disease). The magnetic fields in magnetic resonance imaging (MRI) often disrupt either the function or safety of implanted electrical devices, whereas there is no interference with the optical technique.

The new technology is called diffuse optical tomography (DOT). While researchers have been developing it for more than 10 years, the method had been limited to small regions of the brain. The new DOT instrument covers two-thirds of the head and for the first time can image brain processes taking place in multiple regions and brain networks such as those involved in language processing and self-reflection (daydreaming).

The results are now available online in Nature Photonics.

“When the neuronal activity of a region in the brain increases, highly oxygenated blood flows to the parts of the brain doing more work, and we can detect that,” said senior author Joseph Culver, PhD, associate professor of radiology. “It’s roughly akin to spotting the rush of blood to someone’s cheeks when they blush.”

The technique works by detecting light transmitted through the head and capturing the dynamic changes in the colors of the brain tissue. 

Although DOT technology now is used in research settings, it has the potential to be helpful in many medical scenarios as a surrogate for functional MRI, the most commonly used imaging method for mapping human brain function. Functional MRI also tracks activity in the brain via changes in blood flow. In addition to greatly adding to our understanding of the human brain, fMRI is used to diagnose and monitor brain disease and therapy.

Another commonly used method for mapping brain function is positron emission tomography (PET), which involves radiation exposure. Because DOT technology does not use radiation, multiple scans performed over time could be used to monitor the progress of patients treated for brain injuries, developmental disorders such as autism, neurodegenerative disorders such as Parkinson’s, and other diseases.

Unlike fMRI and PET, DOT technology is designed to be portable, so it could be used at a patient’s bedside or in the operating room.

“With the new improvements in image quality, DOT is moving significantly closer to the resolution and positional accuracy of fMRI,” said first author Adam T. Eggebrecht, PhD, a postdoctoral research fellow. “That means DOT can be used as a stronger surrogate in situations where fMRI cannot be used.”

The researchers have many ideas for applying DOT, including learning more about how deep brain stimulation helps Parkinson’s patients, imaging the brain during social interactions, and studying what happens to the brain during general anesthesia and when the heart is temporarily stopped during cardiac surgery.

For the current study, the researchers validated the performance of DOT by comparing its results to fMRI scans. Data were collected from the same subjects, and the DOT and fMRI images were aligned. The researchers focused on Broca’s area, a key region of the frontal lobe involved in language and speech production. The overlap between the brain region identified as Broca’s area by DOT data and by fMRI scans was about 75 percent.
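Region overlap of this kind is commonly scored with the Dice coefficient, which compares the shared voxels of two regions against their combined size. A minimal sketch follows; the voxel coordinates are made up purely for illustration, not taken from the study.

```python
def dice_overlap(voxels_a, voxels_b):
    """Dice coefficient between two voxel sets: 1.0 = identical, 0.0 = disjoint."""
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel coordinates for the region each modality labels Broca's area.
broca_dot = {(1, 0), (1, 1), (2, 1), (2, 2)}
broca_fmri = {(1, 1), (2, 1), (2, 2), (3, 2)}
print(dice_overlap(broca_dot, broca_fmri))  # 0.75
```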

In a second set of tests, researchers used DOT and fMRI to detect brain networks that are active when subjects are resting or daydreaming. Researchers’ interest in these networks has grown enormously over the past decade as the networks have been tied to many aspects of brain health and disease, such as schizophrenia, autism and Alzheimer’s disease. In these studies, the DOT data also showed remarkable similarity to fMRI — picking out the same cluster of three regions in both hemispheres.

“With the improved image quality of the new DOT system, we are getting much closer to the accuracy of fMRI,” Culver said. “We’ve achieved a level of detail that, going forward, could make optical neuroimaging much more useful in research and the clinic.”

While DOT doesn’t let scientists peer very deeply into the brain, researchers can get reliable data to a depth of about one centimeter of tissue. That centimeter contains some of the brain’s most important and interesting areas, including regions that support higher functions such as memory, language and self-awareness.

During DOT scans, the subject wears a cap composed of many light sources and sensors connected to cables. The full-scale DOT unit takes up an area slightly larger than an old-fashioned phone booth, but Culver and his colleagues have built versions of the scanner mounted on wheeled carts. They continue to work to make the technology more portable.

Filed under brain scans diffuse optical tomography neuroimaging brain tissue neuroscience science

136 notes

Structurally-Constrained Relationships between Cognitive States in the Human Brain
The anatomical connectivity of the human brain supports diverse patterns of correlated neural activity that are thought to underlie cognitive function. In a manner sensitive to underlying structural brain architecture, we examine the extent to which such patterns of correlated activity systematically vary across cognitive states. Anatomical white matter connectivity is compared with functional correlations in neural activity measured via blood oxygen level dependent (BOLD) signals. Functional connectivity is separately measured at rest, during an attention task, and during a memory task. We assess these structural and functional measures within previously-identified resting-state functional networks, denoted task-positive and task-negative networks, that have been independently shown to be strongly anticorrelated at rest but also involve regions of the brain that routinely increase and decrease in activity during task-driven processes. We find that the density of anatomical connections within and between task-positive and task-negative networks is differentially related to strong, task-dependent correlations in neural activity. The space mapped out by the observed structure-function relationships is used to define a quantitative measure of separation between resting, attention, and memory states. We find that the degree of separation between states is related to both general measures of behavioral performance and relative differences in task-specific measures of attention versus memory performance. These findings suggest that the observed separation between cognitive states reflects underlying organizational principles of human brain structure and function.
Full Article
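The functional connectivity described in the abstract boils down to pairwise Pearson correlations between regional BOLD time series: strongly anticorrelated task-positive and task-negative networks would appear as entries near -1 in the resulting matrix. A minimal, illustrative sketch (the time series here are synthetic, not BOLD data):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(series):
    """Functional connectivity: pairwise correlations between regional signals."""
    n = len(series)
    return [[pearson(series[i], series[j]) for j in range(n)] for i in range(n)]
```

Comparing such matrices computed at rest, during the attention task, and during the memory task is what lets the authors quantify separation between cognitive states.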


Filed under cognitive function attention memory neural activity performance psychology neuroscience science

108 notes

Revealing Rembrandt
The power and significance of artwork in shaping human cognition is self-evident. The starting point for our empirical investigations is the view that the task of neuroscience is to integrate itself with other forms of knowledge, rather than to seek to supplant them. In our recent work, we examined a particular aspect of the appreciation of artwork using present-day functional magnetic resonance imaging (fMRI). Our results emphasized the continuity between viewing artwork and other human cognitive activities. We also showed that appreciation of a particular aspect of artwork, namely authenticity, depends upon co-ordinated activity between the brain regions involved in decision making and those responsible for processing visual information. The findings about brain function probably have no specific consequences for understanding how people respond to the art of Rembrandt in comparison with their response to other artworks. However, the use of images of Rembrandt’s portraits, his most intimate and personal works, clearly had a significant impact upon our viewers, even though they were confined to the interior of an MRI scanner at the time of viewing. Neuroscientific studies of humans viewing artwork have the capacity to reveal the diversity of human cognitive responses that may be induced by external advice or context as people view artwork in a variety of frameworks and settings.
Full Article


Filed under brain activity neuroimaging art occipital cortex visual processing psychology neuroscience science

78 notes

Infants Benefit from Implants with More Frequency Sounds
A new study from a UT Dallas researcher demonstrates the importance of considering developmental differences when creating programs for cochlear implants in infants.
Dr. Andrea Warner-Czyz, assistant professor in the School of Behavioral and Brain Sciences, recently published the research in the Journal of the Acoustical Society of America.
“This is the first study to show that infants process degraded speech that simulates a cochlear implant differently than older children and adults, which begs for new signal processing strategies to optimize the sound delivered to the cochlear implant for these young infants,” Warner-Czyz said.
Cochlear implants, which are surgically placed in the inner ear, provide the ability to hear for some people with severe to profound hearing loss. Because of technological and biological limitations, people with cochlear implants hear differently than those with normal hearing.
Think of a piano, which typically has 88 keys with each representing a note. The technology in a cochlear implant can’t play every key, but instead breaks them into groups, or channels. For example, a cochlear implant with 22 channels would put four notes into each group. If any keys within a group are played, all four notes are activated. Although the general frequency can be heard, the fine detail of the individual notes is lost.
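The piano analogy maps onto a simple grouping rule: each of the 88 keys is assigned to one of 22 channels, and any key in a group activates the whole channel. The sketch below is purely illustrative of that analogy; real implant processors assign frequencies to non-uniform bands, and the function names here are invented for the example.

```python
def key_to_channel(key, n_keys=88, n_channels=22):
    """Map a piano key index (0-87) to the implant channel that carries it."""
    return key * n_channels // n_keys

def channel_members(channel, n_keys=88, n_channels=22):
    """All keys whose frequencies collapse into a single channel."""
    return [k for k in range(n_keys)
            if key_to_channel(k, n_keys, n_channels) == channel]

print(channel_members(0))  # [0, 1, 2, 3] -- four keys share one channel
```

Playing any one of those four keys produces the same channel activation, which is why the fine detail of individual notes is lost.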
Two of the major components necessary for understanding speech are the rhythm and the frequencies of the sound. Timing remains fairly accurate in cochlear implants, but some frequencies disappear as they are grouped.
Using more than eight or nine channels does not necessarily improve speech perception in adults. This study is one of the first to examine how this signal degradation affects speech perception in infants.
Infants pay greater attention to new sounds, so researchers compared how long a group of 6-month-olds focused on a speech sound they had been familiarized with — “tea” — to a new speech sound, “ta.”
The infants spent more time paying attention to “ta,” demonstrating they could hear the difference between the two. Researchers repeated the experiment with speech sounds that were altered to sound as if they had been processed by a 16- or 32-channel cochlear implant.
The infants responded to the sounds that imitated a 32-channel implant the same as when they heard the normal sounds. But the infants did not show a difference with the sounds that imitated a 16-channel implant.
“These results suggest that 6-month-old infants need less distortion and more frequency information than older children and adults to discriminate speech,” Warner-Czyz said. “Infants are not just little versions of children or adults. They do not have the experience with listening or language to fill in the gaps, so they need more complete speech information to maximize their communication outcomes.”
Clinicians need to consider these developmental differences when working with very young cochlear implant recipients, Warner-Czyz said.


Filed under implants cochlear implants speech speech perception hearing neuroscience science
