Neuroscience

Articles and news from the latest research reports.

Posts tagged EEG

182 notes

Whites of Their Eyes: Study Finds Infants Respond to Social Cues From Sclera

Humans are the only primates with large, highly visible sclera – the white part of the eye.

The eye plays a significant role in the expressiveness of a face, and how much sclera is shown can indicate the emotions or behavioral attitudes of a person. Wide-open eyes, exposing a lot of white, indicate fear or surprise. A thinner slit of exposed eye, such as when smiling, expresses happiness or joy. Averted eyes, as well as direct eye contact, can mean several things. So the eye white, or how much of it is shown and at what angle, plays a role in the social and cooperative interactions among humans.

Adult humans are well attuned to social cues involving the eye and use them, along with a great range of other facial and body features, to respond appropriately during social interactions. This sensitivity to eye cues is hard-wired into the adult brain: adults respond to social eye cues even without consciously seeing them.

But it is unclear whether the ability to unconsciously distinguish between different social cues indicated by the eyes exists early in development and can therefore be considered a key feature of the human social makeup.

A new University of Virginia and Max Planck Institute study, published online this week in the journal Proceedings of the National Academy of Sciences, finds that the ability to respond to eye cues apparently develops during infancy – at seven or so months.

“Our study provides developmental evidence for the notion that humans possess specific brain processes that allow them to automatically respond to eye cues,” said Tobias Grossmann, a University of Virginia developmental psychologist and one of the study’s authors.

Grossmann and his Max Planck Institute colleague Sarah Jessen used electroencephalography, or EEG, to measure the brain activity of 7-month-old infants while showing images of eyes wide open, narrowly opened, and with direct or averted gazes.

They found that the infants’ brains responded differently depending on the expression suggested by the eyes they viewed, which were shown without any other facial features. The infants viewed each eye image for only 50 milliseconds – far less time than an infant of this age needs to consciously perceive this kind of visual information.

“Their brains clearly responded to social cues conveyed through the eyes, indicating that even without conscious awareness, human infants are able to detect subtle social cues,” Grossmann said.

The infants’ brain responses to sclera depicting fearful (wide-eyed) expressions differed from their responses to non-fearful sclera. Their brain responses also differed when viewing direct-gaze eyes compared with averted-gaze eyes.

“This demonstrates that, like adults, infants are sensitive to eye expressions of fear and direction of focus, and that these responses operate without conscious awareness,” Grossmann said. “The existence of such brain mechanisms in infants likely provides a vital foundation for the development of social interactive skills in humans.”

The infants in the study wore an EEG cap, like a small hat, fitted with sensors that detect brain signals. They sat in their parents’ laps during testing.
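
EEG studies of this kind typically compare event-related potentials (ERPs) averaged within each condition. As a minimal illustration – not the authors’ actual pipeline – condition-averaged epochs and their difference wave can be computed with NumPy, assuming the recording has already been segmented into an (n_trials, n_channels, n_times) array:

```python
import numpy as np

def erp_contrast(epochs, labels):
    """Condition-average EEG epochs and return their difference wave.

    epochs: array of shape (n_trials, n_channels, n_times)
    labels: per-trial condition codes, e.g. 0 = non-fearful, 1 = fearful
    """
    erp_a = epochs[labels == 0].mean(axis=0)  # average ERP, condition A
    erp_b = epochs[labels == 1].mean(axis=0)  # average ERP, condition B
    return erp_b - erp_a  # (n_channels, n_times) difference wave
```

Researchers then test whether this difference wave deviates reliably from zero at particular channels and latencies.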

Filed under social perception social interaction brain activity infants EEG sclera neuroscience science

97 notes

Judgment and decision-making: brain activity indicates there is more than meets the eye



People make immediate judgments about images they are shown, which could affect their decisions, even before their brains have had time to consciously process the information, a study of brainwaves led by the University of Melbourne has found.

Published today in PLOS ONE, the study is the first in the world to show that it is possible to predict abstract judgments from brain waves, even though people were not conscious of making such judgments. The study also increases our understanding of impulsive behaviours and how to regulate them.

It found that researchers could predict from participants’ brain activity how exciting they found a particular image to be, and whether a particular image made them think more about the future or the present. This was true even though the brain activity was recorded before participants knew they would be asked to make these judgments.

Lead authors Dr Stefan Bode from the Melbourne School of Psychological Sciences and Dr Carsten Murawski from the University of Melbourne Department of Finance said these findings illustrated there was more information encoded in brain activity than previously assumed.

“We have found that brain activity when looking at images can encode judgments such as time reference, even when the viewer is not aware of making such judgments. Moreover, our results suggest that certain images can prompt a person to think about the present or the future,” they said.

The authors said the results contributed to our understanding of impulsive behaviours, especially where those behaviours were caused by ‘prompts’ in the world around us. 


“For instance, consider someone trying to quit gambling who sees a gambling advertisement on TV. Our results suggest that even if this person is trying to ignore the ad, their brain may be unconsciously processing it and making it more likely that they will relapse,” he said. 

The researchers used electroencephalography (EEG) to measure the electrical activity of people’s brains while they looked at different pictures. The pictures displayed images of food, social scenes or status symbols like cars and money.

After the EEG, researchers showed participants the same pictures again and asked questions about each image, such as how exciting they thought the image was or how strongly the image made them think of either the present or the future.

A statistical ‘decoding’ technique was then used to predict the judgments participants made about each of the pictures from the EEG brain activity that was recorded.

Co-author Daniel Bennett said just as certain prompts might cause impulsive behaviour, images could be used to prompt people to be more patient by regulating impulse control.

“Our results suggest that prompting people with images related to the future might cause processing outside awareness that could make it easier to think about the future. In theory, this could make people less impulsive and more likely to make healthy long-term decisions. These are hypotheses we will try to test in the future,” he said. 
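
The statistical ‘decoding’ step described in this study – predicting each participant’s judgment from the recorded brain activity – is typically a cross-validated classification analysis. A minimal sketch, using a leave-one-out nearest-centroid decoder rather than the authors’ actual method, might look like:

```python
import numpy as np

def loo_decode(X, y):
    """Leave-one-out nearest-centroid decoding accuracy.

    X: (n_trials, n_features) matrix of EEG features per trial
    y: binary judgment labels (0/1), one per trial
    """
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False  # hold out trial i
        # class centroids computed from all remaining trials
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        # predict whichever centroid the held-out trial is closer to
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += (pred == y[i])
    return correct / n
```

Above-chance accuracy on held-out trials is what licenses the claim that the judgment was already encoded in the brain activity.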

The research was done in collaboration with the University of Cologne, Germany.

Filed under decision making brain activity brainwaves EEG vision neuroscience science

312 notes

Improving Babies’ Language Skills Before They’re Even Old Enough to Speak

In the first months of life, when babies begin to distinguish sounds that make up language from all the other sounds in the world, they can be trained to more effectively recognize which sounds “might” be language, accelerating the development of the brain maps which are critical to language acquisition and processing, according to new Rutgers research.

The study by April Benasich and colleagues of Rutgers University-Newark is published in the October 1 issue of the Journal of Neuroscience.

The researchers found that when 4-month-old babies learned to pay attention to increasingly complex non-language audio patterns and were rewarded for correctly shifting their eyes to a video reward when the sound changed slightly, their brain scans at 7 months old showed they were faster and more accurate at detecting other sounds important to language than babies who had not been exposed to the sound patterns.

“Young babies are constantly scanning the environment to identify sounds that might be language,” says Benasich, who directs the Infancy Studies Laboratory at the University’s Center for Molecular and Behavioral Neuroscience. “This is one of their key jobs – as between 4 and 7 months of age they are setting up their pre-linguistic acoustic maps. We gently guided the babies’ brains to focus on the sensory inputs which are most meaningful to the formation of these maps.”

Acoustic maps are pools of interconnected brain cells that an infant brain constructs to allow it to decode language both quickly and automatically – and well-formed maps allow faster and more accurate processing of language, a function that is critical to optimal cognitive functioning. Benasich says babies of this particular age may be ideal for this kind of training.

“If you shape something while the baby is actually building it,” she says, “it allows each infant to build the best possible auditory network for his or her particular brain. This provides a stronger foundation for any language (or languages) the infant will be learning. Compare the baby’s reactions to language cues to an adult driving a car. You don’t think about specifics like stepping on the gas or using the turn signal. You just perform them. We want the babies’ recognition of any language-specific sounds they hear to be just that automatic.”

Benasich says she was able to accelerate and optimize the construction of babies’ acoustic maps, as compared to those of infants who either passively listened or received no training, by rewarding the babies with a brief colorful video when they responded to changes in the rapidly varying sound patterns. The sound changes could take just tens of milliseconds, and became more complex as the training progressed.

Looking for lasting improvement in language skills

“While playing this fun game we can convey to the baby, ‘Pay attention to this. This is important. Now pay attention to this. This is important,’” says Benasich. “This process helps the baby to focus tightly on sounds in the environment that ‘may’ have critical information about the language they are learning. Previous research has shown that accurate processing of these tens-of-milliseconds differences in infancy is highly predictive of the child’s language skills at 3, 4 and 5 years.”

The experiment has the potential to provide lasting benefits. The EEG (electroencephalogram) scans showed the babies’ brains processed sound patterns with increasing efficiency at 7 months of age after six weekly training sessions. The research team will follow these infants through 18 months of age to see whether they retain and build upon these abilities with no further training. That outcome would suggest to Benasich that once the child’s earliest acoustic maps are formed in the most optimal way, the benefits will endure.

Benasich says this training has the potential to advance the development of typically developing babies as well as children at higher risk for developmental language difficulties. For parents who think this might turn their babies into geniuses, the answer is – not necessarily. Benasich compares the process of enhancing acoustic maps to some people’s wishes to be taller. “There’s a genetic range to how tall you become – perhaps you have the capacity to be 5’6” to 5’9”,” she explains. “If you get the right amounts and types of food, the right environment, the right exercise, you might get to 5’9” but you wouldn’t be 6 feet. The same principle applies here.”

Benasich says it’s very likely that one day parents at home will be able to use an interactive toy-like device – now under development – to mirror what she accomplished in the baby lab and maximize their babies’ potential. For the 8 to 15 percent of infants at highest risk for poor acoustic processing and subsequent delayed language, this baby-friendly behavioral intervention could have far-reaching implications and may offer the promise of improving or perhaps preventing language difficulties.

Filed under language language development EEG cognitive function sound processing neuroscience science

102 notes

New EEG electrode set for fast and easy measurement of brain function abnormalities

A new, easy-to-use EEG electrode set for measuring the electrical activity of the brain was developed in a recent study completed at the University of Eastern Finland. The solutions developed in the PhD study of Pasi Lepola, MSc, make it possible to attach the electrode set to the patient quickly, yielding reliable results without any special preparation of the skin. As EEG measurements in emergency care are often performed in challenging conditions, the design of the electrode set pays particular attention to reducing electromagnetic interference from external sources.

EEG measurements can detect abnormalities in the electrical activity of the brain that require immediate treatment. These abnormalities are often indications of severe brain damage, cerebral infarction, cerebral haemorrhage, poisoning, or unspecified disturbed levels of consciousness. One of the most serious brain function abnormalities is a prolonged epileptic seizure, status epilepticus, which is impossible to diagnose without an EEG measurement. In many cases, a rapidly performed EEG measurement and prompt start of proper treatment significantly reduce the need for aftercare and rehabilitation. This, in turn, drastically improves the cost-effectiveness of the treatment chain.

Although the benefits of EEG measurements are indisputable, they remain underused in acute and emergency care. A significant reason for this is that the electrode sets available on the market are difficult to attach to the patient, and their use requires special skills and constant training. The new type of electrode set is expected to make EEG measurements feasible at as early a stage as possible.

The EEG electrode set was produced using screen printing technology, in which silver ink was used to print the conductors and measurement electrodes on a flexible polyester film. The set consists of 16 hydrogel-coated electrodes which, unlike in the traditional method, are placed on the hair-free areas of the patient’s head, making the set easy to attach. The new electrode set significantly speeds up the measurement process because there is no need to scrape the patient’s skin or to use any separate gels. As the electrode set is flexible yet solid, the electrodes automatically fall into their correct positions. Furthermore, there is no need to move the patient’s head when putting on the electrode set, which is especially important in patients with a possible neck or skull injury. Because the disposable electrode set is fast and easy to use, it is particularly well suited to emergency care, ambulances and even field conditions. Thanks to the materials used, the electrode set does not interfere with any magnetic resonance or computed tomography imaging the patient may undergo.

The performance of the electrode set was tested by using various electrical tests, on several volunteers, and in real patient cases. The results were compared to those obtained by traditional EEG methods.

The PhD study also examined screen-printed solutions for protecting electrodes against electromagnetic interference. A silver or graphite shielding layer printed on the outer edge of the electrode set was found to significantly reduce external interference in the EEG signal. This shielding layer can be applied easily and cost-efficiently to all measurement electrodes produced with similar methods, and is beneficial when measuring weak signals in environments with external interference.

(Source: uef.fi)

Filed under EEG brain activity brain function brain damage neuroscience science

94 notes

Brainwave Test Could Improve Autism Diagnosis and Classification

A new study by researchers at Albert Einstein College of Medicine of Yeshiva University suggests that measuring how fast the brain responds to sights and sounds could help in objectively classifying people on the autism spectrum and may help diagnose the condition earlier. The paper was published today in the online edition of the Journal of Autism and Developmental Disabilities.

The U.S. Centers for Disease Control and Prevention estimates that 1 in 68 children has been identified with an autism spectrum disorder (ASD). The signs and symptoms of ASD vary significantly from person to person, ranging from mild social and communication difficulties to profound cognitive impairments.

“One of the challenges in autism is that we don’t know how to classify patients into subgroups or even what those subgroups might be,” said study leader Sophie Molholm, Ph.D., associate professor in the Dominick P. Purpura Department of Neuroscience and the Muriel and Harold Block Faculty Scholar in Mental Illness in the department of pediatrics at Einstein. “This has greatly limited our understanding of the disorder and how to treat it.”

Autism is diagnosed based on a patient’s behavioral characteristics and symptoms. “These assessments can be highly subjective and require a tremendous amount of clinical expertise,” said Dr. Molholm. “We clearly need a more objective way to diagnose and classify this disorder.”

An earlier study by Dr. Molholm and colleagues suggested that brainwave electroencephalogram (EEG) recordings could potentially reveal how severely ASD individuals are affected. That study found that children with ASD process sensory information—such as sound, touch and vision—less rapidly than typically developing children do.

The current study was intended to see whether sensory processing varies along the autism spectrum. Forty-three ASD children aged 6 to 17 were presented with either a simple auditory tone, a visual image (red circle), or a tone combined with an image, and instructed to press a button as soon as possible after hearing the tone, seeing the image or seeing and hearing the two stimuli together. Continuous EEG recordings were made via 70 scalp electrodes to determine how fast the children’s brains were processing the stimuli.

The speed with which the subjects processed auditory signals strongly correlated with the severity of their symptoms: the more time required for an ASD individual to process the auditory signals, the more severe that person’s autistic symptoms. “This finding is in line with studies showing that, in people with ASD, the microarchitecture in the brain’s auditory center differs from that of typically developing children,” Dr. Molholm said.

The study also found a significant though weaker correlation between the speed of processing combined audio-visual signals and ASD severity. No link was observed between visual processing and ASD severity.

“This is a first step toward developing a biomarker of autism severity—an objective way to assess someone’s place on the ASD spectrum,” said Dr. Molholm. “Using EEG recordings in this way might also prove useful for objectively evaluating the effectiveness of ASD therapies.”

In addition, EEG recordings might help diagnose ASD earlier. “Early diagnosis allows for earlier treatment—which we know increases the likelihood of a better outcome,” said Dr. Molholm. “But currently, fewer than 15 percent of children with ASD are diagnosed before age 4. We might be able to adapt this technology to allow for early ASD detection and therapy for a much larger percentage of children.”
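
The severity analysis in this study rests on correlating each child’s neural processing speed with their symptom scores. A toy version of such an analysis – a Pearson correlation with a permutation p-value, illustrative only and not the study’s actual statistics – could be:

```python
import numpy as np

def perm_corr(x, y, n_perm=2000, seed=0):
    """Pearson correlation between x and y with a permutation p-value."""
    rng = np.random.default_rng(seed)
    r = np.corrcoef(x, y)[0, 1]
    # null distribution: correlations obtained after shuffling one variable
    null = np.array([np.corrcoef(rng.permutation(x), y)[0, 1]
                     for _ in range(n_perm)])
    p = float((np.abs(null) >= abs(r)).mean())
    return r, p
```

Here x might hold per-child auditory processing latencies and y their symptom severity scores; a large r with a small p supports the kind of speed–severity link the study reports.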

Filed under autism ASD EEG brainwaves neuroscience science

66 notes

You Don’t Walk Alone

Breakthrough in detecting early onset of refractory epilepsy in children will lead to effective treatment using non-pharmacological therapies.

65 million people around the world today suffer from epilepsy, a condition of the brain that may trigger an uncontrollable seizure at any time, often for no known reason. A seizure is a disruption of the electrical communication between neurons, and someone is said to have epilepsy if they experience two or more unprovoked seizures separated by at least 24 hours.

Epilepsy is the most common chronic disease in pediatric neurology, with about 0.5-1% of children developing epilepsy during their lifetime. Of these, 30-40% go on to develop refractory epilepsy, a form that cannot be managed with antiepileptic drugs (AEDs). Regardless of etiology, children with refractory epilepsy are invariably exposed to a variety of physical, psychological and social morbidities. Patients whose seizures are difficult to control could benefit from non-pharmacological therapies, including surgery, deep brain stimulation and ketogenic diets. Early identification of patients whose seizures are refractory to AEDs would therefore allow them to receive alternative therapies at an appropriate time.

Although idiopathic etiology is a significant predictor of a lower risk of refractory epilepsy, a subset of patients with idiopathic epilepsy may still prove refractory to medical treatment.

Using a new electroencephalography (EEG) analytical method, a team of medical doctors and scientists in Taiwan has successfully developed a tool to detect certain EEG features often present in children with idiopathic epilepsy.

The team developed an efficient, automated and quantitative approach towards the early prediction of refractory idiopathic epilepsy based on EEG classification analysis. EEG analysis is widely employed to investigate brain disorders and to study brain electrical activity. In the study, a set of artifact-free EEG segments was acquired from the EEG recordings of patients belonging to two classes of epilepsy: well-controlled and refractory. To search for significantly discriminative EEG features and to reduce computational costs, a statistical approach involving global parametric features was adopted across EEG channels as well as over time. A gain ratio-based feature selection was then performed.
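Gain-ratio feature selection scores each candidate feature by its information gain about the class label, normalised by the feature's own split entropy (the C4.5-style gain ratio). A minimal sketch for discretised features — the feature names and patient values below are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature_vals, labels):
    """Information gain of a discrete feature about the labels,
    normalised by the feature's split entropy (gain ratio)."""
    base = entropy(labels)
    n = len(labels)
    cond = 0.0
    split_info = 0.0
    for v in set(feature_vals):
        subset = [lab for fv, lab in zip(feature_vals, labels) if fv == v]
        w = len(subset) / n
        cond += w * entropy(subset)          # class entropy left after the split
        split_info -= w * math.log2(w)       # entropy of the split itself
    return (base - cond) / split_info if split_info > 0 else 0.0

# Hypothetical binarised EEG features for four patients.
features = {
    "decorr_time_high": [1, 1, 0, 0],   # separates the two classes perfectly
    "delta_power_high": [1, 0, 1, 0],   # carries no class information
}
labels = ["well", "well", "refractory", "refractory"]
ranked = sorted(features, key=lambda f: gain_ratio(features[f], labels), reverse=True)
print(ranked)
```

Features with the highest gain ratio would then be retained for the classifier.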

The study found that two of the discriminative features – the decorrelation time (DecorrTime) and the relative delta-band power (RelPowDelta), each averaged across EEG channels and over time – were significantly higher in the well-controlled group than in the refractory group. This suggests that refractory patients have a higher risk of seizure attacks than well-controlled patients.

The main contributions of this study are as follows:

  1. the generalisation of 10 significant EEG features into a concept for the recognition and identification of potential refractory epilepsy in patients with idiopathic epilepsy, based on EEG classification analysis;
  2. the development of a diagnostic tool based conceptually on these 10 EEG features, using a support vector machine (SVM) classification model to discriminate between well-controlled idiopathic epilepsy and refractory idiopathic epilepsy, which will facilitate subsequent expert visual EEG interpretation.
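The classifier at the heart of the tool is a support vector machine. As a rough, self-contained stand-in for a library SVM, a Pegasos-style sub-gradient solver for a linear soft-margin SVM can be sketched as follows; the two-feature data, class encoding and hyperparameters are invented, and the paper's actual kernel and settings are not specified here:

```python
def train_linear_svm(X, y, lam=0.01, epochs=500):
    """Pegasos-style sub-gradient descent for a linear soft-margin SVM.
    X: list of feature vectors; y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)       # decaying step size
            decay = 1.0 - eta * lam     # shrinkage from the L2 regulariser
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1.0:
                # Hinge loss active: step toward satisfying this margin.
                w = [decay * wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
            else:
                w = [decay * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1

# Invented two-feature training set: -1 = well-controlled, +1 = refractory.
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [2.0, 3.0], [3.0, 2.0]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
print([predict(w, b, xi) for xi in X])
```

In practice one would use an established SVM implementation; the sketch only illustrates the margin-based decision rule the tool relies on.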

Further research with more diverse samples (both pediatric and adult participants) is encouraged to strengthen the tool’s reliability and generalisability. This study was supported in part by a grant from the Kaohsiung Medical University Hospital and grants from the Ministry of Science and Technology, Taiwan.

The paper can be found in the upcoming issue of the International Journal of Neural Systems (IJNS).

Filed under epilepsy EEG epileptic seizures neuroscience science

209 notes


EEG Study Findings Reveal How Fear is Processed in the Brain

An estimated 8% of Americans will suffer from post traumatic stress disorder (PTSD) at some point during their lifetime. Brought on by an overwhelming or stressful event or events, PTSD is the result of altered chemistry and physiology of the brain. Understanding how threat is processed in a normal brain versus one altered by PTSD is essential to developing effective interventions. 

New research from the Center for BrainHealth at The University of Texas at Dallas published online today in Brain and Cognition illustrates how fear arises in the brain when individuals are exposed to threatening images. This novel study is the first to separate emotion from threat by controlling for the dimension of arousal, the emotional reaction provoked, whether positive or negative, in response to stimuli. Building on previous animal and human research, the study identifies an electrophysiological marker for threat in the brain.

“We are trying to find where thought exists in the mind,” explained John Hart, Jr., M.D., Medical Science Director at the Center for BrainHealth. “We know that groups of neurons firing on and off create a frequency and pattern that tell other areas of the brain what to do. By identifying these rhythms, we can correlate them with a cognitive unit such as fear.”

Utilizing electroencephalography (EEG), Dr. Hart’s research team identified theta and beta wave activity that signifies the brain’s reaction to visually threatening images. 

“We have known for a long time that the brain prioritizes threatening information over other cognitive processes,” explained Bambi DeLaRosa, study lead author. “These findings show us how this happens. Theta wave activity starts in the back of the brain, in its fear center – the amygdala – and then interacts with the brain’s memory center – the hippocampus – before traveling to the frontal lobe where thought processing areas are engaged. At the same time, beta wave activity indicates that the motor cortex is revving up in case the feet need to move to avoid the perceived threat.” 

For the study, 26 adults (19 female, 7 male), ages 19-30, were shown 224 randomized images that were either unidentifiably scrambled or real pictures. Real pictures were separated into two categories: threatening (weapons, combat, nature or animals) and non-threatening (pleasant situations, food, nature or animals). 

While wearing an EEG cap, participants were asked to push a button with their right index finger for real items and another button with their right middle finger for nonreal/scrambled items. Shorter response times were recorded for scrambled images than for real images. There was no difference in reaction time for threatening versus non-threatening images. 

EEG results revealed that threatening images evoked an early increase in theta activity in the occipital lobe (the area in the brain where visual information is processed), followed by a later increase in theta power in the frontal lobe (where higher mental functions such as thinking, decision-making, and planning occur). A left lateralized desynchronization of the beta band, the wave pattern associated with motor behavior (like the impulse to run), also consistently appeared in the threatening condition.
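Band-limited power of this kind is typically estimated from the EEG spectrum. A minimal sketch using a simple periodogram (the study's actual time-frequency method is not specified here, and the sampling rate and test signals are assumptions):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Periodogram estimate of signal power in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

fs = 250.0                        # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of signal
theta_like = np.sin(2 * np.pi * 6.0 * t)    # 6 Hz: inside the 4-8 Hz theta band
beta_like = np.sin(2 * np.pi * 20.0 * t)    # 20 Hz: inside the 13-30 Hz beta band
print(band_power(theta_like, fs, 4, 8), band_power(beta_like, fs, 4, 8))
```

An "increase in theta power" in a given region corresponds to this 4-8 Hz band measure rising, at that region's electrodes, in the threatening condition.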

This study will serve as a foundation for future work that will explore normal versus abnormal fear associated with an object in other atypical populations including individuals with PTSD.

Filed under fear PTSD emotions EEG brainwaves amygdala motor cortex hippocampus neuroscience science

312 notes


Control your environment through brain commands

Many patients with amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s Disease) and other neurodegenerative conditions live every day with a frustrating inability to do small, everyday tasks, such as turning on the lights, changing the volume on the TV, or even communicating with their friends and loved ones.

Today, a first-ever proof of concept demonstrates how wearable technology and consumer products can be brought together with digital innovations to let a person with no mobility control their environment using brain commands, via a custom-built tablet application and wearable display interface.

This proof of concept demonstrates the potential to improve the quality of life for ALS patients – or any person with limited muscle and speech function – by giving them the ability to interact, communicate and issue commands without moving their body or using their voice.

Filed under ALS Lou Gehrig’s disease brainwaves EEG Emotiv Insight Brainware technology neuroscience science

286 notes


Vajrayana Meditation Techniques Associated with Tibetan Buddhism Can Enhance Brain Performance

Contrary to popular belief, not all meditation techniques produce similar effects on body and mind. Indeed, a recent study by researchers from the National University of Singapore (NUS) has demonstrated for the first time that different types of Buddhist meditation – namely the Vajrayana and Theravada styles of meditation – elicit qualitatively different influences on human physiology and behaviour, producing arousal and relaxation responses respectively.

In particular, the NUS research team found that Vajrayana meditation, which is associated with Tibetan Buddhism, can lead to enhancements in cognitive performance.

The study by Associate Professor Maria Kozhevnikov and Dr Ido Amihai from the Department of Psychology at the NUS Faculty of Arts and Social Sciences was first published in the journal PLOS ONE in July 2014.

Vajrayana and Theravada meditation produce different physiological responses

Previous studies had defined meditation as a relaxation response and had attempted to categorise meditation as either involving focused or distributed attentional systems. Neither of these hypotheses received strong empirical support, and most of the studies focused on Theravada meditative practices.

Assoc Prof Kozhevnikov and Dr Amihai examined four different types of meditative practice: two Vajrayana (Tibetan Buddhism) practices (visualisation of self-generation-as-Deity and Rig-pa) and two Theravada practices (Shamatha and Vipassana). They collected electrocardiographic (EKG) and electroencephalographic (EEG) responses and also measured behavioural performance on cognitive tasks, using a pool of experienced Theravada practitioners from Thailand and Nepal, as well as Vajrayana practitioners from Nepal.

They observed that physiological responses during the Theravada meditation differ significantly from those during the Vajrayana meditation. Theravada meditation produced enhanced parasympathetic activation (relaxation). In contrast, Vajrayana meditation did not show any evidence of parasympathetic activity but showed an activation of the sympathetic system (arousal).
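Parasympathetic activation is commonly indexed from the EKG via heart-rate variability; one standard vagally-mediated measure is RMSSD, the root mean square of successive differences between RR intervals. A minimal sketch with hypothetical interval values (the study's exact HRV measures are not detailed here):

```python
def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms).
    Higher values indicate greater beat-to-beat variability, a marker
    commonly associated with parasympathetic (relaxation) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical beat-to-beat intervals: the "relaxed" recording varies more
# from beat to beat (higher RMSSD) than the "aroused" one.
relaxed = [820, 860, 810, 870, 815, 865]
aroused = [700, 705, 698, 703, 701, 704]
print(rmssd(relaxed), rmssd(aroused))
```

On this kind of index, the Theravada (relaxation) pattern would show higher values and the Vajrayana (arousal) pattern lower ones.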

The researchers also observed an immediate, dramatic increase in performance on cognitive tasks following only the Vajrayana styles of meditation. They noted that such a dramatic boost in attentional capacity is impossible during a state of relaxation. Their results show that Vajrayana and Theravada styles of meditation are based on different neurophysiological mechanisms, which give rise to either an arousal or a relaxation response.

Applications of the research findings

The findings from the study showed that Vajrayana meditation can lead to dramatic enhancement in cognitive performance, suggesting that Vajrayana meditation could be especially useful in situations where it is important to perform at one’s best, such as during competition or states of urgency. On the other hand, Theravada styles of meditation are an excellent way to decrease stress, release tension, and promote deep relaxation.

Further research

After seeing that even a single session of Vajrayana meditation can lead to radical enhancements in brain performance, Assoc Prof Kozhevnikov and Dr Amihai will be investigating whether permanent changes could occur after long-term practice. The researchers are also looking at how non-practitioners can benefit from such meditative practices.

Assoc Prof Kozhevnikov said, “Vajrayana meditation typically requires years of practice, so we are also looking into whether it is also possible to acquire the beneficial effects of brain performance by practicing certain essential elements of the meditation. This would provide an effective and practical method for non-practitioners to quickly increase brain performance in times of need.”

Filed under mindfulness meditation vajrayana meditation EEG relaxation arousal cognition neuroscience science

541 notes


New prosthetic arm controlled by neural messages

This design hopes to identify the memory of movement in the amputee’s brain to translate to an order allowing manipulation of the device.

Controlling a prosthetic arm by just imagining a motion may be possible through the work of Mexican scientists at the Centre for Research and Advanced Studies (CINVESTAV), who are developing an arm replacement that identifies movement patterns from brain signals.

“First, it is necessary to know whether there is a memory pattern in the amputee’s brain recording how the arm used to move, so that it can be translated into instructions for the prosthesis,” says Roberto Muñoz Guerrero, researcher at the Department of Electrical Engineering and project leader at Cinvestav.

He explains that the electric signal won’t come from the muscles that form the stump, but from the movement patterns of the brain. “If this phase is successful, the patient would be able to move the prosthesis by imagining different movements.”

However, Muñoz Guerrero acknowledges this is not an easy task, because the brain registers a wide range of activities occurring throughout the human body, and the movement pattern must be extracted from among all of them. “Therefore, the first step is to recall the patterns in the EEG and define there the memory that can be electrically recorded. Then we need to evaluate how sensitive the signal is to other external disturbances, such as light or blinking.”
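Evaluating the signal's sensitivity to blinks and similar disturbances usually starts with simple epoch rejection: EEG segments whose amplitude exceeds a physiological ceiling are discarded before any pattern analysis. A minimal sketch (the threshold and sample values are illustrative, not from the project):

```python
def reject_artifacts(epochs, threshold_uv=100.0):
    """Keep only EEG epochs whose peak absolute amplitude stays below the
    threshold; large deflections typically come from blinks or muscle
    activity rather than cortical signals."""
    return [ep for ep in epochs if max(abs(s) for s in ep) <= threshold_uv]

# Hypothetical epochs in microvolts: the second contains a blink-sized spike.
epochs = [
    [12.0, -20.5, 15.3, 8.1],
    [30.2, 250.0, -12.4, 5.9],   # 250 µV deflection: rejected
]
clean = reject_artifacts(epochs)
print(len(clean))
```

Only the surviving epochs would then be searched for movement-related patterns.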

It should be noted that the prosthesis could only be used by individuals who once had a complete arm that was amputated due to an accident or illness. Such patients were once able to move the arm naturally, and their memory stores the processes that the prosthesis would draw upon.

According to the researcher, the prosthesis must be provided with a mechanical and electronic system, the elements necessary to activate it and a section that would interpret the brain signals. “Regarding the material with which it must be built, it has not yet been fully defined because it must weigh between two and three kilograms, which is similar to the missing arm’s weight.”

The prosthesis represents a new topic in bioelectronics called BCI (brain-computer interface): a direct communication pathway between the brain and an external device, used to assist or restore sensory and motor functions. “An additional benefit is the ability to create motion paths for the prosthesis, which is not possible with commercial products,” says Muñoz Guerrero.

Filed under BCI prosthetics prosthetic arm motor movement EEG neuroscience science
