Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

65 notes

Video could transform how schools serve teens with autism

Video-based teaching helps teens with autism learn important social skills, and the method eventually could be used widely by schools with limited resources, a Michigan State University researcher says.

The diagnosis rate for Autism Spectrum Disorder among 14- to 17-year-olds has more than doubled in the past five years, according to the Centers for Disease Control and Prevention. Yet previous research has found very few strategies for helping adolescents with autism develop skills needed to be successful, especially in group settings.

“Teaching social skills to adolescents with ASD has to be effective and practical,” said Joshua Plavnick, assistant professor of special education at MSU. “Using video-based group instruction regularly could promote far-reaching gains for students with ASD across many social behaviors.”

Plavnick developed group video teaching techniques with colleagues while a postdoctoral fellow at the University of North Carolina’s Frank Porter Graham Child Development Institute. Their findings are published in the research journal Exceptional Children.

Previous studies have shown many people with autism are more likely to pay attention when an innovative technology delivers information. Before Plavnick’s work, however, there were no investigations of video modeling as an option for teaching social skills to more than one adolescent with ASD at the same time.

The team recruited 13- to 17-year-old students with ASD and used laptops or iPads to offer group video instruction on social behaviors, such as inviting a peer to join an activity. One facilitator showed four students video footage of people helping one another clean up a mess, for example, and then gave them opportunities to practice the same skills in the classroom.

According to the researchers, the students demonstrated a rapid increase in the level of complex social behaviors each time video-based group instruction was used. Students sustained those social behaviors at high levels, even when the videos were used less often.

The students’ parents also completed anonymous surveys and indicated high levels of satisfaction. One reported their child started asking family members to play games together, a skill the teen had never before displayed at home.

Most schools do not have the staff resources to provide one-on-one help for students with autism. Video-based instruction, by contrast, can be used with a small group at once and has been shown to be effective.

“Video-based group instruction is important, given the often limited resources in schools that also face increasing numbers of students being diagnosed with ASD,” said Plavnick, who also has begun implementing the strategy as part of a daily high school-based program.

(Source: msutoday.msu.edu)

Filed under ASD autism learning technology neuroscience science

181 notes

"Smart glasses" can improve gait of Parkinson’s patients

A new app for intelligent glasses, such as Google Glass, will soon make it possible to improve the gait of patients suffering from Parkinson’s disease and to decrease their risk of falling. Researchers at the University of Twente’s MIRA Institute have received a grant from the NutsOhra fund for the development of the app.

The gait of Parkinson’s patients is often disturbed: sometimes this presents as a shuffling movement with the patient taking small steps, or it may result in the patient constantly looking for additional support. Gait disturbance also increases the chance of a fall, despite the progress made in terms of medication. Researchers have established that the gait of patients improves when they regularly see or hear a pattern. Examples might include stripes on the floor, or the regular ticking of a metronome.

The researchers, working under the leadership of Prof. Richard van Wezel, professor of Neurophysiology at the UT who is also attached to the Donders Institute in Nijmegen, are now exploring the possibility of using intelligent glasses, such as Google Glass, which are coming onto the consumer market.

Intelligent glasses would be able to provide patients with the regular visual or audible patterns required. These patterns may take the form of moving stripes or shapes that the patient sees through the glasses, flashing shapes, or music with varying tempos. The latest intelligent glasses already have built-in cameras and accelerometers, which will make it possible to determine which approach works best for each individual patient.
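
The sketch below illustrates, very roughly, how a glasses app might use the built-in accelerometer to estimate a patient's step cadence and pace a visual or auditory cue accordingly. All function names, thresholds and the cadence-boost factor are hypothetical assumptions for illustration, not details of the University of Twente app.

```python
# Hypothetical sketch: estimating step cadence from a smart-glasses
# accelerometer and pacing a rhythmic cue slightly above it.
# Names, thresholds and the boost factor are illustrative assumptions.
import numpy as np

def estimate_cadence(accel_z, fs=50.0):
    """Estimate steps per minute from vertical acceleration samples (m/s^2)."""
    signal = accel_z - np.mean(accel_z)              # remove the gravity/DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 3.0)             # plausible walking range, 30-180 steps/min
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0

def cue_interval(cadence_spm, boost=1.1):
    """Seconds between cues (a flash or metronome tick), paced slightly faster than the current gait."""
    return 60.0 / (cadence_spm * boost)

# Example with synthetic data: stepping at 1.8 Hz (about 108 steps/min).
fs = 50.0
t = np.arange(0, 10, 1.0 / fs)
accel_z = 9.81 + 1.5 * np.sin(2 * np.pi * 1.8 * t)
cadence = estimate_cadence(accel_z, fs)
print(f"Estimated cadence: {cadence:.0f} steps/min, cue every {cue_interval(cadence):.2f} s")
```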

The MIRA Institute for Biomedical Technology and Technical Medicine is working on the project together with the Donders Institute for Brain, Cognition and Behaviour (Nijmegen), the Medisch Spectrum Twente hospital and the VUmc University Medical Centre in Amsterdam.

"Fonds NutsOhra", a fund that provides financial support for healthcare projects, has granted the sum of € 94,000 to the project.

Filed under neurodegenerative diseases google glass smart glasses technology neuroscience science

504 notes

A Blueprint for Restoring Touch with a Prosthetic Hand

New research at the University of Chicago is laying the groundwork for touch-sensitive prosthetic limbs that one day could convey real-time sensory information to amputees via a direct interface with the brain.

The research, published early online in the Proceedings of the National Academy of Sciences, marks an important step toward new technology that, if implemented successfully, would increase the dexterity and clinical viability of robotic prosthetic limbs.

“To restore sensory motor function of an arm, you not only have to replace the motor signals that the brain sends to the arm to move it around, but you also have to replace the sensory signals that the arm sends back to the brain,” said the study’s senior author, Sliman Bensmaia, PhD, assistant professor in the Department of Organismal Biology and Anatomy at the University of Chicago. “We think the key is to invoke what we know about how the brain of the intact organism processes sensory information, and then try to reproduce these patterns of neural activity through stimulation of the brain.”

Bensmaia’s research is part of Revolutionizing Prosthetics, a multi-year Defense Advanced Research Projects Agency (DARPA) project that seeks to create a modular, artificial upper limb that will restore natural motor control and sensation in amputees. Managed by the Johns Hopkins University Applied Physics Laboratory, the project has brought together an interdisciplinary team of experts from academic institutions, government agencies and private companies.

Bensmaia and his colleagues at the University of Chicago are working specifically on the sensory aspects of these limbs. In a series of experiments with monkeys, whose sensory systems closely resemble those of humans, they identified patterns of neural activity that occur during natural object manipulation and then successfully induced these patterns through artificial means.

The first set of experiments focused on contact location, or sensing where the skin has been touched. The animals were trained to identify several patterns of physical contact with their fingers. Researchers then connected electrodes to areas of the brain corresponding to each finger and replaced physical touches with electrical stimuli delivered to the appropriate areas of the brain. The result: The animals responded the same way to artificial stimulation as they did to physical contact.

Next the researchers focused on the sensation of pressure. In this case, they developed an algorithm to generate the appropriate amount of electrical current to elicit a sensation of pressure. Again, the animals’ response was the same whether the stimuli were felt through their fingers or through artificial means.
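
To make the idea concrete, here is a minimal sketch of the kind of transfer function such a system needs: a mapping from contact force at a prosthetic fingertip to a stimulation-current amplitude. The linear form, the constants and the function name are assumptions for illustration only, not the algorithm Bensmaia's team developed.

```python
# Illustrative sketch only: mapping fingertip contact force to a
# stimulation-current amplitude. The linear transfer function and all
# constants here are assumptions, not the published algorithm.

def pressure_to_current(force_newtons, gain_ua_per_newton=120.0,
                        threshold_ua=20.0, max_ua=100.0):
    """Return a stimulation amplitude in microamps for a given contact force."""
    if force_newtons <= 0.0:
        return 0.0                      # no contact, no stimulation
    amplitude = threshold_ua + gain_ua_per_newton * force_newtons
    return min(amplitude, max_ua)       # clamp to a safe ceiling

# A prosthetic fingertip sensor streaming force samples (in newtons):
for force in [0.0, 0.1, 0.3, 0.8]:
    print(force, "N ->", pressure_to_current(force), "uA")
```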

Finally, Bensmaia and his colleagues studied the sensation of contact events. When the hand first touches or releases an object, it produces a burst of activity in the brain. Again, the researchers established that these bursts of brain activity can be mimicked through electrical stimulation.

The result of these experiments is a set of instructions that can be incorporated into a robotic prosthetic arm to provide sensory feedback to the brain through a neural interface. Bensmaia believes such feedback will bring these devices closer to being tested in human clinical trials.

“The algorithms to decipher motor signals have come quite a long way, where you can now control arms with seven degrees of freedom. It’s very sophisticated. But I think there’s a strong argument to be made that they will not be clinically viable until the sensory feedback is incorporated,” Bensmaia said. “When it is, the functionality of these limbs will increase substantially.”

Filed under BCI neural activity robotics prosthetics touch technology neuroscience science

926 notes

Hawking: ‘in the future brains could be separated from the body’

Professor Stephen Hawking has predicted that it could be possible to preserve a mind as powerful as his on a computer - but not with technology existing today.

The cosmologist, 71, said the brain operates in a similar way to a computer programme, meaning it could in theory be kept running without a body to power it.

Prof Hawking was speaking after the premiere of a new biopic about his life, which he narrates himself, at the Cambridge Film Festival.

Asked about whether a person’s consciousness can live on after they die, he said: “I think the brain is like a programme in the mind, which is like a computer, so it’s theoretically possible to copy the brain onto a computer and so provide a form of life after death.

"However, this is way beyond our present capabilities. I think the conventional afterlife is a fairy tale for people afraid of the dark."

The film tells the story of Prof Hawking’s life, from his childhood in Oxford to his current home in Cambridge where he lives with the help of a group of carers.

It addresses how he moved from being diagnosed with motor neurone disease at the age of 21, and being told he had three years left to live, to becoming the world’s most famous living scientist.

Addressing his condition, which has afflicted him for half a century, he says in the film: “Keeping an active mind has been vital to my survival, as has been maintaining a sense of humour.”

Speaking before the premiere on Thursday, Kip Thorne, the American physicist and a close friend of Prof Hawking, said: “I think his handicap allowed him to do science he may not otherwise have done.

"He is the most stubborn man I know and that stubbornness and that drive is in part motivated by his disability."

Filed under Stephen Hawking brain consciousness technology science

117 notes

Scientists Develop New Process to Create Artificial Cell Membranes

The membranes surrounding and inside cells are involved in every aspect of biological function. They separate the cell’s various metabolic functions, compartmentalize the genetic material, and drive evolution by separating a cell’s biochemical activities. They are also the largest and most complex structures that cells synthesize.

Understanding the myriad biochemical roles of membranes requires the ability to prepare synthetic versions of these complex multi-layered structures, which has been a long-standing challenge.

In a study published this week by Nature Chemistry, scientists at The Scripps Research Institute (TSRI) report a highly programmable and controlled platform for preparing and experimentally probing synthetic cellular structures.

“Layer-by-layer membrane assembly allows us to create synthetic cells with membranes of arbitrary complexity at the molecular and supramolecular scale,” said TSRI Assistant Professor Brian Paegel, who authored the study with Research Associate Sandro Matosevic. “We can now control the molecular composition of the inner and outer layers of a bilayer membrane, and even assemble multi-layered membranes that resemble the envelope of the cell nucleus.”

Starting with a technique commonly used to deposit molecules on a solid surface, Langmuir-Blodgett deposition, the scientists repurposed the approach to work on liquid objects.

The scientists engineered a microfluidic device containing an array of microscopic cups, each trapping a single droplet of water bathed in oil and lipids, the molecules that make up cellular membranes. The trapped droplets are then ready to serve as a foundation for building up a series of lipid layers like coats of paint.

The lipid-coated water droplets are first bathed in water. As the water/oil interface encounters the trapped droplets, a second lipid layer coats the droplets and transforms them into what are known as unilamellar, or single-layer, vesicles. Bathing the vesicles in oil/lipid deposits a third lipid layer, and a final layer of lipids deposited on the trapped droplets yields double-bilayer vesicles.

“The computer-controlled microfluidic circuits we have constructed will allow us to assemble synthetic cells not only from biologically derived lipids, but from any amphiphile and to measure important chemical and physical parameters, such as permeability and stability,” said Paegel.

(Source: scripps.edu)

Filed under cell membrane synthetic cells technology neuroscience science

195 notes

Covert operations: Your brain digitally remastered for clarity of thought

Neurofeedback can enhance the signal-to-noise ratio in thought, enabling a sharper focus on tasks—and a better understanding of brain-computer interfaces.

The sweep of a needle across the grooves of a worn vinyl record carries distinct sounds: hisses, scratches, even the echo of skips. For many years, though, those yearning to hear Frank Sinatra sing “Fly Me to the Moon” have been able to listen to his light baritone with technical clarity, courtesy of the increased signal-to-noise ratio of digital remasterings.

Now, with advances in neurofeedback techniques, the signal-to-noise ratio of the brain activity underlying our thoughts can be remastered as well, according to the recent discovery of a research team led by Stephen LaConte, an assistant professor at the Virginia Tech Carilion Research Institute.

LaConte and his colleagues specialize in real-time functional magnetic resonance imaging, a relatively new technology that can convert thought into action by transferring noninvasive measurements of human brain activity into control signals that drive physical devices and computer displays in real time. Crucially, for the ultimate goal of treating disorders of the brain, this rudimentary form of mind reading enables neurofeedback.

“Our brains control overt actions that allow us to interact directly with our environments, whether by swinging an arm or singing an aria,” LaConte said. “Covert mental activities, on the other hand—such as visual imagery, inner language, or recollections of the past—can’t be observed by others and don’t necessarily translate into action in the outside world.”

But, LaConte added, brain–computer interfaces now enable us to eavesdrop on previously undetectable mental activities.

In the recent study, the scientists used whole-brain, classifier-based real-time functional magnetic resonance imaging to understand the neural underpinnings of brain–computer interface control. The research team asked two dozen subjects to control a visual interface by silently counting numbers at fast and slow rates. For half the tasks, the subjects were told to use their thoughts to control the movement of the needle on the device they were observing; for the other tasks, they simply watched the needle.

The scientists discovered a feedback effect that LaConte said he had long suspected existed but had found elusive: the subjects who were in control of the needle achieved a better whole-brain signal-to-noise ratio than those who simply watched the needle move. “When the subjects were performing the counting task without feedback, they did a pretty good job,” LaConte said. “But when they were doing it with feedback, we saw increases in the signal-to-noise ratio of the entire brain. This improved clarity could mean that the signal was sharpening, the noise was dropping, or both. I suspect the brain was becoming less noisy, allowing the subject to concentrate on the task at hand.”

The scientists also found that the act of controlling the brain–computer interface led to increased classification accuracy, which corresponded with improvements in the whole-brain signal-to-noise ratio.
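
As a rough illustration of the two quantities being compared here, the sketch below computes a simple whole-brain signal-to-noise ratio (between-condition variance over residual variance) and a nearest-class-mean classification accuracy on synthetic "fast" versus "slow" counting blocks. It is a generic, assumed formulation for illustration, not LaConte's analysis pipeline.

```python
# Minimal sketch (assumptions throughout): one way to quantify a whole-brain
# signal-to-noise ratio and a classification accuracy for "fast" vs. "slow"
# counting blocks. Generic illustration, not the published pipeline.
import numpy as np

rng = np.random.default_rng(0)

def whole_brain_snr(timeseries, labels):
    """Between-condition variance divided by residual variance, averaged over voxels."""
    condition_means = np.array([timeseries[labels == c].mean(axis=0)
                                for c in np.unique(labels)])
    signal_var = condition_means.var(axis=0)            # task-related variance per voxel
    residual = timeseries - condition_means[labels]     # remove each volume's condition mean
    noise_var = residual.var(axis=0) + 1e-12
    return float(np.mean(signal_var / noise_var))

def nearest_mean_accuracy(train_x, train_y, test_x, test_y):
    """Classify each test volume by whichever class-mean pattern it is closer to."""
    centroids = {c: train_x[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c])) for x in test_x]
    return float(np.mean(np.array(preds) == test_y))

# Synthetic data: 200 volumes x 500 voxels; label 0 = slow counting, 1 = fast counting.
labels = rng.integers(0, 2, 200)
data = rng.normal(0.0, 1.0, (200, 500)) + np.outer(labels, rng.normal(0.0, 0.5, 500))
print("whole-brain SNR:", round(whole_brain_snr(data, labels), 3))
print("accuracy:", nearest_mean_accuracy(data[:150], labels[:150], data[150:], labels[150:]))
```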

This enhanced signal-to-noise ratio, LaConte added, carries implications for brain rehabilitation. “When people undergoing real-time brain scans get feedback on their own brain activity patterns, they can devise ways to exert greater control of their mental processes,” LaConte said. “This, in turn, gives them the opportunity to aid in their own healing. Ultimately, we want to use this effect to find better ways to treat brain injuries and psychiatric and neurological disorders.”

“Dr. LaConte’s discovery represents a milestone in the development of noninvasive brain imaging approaches with potential for neurorehabilitation,” said Michael Friedlander, executive director of the Virginia Tech Carilion Research Institute and a neuroscientist who specializes in brain plasticity. “This research carries implications for people whose brains have been damaged, such as through traumatic injury or stroke, in ways that affect the motor system—how they walk, move an arm, or speak, for example. Dr. LaConte’s innovations with real-time functional brain imaging are helping to set the stage for the future, for capturing covert brain activity and creating better computer interfaces that can help people retrain their own brains.”

Filed under neuroimaging brain mapping brain activity brain-computer interface technology neuroscience science

379 notes

Smithsonian experts find e-readers can make reading easier for those with dyslexia

As e-readers grow in popularity as convenient alternatives to traditional books, researchers at the Smithsonian have found that convenience may not be their only benefit. The team discovered that when e-readers are set up to display only a few words per line, some people with dyslexia can read more easily, quickly and with greater comprehension. Their findings are published in the Sept. 18 issue of the journal PLOS ONE.

An element in many cases of dyslexia is called a visual attention deficit. It is marked by an inability to concentrate on letters within words or words within lines of text. Another element is known as visual crowding—the failure to recognize letters when they are cluttered within the word. Using short lines on an e-reader can alleviate these issues and promote reading by reducing visual distractions within the text.

"At least a third of those with dyslexia we tested have these issues with visual attention and are helped by reading on the e-reader," said Matthew H. Schneps, director of the Laboratory for Visual Learning at the Smithsonian Astrophysical Observatory and lead author of the research. "For those who don’t have these issues, the study showed that the traditional ways of displaying text are better."

An earlier study by Schneps tracked eye movements of dyslexic students while they read, and it showed the use of short lines facilitated reading by improving the efficiency of the eye movements. This second study examined the role the small hand-held reader had on comprehension, and found that in many cases the device not only improved speed and efficiency, but improved abilities for the dyslexic reader to grasp the meaning of the text.

The team tested the reading comprehension and speed of 103 students with dyslexia who attend Landmark High School in Boston. Reading on paper was compared with reading on small hand-held e-reader devices, configured to lines of text that were two-to-three words long. The use of an e-reader significantly improved speed and comprehension in many of the students. Those students with a pronounced visual attention deficit benefited most from reading text on a handheld device versus on paper, while the reverse was true for those who did not exhibit these issues. The small screen on a handheld device displaying few words (versus a full sheet of paper) is believed to narrow and concentrate the reader’s focus, which controls visual distraction.
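
The display configuration the study describes can be mimicked with a few lines of code. The sketch below reflows a passage so that only a few words appear on each line; the function name and the default of three words per line are assumptions for illustration, not the exact settings used on the study's devices.

```python
# Illustrative sketch: reflowing text into short lines of a few words each,
# mimicking the short-line e-reader configuration described above.
# The word count per line is an assumed, adjustable parameter.

def short_lines(text, words_per_line=3):
    """Break text into lines of at most `words_per_line` words."""
    words = text.split()
    return "\n".join(" ".join(words[i:i + words_per_line])
                     for i in range(0, len(words), words_per_line))

sample = ("Using short lines reduces visual crowding by limiting "
          "how many words compete for attention at once.")
print(short_lines(sample, words_per_line=3))
```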

"The high school students we tested at Landmark had the benefit of many years of exceptional remediation, but even so, if they have visual attention deficits they will eventually hit a plateau, and traditional approaches can no longer help," said Schneps. "Our research showed that the e-readers help these students reach beyond those limits."

These findings suggest that this reading method can be an effective intervention for struggling readers and that e-readers may be more than new technological gadgets: They also may be educational resources and solutions for those with dyslexia.

Filed under reading dyslexia e-readers visual attention deficit technology neuroscience science

63 notes

Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived To Have More Mind and a Better Personality

It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not quite reach, that of a human. An underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot’s mind, personality and eeriness. A repeated-measures experiment was conducted. Thirty participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, a silver face, or no face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot’s mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred and was rated as having the most mind and as being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, and moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot.

Filed under robots robotics perception technology neuroscience science

222 notes

Playing video games can boost brain power

Certain types of video games can help to train the brain to become more agile and improve strategic thinking, according to scientists from Queen Mary University of London and University College London (UCL).

The researchers recruited 72 volunteers and measured their ‘cognitive flexibility’, described as a person’s ability to adapt and switch between tasks and to think about multiple ideas at a given time to solve problems.

Two groups of volunteers were trained to play different versions of a real-time strategy game called StarCraft, a fast-paced game where players have to construct and organise armies to battle an enemy. A third group played a life simulation video game called The Sims, which does not require much memory or many tactics.

All the volunteers played the video games for 40 hours over six to eight weeks, and were subjected to a variety of psychological tests before and after. All the participants happened to be female as the study was unable to recruit a sufficient number of male volunteers who played video games for less than two hours a week.

The researchers discovered that those who played StarCraft were quicker and more accurate in performing cognitive flexibility tasks than those who played The Sims.

Dr Brian Glass from Queen Mary’s School of Biological and Chemical Sciences said: “Previous research has demonstrated that action video games, such as Halo, can speed up decision making but the current work finds that real-time strategy games can promote our ability to think on the fly and learn from past mistakes.

“Our paper shows that cognitive flexibility, a cornerstone of human intelligence, is not a static trait but can be trained and improved using fun learning tools like gaming.”

Professor Brad Love from UCL said: “Cognitive flexibility varies across people and at different ages. For example, a fictional character like Sherlock Holmes has the ability to simultaneously engage in multiple aspects of thought and mentally shift in response to changing goals and environmental conditions.

“Creative problem solving and ‘thinking outside the box’ require cognitive flexibility. Perhaps in contrast to the repetitive nature of work in past centuries, the modern knowledge economy places a premium on cognitive flexibility.”

Dr Glass added: “The volunteers who played the most complex version of the video game performed the best in the post-game psychological tests. We need to understand now what exactly about these games is leading to these changes, and whether these cognitive boosts are permanent or if they dwindle over time. Once we have that understanding, it could become possible to develop clinical interventions for symptoms related to attention deficit hyperactivity disorder or traumatic brain injuries, for example.”

(Source: qmul.ac.uk)

Filed under video games cognition technology neuroscience science

127 notes

Building Better Brain Implants: The Challenge of Longevity

On August 20, JoVE, the Journal of Visualized Experiments, will publish a technique from the Capadona Lab at Case Western Reserve University that addresses two challenges inherent in brain-implant technology: gauging the property changes that occur during implantation, and measuring them on a micro-scale. These new techniques open the door to solving a great challenge for bioengineers, that of crafting a device that can withstand the physiological conditions in the brain for the long term.

“We created an instrument to measure the mechanical properties of micro-scale biomedical implants after being explanted from living animals,” explained the lab’s principal investigator, Dr. Jeffrey R. Capadona. Because it preserves the property changes that occurred during implantation even after the device is removed, the technique offers a way to create and test new materials for brain-implant devices, and could lead to longer-lasting devices that are better suited to their highly tailored functions.

For implanted devices, withstanding the high temperatures, moisture and other in vivo conditions poses a challenge to longevity. The resulting changes in the stiffness and other properties of an implanted material can trigger a greater inflammatory response. “Often, the body’s reaction to those implants causes the device to prematurely fail,” says Dr. Capadona. “In some cases, the patient requires regular brain surgery to replace or revise the implants.”

New implantation materials may help find solutions to restore motor function in individuals who have suffered from spinal cord injuries, stroke or multiple sclerosis. “Microelectrodes embedded chronically in the brain could hold promise for using neural activity to restore motor function in individuals who have suffered from spinal cord injuries,” said Dr. Capadona.

Furthermore, the method developed by Capadona and his colleagues allows mechanical properties to be measured at the micro-scale. Previous methods typically require large or nano-sized samples of material, and the resulting data must then be scaled, which does not always work.

When asked why Dr. Capadona and his colleagues published their methods with JoVE, he responded: “We chose JoVE because of the novel format to show readers visually what we are doing. If a picture is worth [a] thousand words, a video is worth a million.”

Filed under brain implants neural implants neurology neuroscience technology science
