Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

68 notes

Children with auditory processing disorder may now have more treatment options

Several Kansas State University faculty members are helping children with auditory processing disorder receive better treatment.

Debra Burnett, assistant professor of family studies and human services and a licensed speech-language pathologist, started the Enhancing Auditory Responses to Speech Stimuli, or EARSS, program. The Kansas State University Speech and Hearing Center offers the program, which uses evidence-based practices to treat auditory processing disorder.

Other Kansas State University faculty members involved in the program include Melanie Hilgers, clinic director and instructor in family studies and human services, and Robert Garcia, audiologist and program director for communication sciences and disorders. Several graduate students also are involved.

Auditory processing disorder affects how the brain processes language. Children and adults with auditory processing disorder have normal hearing sensitivity and will pass a hearing test, but their brains do not appropriately process what they hear.

"A lot of therapy targets these skills," Burnett said. "It’s almost like re-laying the road in the brain that deals with auditory information. For whatever reason, it didn’t develop properly, so the therapy is about reworking these skills."

Burnett and collaborators started the program after attending a conference for the Kansas State Speech-Language-Hearing Association. The conference included a workshop on ways to incorporate speech-language pathologists into therapy for auditory processing disorder.

"In the past, it has kind of been in the domain of the audiologist to do all of the testing and all of the therapy," Burnett said. "Speech-language pathologists have been involved in some augmentative therapy, but not in the core therapy. That is all starting to change."

Last summer, Burnett and her colleagues started a Kansas State University therapy program that involves speech-language pathologists. Seven children took part during the summer, two during the fall semester, and one has continued into the spring semester. All of the children have been diagnosed with auditory processing disorder; they ranged in age from 8 to 14 and came from north-central Kansas.

Before children begin the program, Burnett performs a pretest to determine their needs and the best way to approach therapy with them. A graduate student clinician, supervised by a licensed speech-language pathologist, meets with the children one hour per week to participate in activities that improve their auditory processing skills. Some of the activities include:

  • Phonemic training to address the brain’s ability to process speech sounds.
  • Words in Noise training to address the brain’s ability to process speech with background noise.
  • Phonemic synthesis training to address the brain’s ability to process speech sounds across words.

At the end of the program, Burnett performs a posttest to identify changes. The researchers have seen positive results so far: All of the children who participated in the posttest showed improvements in the treated areas. In the areas that the researchers did not treat, the children showed no change but also did not get worse.

"Based on these results, our program is showing early signs of being effective," Burnett said.
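The evaluation design described above, improvement only in treated areas, with untreated areas serving as a built-in control, can be sketched as a toy calculation. All scores below are hypothetical and purely illustrative, not data from the study:

```python
# Toy sketch (hypothetical data): comparing pretest and posttest scores
# in a treated skill area vs. an untreated control area, mirroring the
# EARSS pre/post evaluation design.

def mean_change(pre, post):
    """Average posttest-minus-pretest change across children."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical percent-correct scores for a treated skill (e.g. words in noise)
treated_pre  = [55, 60, 48, 62]
treated_post = [72, 78, 65, 80]

# Hypothetical scores for an untreated control skill
control_pre  = [70, 66, 74, 68]
control_post = [71, 65, 74, 69]

print(mean_change(treated_pre, treated_post))   # 17.5 — large gain in the treated area
print(mean_change(control_pre, control_post))   # 0.25 — essentially no change untreated
```

A change concentrated in the treated areas, with flat scores elsewhere, is what distinguishes a treatment effect from general practice or maturation effects.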

Filed under auditory processing disorder EARSS program hearing language processing neuroscience science

121 notes

Fear, anger or pain. Why do babies cry?

Spanish researchers have studied adults’ accuracy in the recognition of the emotion causing babies to cry. Eye movement and the dynamic of the cry play a key role in recognition.

It is not easy to know why a newborn cries, especially for first-time parents. Although the main reasons are hunger, pain, anger and fear, adults cannot easily recognise which emotion is behind the tears.

"Crying is a baby’s principal means of communicating its negative emotions, and in the majority of cases the only way it has to express them," Mariano Chóliz, a researcher at the University of Valencia, explained to SINC.

Chóliz carried out the study along with experts from the University of Murcia and the National University of Distance Education (UNED); it describes differences in the crying patterns of a sample of 20 babies between 3 and 18 months old in response to three characteristic emotions: fear, anger and pain.

In addition, the team assessed how accurately adults recognise the emotion that causes babies to cry, analysing observers’ affective reactions to the crying.

According to the results published recently in the ‘Spanish Journal of Psychology’, the main differences manifest in eye activity and the dynamics of the cry.

"When babies cry because of anger or fear, they keep their eyes open but keep them closed when crying in pain," states the researcher.

As for the dynamics of the cry, both the gestures and the intensity of the cry increase gradually when the baby is angry. By contrast, the cry is at full intensity from the outset in the cases of pain and fear.

Adults do not reliably identify which emotion is causing the cry, especially in the cases of anger and fear.

Nonetheless, “although the observers cannot recognise the cause properly, when babies cry because they are in pain, this causes a more intense affective reaction than when they cry because of anger or fear,” Chóliz says.

For the experts, the fact that pain is the most easily recognisable emotion can have an adaptive explanation, since crying is a warning of a potentially serious threat to health or survival and thus requires the carer to respond urgently.

Anger, fear and pain

When a baby cries, facial muscle activity is characterised by marked tension in the forehead, eyebrows and lips, an open mouth and raised cheeks. The researchers observed distinct patterns for each of the three negative emotions.

As Chóliz notes, when angry, most babies keep their eyes half-closed, either gazing in no particular direction or staring fixedly and intently. The mouth is open or half-open, and the intensity of the cry increases progressively.

In the case of fear, the eyes remain open almost all the time. Furthermore, at times the infants have a penetrating look and move their head backwards. Their cry seems to be explosive after a gradual increase in tension.

Lastly, pain manifests as eyes that stay closed almost constantly; when they do open, it is only for a few moments, and the gaze is distant. There is also marked tension around the eyes, and the brow stays furrowed. The cry begins suddenly and at maximum intensity, immediately after the painful stimulus.
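The cue patterns described above can be summarised as a small rule table. The sketch below is purely illustrative: the feature names, string values, and matching rule are assumptions made for this example, not part of the study’s method.

```python
# Illustrative rule table for the cry cues described in the article.
# Feature labels and the matching logic are hypothetical simplifications.

CRY_CUES = {
    "anger": {"eyes": "half-closed", "onset": "gradual increase"},
    "fear":  {"eyes": "open",        "onset": "explosive after tension"},
    "pain":  {"eyes": "closed",      "onset": "maximum from the start"},
}

def guess_emotion(eyes, onset):
    """Return the emotion whose described cues best match the observation."""
    def score(cues):
        # One point per matching cue; ties resolve to the first entry.
        return (cues["eyes"] == eyes) + (cues["onset"] == onset)
    return max(CRY_CUES, key=lambda emotion: score(CRY_CUES[emotion]))

print(guess_emotion("closed", "maximum from the start"))  # pain
print(guess_emotion("open", "explosive after tension"))   # fear
```

The study’s point, of course, is that human observers do not apply these cues reliably, even though the patterns are distinct enough to tabulate.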

Filed under infants emotions emotional response cry communication eye activity psychology neuroscience science

52 notes

Finding “Mr. Right,” How Insects Sniff Out the Perfect Mate

Unlike humans, most insects rely on their sense of smell when looking for a mate. Scientists have found that sex pheromones play an important role in finding a suitable partner of the same species; yet, little is known about the evolution and genetic basis of these alluring smells.

A team of researchers from Arizona State University and Germany found that one wasp species has evolved a specific scent, or pheromone, which keeps it from mating with other species. In addition, they discovered that the genetic basis of the new scent is simple, which allows the males to change an existing scent into a new one. Over time, the females recognize and use this new scent to distinguish their own species from others.

Scientists from ASU, the University of Regensburg, the Zoological Research Museum Alexander Koenig Bonn, and the Technical University Darmstadt in Germany present their findings in an article published online Feb. 13 in the journal Nature.

Filed under mating evolution wasps pheromones smell genetics neuroscience science

259 notes

Microchip Restores Vision

A wirelessly controlled microchip has restored limited vision to patients in a small experimental trial, report researchers in the Proceedings of the Royal Society B.

The German medical technology company Retina Implant developed the artificial retina, which was implanted in one eye of each participant as part of a company-funded trial. The patients had all been blinded by retinitis pigmentosa or another inherited disease that causes the eye’s light-detecting rod and cone cells, called photoreceptors, to degenerate and die over time. In theory, the device could also benefit patients with degenerative eye diseases such as macular degeneration, says Katarina Štigl, a clinical scientist and ophthalmologist at the University of Tübingen, who led the study.

With the implant, eight of the nine patients in the trial could perceive light. Five were able to detect moving patterns on a screen as well as everyday objects such as cutlery, doorknobs, and telephones. Three were able to read letters. Seeing their own hands and the faces of their loved ones made the biggest impression on the patients, says Štigl. “The very personal things, such as if a mouth is smiling, or the shape of a nose, are the most exciting for them,” she says.

The implanted device consists of a three-millimeter-square chip with 1,500 pixels. Each pixel contains a photodiode, which picks up incoming light, and an electrode and an amplification circuit, which boosts the weak electrical activity given off by the diode. A thin cable that runs through the eye socket connects the implant to a small coil implanted under the skin behind the ear, which means most of the system is invisible. The coil under the skin is powered by an external battery pack that can be held behind the ear with magnets.
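The per-pixel signal path described above, photodiode senses light, an amplifier boosts the weak current, an electrode stimulates when the signal is strong enough, can be sketched in a few lines. The gain, sensitivity, and threshold numbers below are made up for illustration and are not the device’s actual parameters:

```python
# Minimal sketch of one pixel's signal path in a subretinal implant:
# photodiode -> amplifier -> electrode. All constants are hypothetical.

GAIN = 50.0        # amplifier gain (illustrative)
THRESHOLD = 5.0    # minimum amplified signal that triggers stimulation

def pixel_output(light_intensity):
    """Amplified photodiode signal for one pixel (arbitrary units)."""
    photodiode_current = 0.2 * light_intensity  # weak response to light
    return GAIN * photodiode_current

def chip_response(image):
    """Which pixels fire (stimulate the retina) for a given light pattern."""
    return [pixel_output(px) > THRESHOLD for px in image]

# A bright-dark-bright pattern across three of the chip's pixels
print(chip_response([1.0, 0.1, 0.8]))  # [True, False, True]
```

Scaled to the real chip’s 1,500 pixels, this is what lets the remaining retinal circuitry receive a spatial pattern that tracks the incoming image directly, rather than a camera-processed signal.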

The results follow an announcement earlier this week from California-based Second Sight that its Argus II system was approved for use in the United States. The two technologies take different approaches to restoring vision in patients with retinal degeneration. In Second Sight’s system, a camera mounted on eyeglasses picks up images that are converted into electrical signals by a small wearable computer. That data is then sent to a 60-electrode chip to stimulate neurons in the retina. The Retina Implant device instead attempts to directly replace the lost photoreceptors, allowing the remaining retinal circuitry to do the data processing.

Filed under vision retinal degeneration subretinal electronic implant electronic implants retinal diseases neuroscience science

228 notes

Bioengineers print ears that look and act like the real thing

Cornell bioengineers and physicians have created an artificial ear that looks and acts like a natural ear, giving new hope to thousands of children born with a congenital deformity called microtia.

In a study published online Feb. 20 in PLOS One, Cornell biomedical engineers and Weill Cornell Medical College physicians described how 3-D printing and injectable gels made of living cells can fashion ears that are practically identical to a human ear. Over a three-month period, these flexible ears grew cartilage to replace the collagen that was used to mold them.

"This is such a win-win for both medicine and basic science, demonstrating what we can achieve when we work together," said co-lead author Lawrence Bonassar, associate professor of biomedical engineering.

The novel ear may be the solution reconstructive surgeons have long wished for to help children born with this ear deformity, said co-lead author Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell.

"A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer," Spector said.

Replacement ears are usually constructed with materials that have a Styrofoam-like consistency, or sometimes, surgeons build ears from a patient’s harvested rib. This option is challenging and painful for children, and the ears rarely look completely natural or perform well, Spector said.

To make the ears, Bonassar and colleagues started with a digitized 3-D image of a human subject’s ear and converted the image into a digitized “solid” ear using a 3-D printer to assemble a mold.

They injected the mold with collagen derived from rat tails, then added 250 million cartilage cells from the ears of cows. The resulting Cornell-developed, high-density gel has a consistency similar to Jell-O when the mold is removed. The collagen serves as a scaffold on which cartilage can grow.

The process is also fast, Bonassar added: “It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted.”

The incidence of microtia, a condition in which the external ear is not fully developed, varies from almost 1 to more than 4 per 10,000 births each year. Many children born with microtia have an intact inner ear but experience hearing loss because the external structure is missing.

Bonassar and Spector have been collaborating on bioengineered human replacement parts since 2007. Bonassar has also worked with Weill Cornell neurological surgeon Dr. Roger Härtl on bioengineered disc replacements using some of the same techniques demonstrated in the PLOS One study.

The researchers specifically work on replacement human structures that are primarily made of cartilage — joints, trachea, spine, nose — because cartilage does not need to be vascularized with a blood supply in order to survive.

They are now looking at ways to expand populations of human ear cartilage cells in the laboratory so that these cells can be used in the mold, instead of cow cartilage.

"Using human cells, specifically those from the same patient, would reduce any possibility of rejection," Spector said.

He added that the best time to implant a bioengineered ear would be when a child is about 5 or 6 years old; at that age, ears are 80 percent of their adult size.

If all future safety and efficacy tests work out, it might be possible to try the first human implant of a Cornell bioengineered ear in as little as three years, Spector said.

Filed under microtia artificial ear ear replacement implants cartilage medicine neuroscience science

57 notes

Smoking damages mouse brains

Cigarette smoke damages the lungs, but it also wreaks havoc in the brain, a study in mice suggests. Signs of Alzheimer’s disease increased in the brains of animals that breathed cigarette smoke for four months, scientists report February 19 in Nature Communications.

The relationship between smoking and Alzheimer’s in people is murky. Some evidence from the 1990s suggested that smoking actually protected people against Alzheimer’s, presumably by stimulating nicotine-detecting brain cells. More recent studies have found that smoking ups the odds of the disease.

To see what cigarettes do to the brain, scientists led by Claudio Soto of the University of Texas Medical School at Houston turned to mice. In animals bred to show signs of Alzheimer’s, cigarette smoke (one cigarette’s worth in air the mouse breathed for an hour, five days a week) worsened aspects of the disease. Compared with mice that weren’t exposed, mice exposed to smoke had several signs of Alzheimer’s: they had more amyloid beta plaques, a higher load of abnormal tau protein and more severe inflammation in their brains.

The scientists don’t know yet how cigarette smoke causes these changes, or whether a similar process happens in people.

Filed under alzheimer's disease cigarette smoke brain brain cells amyloid beta animal studies neuroscience science

80 notes

Scientists make older adults less forgetful in memory tests
Scientists at Baycrest Health Sciences’ Rotman Research Institute (RRI) and the University of Toronto’s Psychology Department have found compelling evidence that older adults can eliminate forgetfulness and perform as well as younger adults on memory tests.
Scientists used a distraction learning strategy to help older adults overcome age-related forgetting and boost their performance to that of younger adults. Distraction learning sounds like an oxymoron, but a growing body of science is showing that older brains are adept at processing irrelevant and relevant information in the environment, without conscious effort, to aid memory performance.
“Older brains may be be doing something very adaptive with distraction to compensate for weakening memory,” said Renée Biss, lead investigator and PhD student. “In our study we asked whether distraction can be used to foster memory-boosting rehearsal for older adults. The answer is yes!”
“To eliminate age-related forgetfulness across three consecutive memory experiments and help older adults perform like younger adults is dramatic and to our knowledge a totally unique finding,” said Lynn Hasher, senior scientist on the study and a leading authority in attention and inhibitory functioning in younger and older adults. “Poor regulation of attention by older adults may actually have some benefits for memory.”
The findings, published online in Psychological Science ahead of print publication, have intriguing implications for designing learning strategies for mature, older students. They also suggest equipping senior housing with relevant visual cues throughout the living environment, which could serve as rehearsal opportunities for remembering things like an upcoming appointment or medications to take, even if the cues aren’t consciously attended to.

Filed under cognitive decline memory learning psychology neuroscience science

58 notes

Hypnosis study unlocks secrets of unexplained paralysis

Hypnosis has begun to attract renewed interest from neuroscientists interested in using hypnotic suggestion to test predictions about normal cognitive functioning.

To demonstrate the future potential of this growing field, guest editors Professor Peter Halligan from the School of Psychology at Cardiff University and David A. Oakley of University College London, brought together leading researchers from cognitive neuroscience and hypnosis to contribute to this month’s special issue of the international journal, Cortex.

The issue illustrates how methodological and theoretical advances, using hypnotic suggestion, can return novel and experimentally verifiable insights for the neuroscience of consciousness and motor control. The research also includes novel brain imaging studies, which address sceptics’ concerns regarding the subjective reality and comparability of hypnotically suggested phenomena that previously depended on subjects’ largely unverifiable report and behaviour.

Halligan and Oakley also contribute to a new and revealing brain imaging study in the special issue that explores the brain systems involved in hypnotic paralysis. This research follows their earlier pioneering work on hypnotic leg paralysis reported in the Lancet in 2000.

Patients with “functional” or “psychogenic” conversion disorders, who present symptoms such as paralysis without identifiable organic cause, are clinically challenging. They comprise between 30 and 40% of patients attending neurology outpatient clinics and place a huge strain on public health services.

Professor Halligan of Cardiff University’s School of Psychology said: “This new study, working with colleagues at the Institute of Psychiatry in London, suggests that hypnosis can provide insights into the brain systems involved in patients who display symptoms of neurological illness, but without evidence of brain damage. New insights show that symptoms experienced by patients with functional or dissociative conversion disorders (e.g. medically unexplained paralysis) can be simulated using targeted hypnotic suggestion.

"In this study we monitored the brain activations of healthy volunteers who, following hypnotic induction, experienced paralysis-like symptoms that could be turned ‘on’ and ‘off’. The suggestion resulted in subjects being unable to move a joystick, together with a realistic and compelling experience of being unable to move and control their left hand despite trying.

"When compared to the completed movements, the suggested paralysis condition revealed increased activity in brain regions known to be active during motor planning and intention to move – and also brain areas involved in response selection and inhibition."

Comparing symptoms reported by conversion disorder patients with those produced by ‘paralysis’ suggestions in hypnosis has revealed similar patterns of brain activation associated with attempted movement of the affected limb.

These findings could inform future studies of the brain mechanisms underpinning limb paralysis in patients with conversion disorders. More importantly they could lead to effective treatments.

(Source: cardiff.ac.uk)

Filed under brain cognitive function hypnosis hypnotic paralysis brain activation neuroscience science

38 notes

More Than Just Looking – A Role of Tiny Eye Movements Explained
Tübingen researcher learns how the brain keeps an eye on the periphery even when focusing on one object.
Have you ever wondered whether it’s possible to look at two places at once? Because our eyes have a specialized central region with high visual acuity and good color vision, we must always focus on one spot at a time in order to see our environment. As a result, our eyes constantly jump back and forth as we look around.
But what if – when you are looking at an object – your brain also allowed you to “look” somewhere else at the same time, out of the corner of your eye, as it were? Now, a scientist at the Werner Reichardt Centre for Integrative Neuroscience (CIN), which is funded by the German Excellence initiative at Tübingen University, has found a possible explanation for how this might happen.
Ziad Hafed, the leader of the Physiology of Active Vision Junior Research Group at CIN, wondered about the role of the microsaccade, a type of tiny eye movement that occurs when we fix our gaze on something. “Microsaccades are sort of enigmatic,” Hafed says. They are movements of the eye which occur at exactly the moment when we are trying to look at something steadily – i.e., when we are trying to prevent our eyes from moving.
It was long thought that microsaccades were nothing but random, inconsequential tics, but Hafed wondered whether the mere unconscious preparation to generate these tiny eye movements can alter visual perception and effectively allow you to “see” out of the corner of your eye. He found that before generating a microsaccade, the brain reorganizes its visual processing to alter how you perceive things. “Imagine that you are the coach of a football team,” Hafed says. “You would normally ask your defenders to spread out across the field in order to provide good coverage during match play. However, in preparation for an upcoming corner kick by your opposing team, you would reorganize your defenders, assigning two of them to become temporary goalkeepers and protect the goal. What I found was evidence for a similar strategy in the visual brain before microsaccades,” says Hafed. That is, in preparation for generating a tiny microscopic eye movement, the brain – the “coach” – causes a subtle reorganization of the visual system, and thus alters how you might see out of the corner of your eyes (see diagram).
Using a series of experiments on human participants, coupled with computational modeling of the human visual system, Hafed asked participants to fix their attention on a spot that appeared on a screen in front of them, while he carefully measured their tiny microscopic eye movements. Hafed then probed the participants’ ability to look at two places at once by testing their peripheral vision. He found that in preparation to generate a tiny microsaccade, the participants demonstrated remarkable changes in their ability to process visual inputs. In the periphery, tiny microscopic eye movements effectively improved the capacity to direct visual input – from around where gaze is fixed – towards the brain. Hafed’s results, which are described in the leading science journal Neuron, thus demonstrate an important functional role for these tiny, microscopic, and “enigmatic” movements of the eye in helping us to perceive our environment.
Hafed’s results not only help us understand a previously puzzling phenomenon; there are also potentially wide-ranging applications arising from this work. In particular, this work can affect how we design computer and machine user interfaces. For example, using knowledge about the whole range of eye movements we constantly make, including microscopic ones, our future “smart user interfaces” can ensure that things likely to attract our attention are not displayed in places where they can be distracting. Conversely, if we need to locate something that should attract our attention – a warning light in a control room, for instance – this same approach will also be useful. As Hafed put it, “eye movements would essentially be a window on our minds.”

Filed under visual perception microsaccades eye movements peripheral vision neuroscience science

105 notes

First snaps made of fetal brains wiring themselves up

The first images have been captured of the fetal brain at different stages of its development. The work gives a glimpse of how the brain’s neural connections form in the womb, and could one day lead to prenatal diagnosis and treatment of conditions such as autism and schizophrenia.

We know little about how the fetal brain grows and functions – not only because it is so small, says Moriah Thomason of Wayne State University in Detroit, but also because “a fetus is doing backflips as we scan it”, making it tricky to get a usable result.

Undeterred, Thomason’s team made a series of functional magnetic resonance imaging (fMRI) scans of the brains of 25 fetuses between 24 and 38 weeks old. Each scan lasted just over 10 minutes, and the team kept only the images taken when the fetus was relatively still.

The researchers used the scans to look at two well-understood features of the developing brain: the spacing of neural connections and the time at which they developed. As expected, the two halves of the fetal brain formed denser and more numerous connections between themselves from one week to the next. The earliest connections tended to appear in the middle of the brain and spread outward as the brain continued to develop.

Thomason says that the team is now scanning up to 100 fetuses at different stages of development. These scans might allow them to start to see variation between individuals. They are also applying algorithms to the scanning program that will help correct for the fetus’s movements, so fewer scans will be needed in future.

Once they understand what a normal fetal brain looks like, the researchers hope to study brains that are forming abnormal connections. Disorders such as schizophrenia or autism, for instance, are believed to start during development and might be due to faulty brain connections. Understanding the patterns that characterise these diseases might one day allow physicians to spot early warning signs and intervene sooner. Just as importantly, such images might improve our understanding of how these conditions develop in the first place, Thomason says.

Emi Takahashi of Boston Children’s Hospital says that one way to do this would be to follow a large group of children after they are born, and look back at the prenatal scans of those who later develop a brain disorder. Although she says the study is a very good first step, understanding the miswiring of the brain is so difficult that it may be some time before the results of such work become useful in clinical settings.

(Source: newscientist.com)

Filed under brain brain development fetal brain neuroimaging neural connections neuroscience science
