Posts tagged psychology

The Marketplace in Your Brain: Neuroscientists have found brain cells that compute value. Why are economists ignoring them?
In 2003, amid the coastal greenery of the Winnetu Oceanside Resort, on Martha’s Vineyard, a group of about 20 scholars gathered to kick-start a new discipline. They fell, broadly, into two groups: neuroscientists and economists. What they came to talk about was a collaboration between the two fields, which a few researchers had started to call “neuroeconomics.” The hope was that insights about brain anatomy, combined with economic models of neurons in action, would yield a new understanding of how people make decisions about money and life.
A photo taken during one of those sun-dappled days captures the group posed and smiling around a giant chess set on the resort lawn. Pawns were about two feet tall, kings and queens about four feet. Informally, the neuroscientists began to play the black pieces. The economists began to play white.
Today, nearly a decade later, a few black pawns have moved down the board. But the white pieces have stayed put. “I would say that neuroeconomics is about 90 percent neuroscience and 10 percent economists,” says Colin F. Camerer, a professor of behavioral finance and economics at the California Institute of Technology and one of the prime movers in the new field. “We’ve taken a lot of mathematical models from economics to help describe what we see happening in the brain. But economists have been a lot slower to use any of our ideas.”
New research published in Psychological Science, a journal of the Association for Psychological Science, examines the nuanced relationship between language and different types of perception.
Bilingual Infants Can Tell Unfamiliar Languages Apart
Speaking more than one language can improve our ability to control our behavior and focus our attention, recent research has shown. But are there any advantages for bilingual children before they can speak in full sentences? We know that bilingual children can tell if a person is speaking one of their native languages or the other, even when there is no sound, by watching the speaker’s mouth for visual cues. But Núria Sebastián-Gallés of Universitat Pompeu Fabra and colleagues wanted to know whether bilingual infants could also do this with two unfamiliar languages. They studied 8-month-old infants, half of whom lived in either Spanish- or Catalan-speaking households and half of whom lived in Spanish-Catalan bilingual households. The researchers looked at whether the infants could discriminate between English and French, two unfamiliar languages, using only visual cues. They found that the bilingual infants could tell the difference between the two languages, while the infants who lived in single-language households could not. These findings suggest that infants who are immersed in bilingual environments are more sensitive to the differences in visual cues associated with the sounds of various languages.
Lead author: Núria Sebastián-Gallés
Skilled Deaf Readers Have an Enhanced Perceptual Span in Reading
Though people born deaf are better able to use information from peripheral vision than those who can hear, they have a harder time learning to read. Researchers have proposed that the extra information coming in could distract from, rather than enhance, the process of reading. But no research has actually compared visual attention in reading between hearing and deaf readers. In a new study, Nathalie Bélanger of the University of California, San Diego, and colleagues investigated this issue by measuring the perceptual span, or the number of letter spaces used when reading, of skilled deaf readers, less-skilled deaf readers, and hearing readers. The experimenters manipulated the number of letter spaces that the participants saw while reading text on a screen. They found that, compared to the other two groups, skilled deaf readers read fastest when they were given the largest number of letter spaces, showing that they had the largest perceptual span. Even so, they read just as fast as skilled hearing readers. Contrary to previous hypotheses, these findings suggest that enhanced visual attention and perceptual span are not the cause of reading difficulties common among deaf individuals.
Lead author: Nathalie N. Bélanger
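The perceptual-span measurement described above is typically done with a gaze-contingent "moving window": letters outside a window centred on the reader's fixation are replaced by a mask, and the window size is varied to see when reading slows. Here is a minimal toy sketch of that masking step; the function name, mask character, and parameter values are illustrative assumptions, not the study's actual materials.

```python
def moving_window(text: str, fixation: int, span: int) -> str:
    """Mask every letter outside a window of `span` characters centred on
    the current fixation index. Spaces are preserved so word boundaries
    stay visible, as in typical moving-window displays."""
    half = span // 2
    out = []
    for i, ch in enumerate(text):
        if abs(i - fixation) <= half or ch == " ":
            out.append(ch)        # inside the window (or a space): keep
        else:
            out.append("x")       # outside the window: mask the letter
    return "".join(out)

# Fixating on the 'b' of "brown" with a 7-character window:
print(moving_window("the quick brown fox jumps", fixation=10, span=7))
# → "xxx xxxck browx xxx xxxxx"
```

In a real experiment the window would be re-centred on every eye-tracker fixation; here the fixation index is simply passed in by hand.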
Detection of Appearing and Disappearing Objects in Complex Acoustic Scenes
The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an “early warning device,” rapidly directing attention to new events. Here, we investigate listeners’ sensitivity to changes in complex acoustic scenes—what makes certain events “pop-out” and grab attention while others remain unnoticed? We use artificial “scenes” populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack semantic attributes, which may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between “appear” and “disappear” events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling; response times are short, with little effect of scene-size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene-size; response times are slow, and even when change is detected, the changed component is rarely successfully identified. We also measured change detection performance when a noise or silent gap was inserted at the time of change or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection. 
Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used, suggesting a robust perceptual representation of item appearance.
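The artificial scenes described above, multiple pure tones, each with its own frequency and amplitude-modulation rate, and an "appear" event at the midpoint, can be sketched in a few lines of numpy. All frequencies, modulation rates, durations, and the function name below are illustrative choices, not the paper's actual stimulus parameters.

```python
import numpy as np

def make_scene(freqs_hz, am_rates_hz, dur_s=2.0, sr=16000, appear_idx=None):
    """Sum of amplitude-modulated pure tones. The component at
    `appear_idx` (if given) is silent for the first half of the scene,
    then appears — an "appear" change event at the midpoint."""
    t = np.arange(int(dur_s * sr)) / sr
    scene = np.zeros_like(t)
    for i, (f, am) in enumerate(zip(freqs_hz, am_rates_hz)):
        carrier = np.sin(2 * np.pi * f * t)
        envelope = 0.5 * (1 + np.sin(2 * np.pi * am * t))  # AM in [0, 1]
        component = carrier * envelope
        if i == appear_idx:
            component[: len(t) // 2] = 0.0   # silent, then appearing
        scene += component
    return scene / len(freqs_hz)             # normalise by scene size

# A three-component scene in which the 990 Hz component appears halfway:
scene = make_scene([440, 660, 990], [3, 5, 7], appear_idx=2)
```

A "disappear" scene would simply zero the second half of the chosen component instead; growing `freqs_hz` grows the scene size that the study manipulated.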
New research methods reveal that babies and young children learn by rationally testing hypotheses, analyzing statistics and doing experiments much as scientists do
Very young children’s learning and thinking are strikingly similar to much learning and thinking in science, according to Alison Gopnik, professor of psychology and affiliate professor of philosophy at the University of California, Berkeley. Gopnik’s findings are described in the September 28 issue of the journal Science. She spoke about her work in a video briefing with NSF.
Baby Laughter Survey
At the Birkbeck Babylab, psychologists study how babies learn about the world. We believe that the laughter of babies can tell us a great deal about what they are thinking and how much they understand about the world. You can’t laugh if you don’t get the joke, and neither can your baby. We believe that what your baby laughs at will change as they grow up. Therefore, we are conducting the world’s first global survey of baby laughter. We would like to know what kinds of things your baby finds funny. Who are the funniest people? What are the funniest songs, sounds, toys and games? What are the funniest parts of your baby’s daily routine?
The survey is anonymous, confidential and takes about 15-20 minutes to complete.
Parkinson’s could be detected by telephone call
A simple telephone call could help spot the early signs of Parkinson’s disease by tracking subtle changes in patients’ voices years before more severe symptoms emerge, researchers claim.
New technology being developed in America analyses tremors, breathiness and other weaknesses in people’s voices which are believed to be one of the condition’s earliest symptoms.
Experts at the Massachusetts Institute of Technology claim that their computer programme can pick out Parkinson’s sufferers with 99 per cent accuracy simply by analysing their speech.
Dr Max Little, a British researcher who is leading the initiative at MIT, now hopes to determine whether the same results could be produced from a patient speaking over the telephone.
He is recruiting Parkinson’s patients and healthy volunteers to take part in a three-minute telephone call in which they say “ah”, speak some sentences and answer a few questions. With that data, he said, the system could be programmed to diagnose people remotely, allowing earlier treatment.
He said: “Science tells us voice impairment might be an early sign of Parkinson’s. It sounds counterintuitive as Parkinson’s is a movement disorder but the voice is a form of movement.
“Neurologists look at changes in the ability to move, which is done with the limbs, but we are looking in the vocal organs – the sounds that come out of the mouth. We are fairly confident we can detect the disease over the telephone.”
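As a rough illustration of the kind of acoustic measure such a system might track in a sustained “ah”, here is a toy estimate of jitter, the cycle-to-cycle variation in pitch period, computed from synthetic vowels. The function and its simple zero-crossing method are my own simplified sketch, not the MIT team’s algorithm.

```python
import numpy as np

def estimate_jitter(signal, sr):
    """Estimate relative jitter from the upward zero crossings of a
    roughly periodic voiced signal: mean absolute difference between
    consecutive pitch periods, divided by the mean period."""
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    periods = np.diff(crossings) / sr        # seconds per glottal cycle
    if len(periods) < 2:
        return 0.0
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))

sr = 16000
t = np.arange(sr) / sr

# A steady 120 Hz "voice" versus one whose pitch wobbles by ±4 Hz,
# built by integrating the instantaneous frequency into a phase track:
steady = np.sin(2 * np.pi * 120 * t)
f_inst = 120 + 4 * np.sin(2 * np.pi * 6 * t)
wobbly = np.sin(2 * np.pi * np.cumsum(f_inst) / sr)

print(estimate_jitter(steady, sr), estimate_jitter(wobbly, sr))
```

A real screening system would combine many such dysphonia measures (jitter, shimmer, breathiness indices) in a trained classifier rather than thresholding any single one.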
Obesity-Related Hormone Discovered in Fruit Flies
Researchers have discovered in fruit flies a key metabolic hormone thought to be the exclusive property of vertebrates. The hormone, leptin, is a nutrient sensor, regulating energy intake and output and ultimately controlling appetite. As such, it is of keen interest to researchers investigating obesity and diabetes on the molecular level. But until now, complex mammals such as mice have been the only models for investigating the mechanisms of this critical hormone. These new findings suggest that fruit flies can provide significant insights into the molecular underpinnings of fat sensing.
“Leptin is very complex,” said Akhila Rajan, first author on the paper and a postdoctoral researcher in the lab of Norbert Perrimon, James Stillman Professor of Developmental Biology at Harvard Medical School. “These types of hormones acquire more and more complex function as they evolve. Here in the fly we’re seeing leptin in its most likely primitive form.”
Concordia student collaborates with Australian neuroscientist to create music based on raw emotions
What does anger sound like? What music does sorrow imply? Human emotion is being given a new soundtrack thanks to an exciting new collaboration between art and neuroscience.
Concordia University researcher Erin Gee is taking feelings to a new level by tapping directly into the human brain, delivering music powered purely by the human body and its emotions. Using data collected from physiological displays of emotion, Gee is creating a software and hardware system that incorporates a set of experimental musical instruments that will perform a symphony of sentiments.
This research could have significant therapeutic benefits for those who have difficulty expressing emotion. Individuals with autism disorders, for example, often struggle to understand the emotions of others. Gee’s robotic technology could be used to teach them how to identify feelings by externalising and exaggerating them into such forms as music.
Cogmed Working Memory Training: Does it Actually Work? The Debate Continues…
A target article in the Journal of Applied Research in Memory and Cognition concludes that evidence does not support the claims of Cogmed Working Memory Training. Additional experts weigh in with commentary papers in response.
Helping children achieve their full potential in school is of great concern to everyone, and a number of commercial products have been developed to try to achieve this goal. The Cogmed Working Memory Training program is one such example and is marketed to schools and parents of children with attention problems caused by poor working memory. But does the program actually work? The target article in the September issue of the Journal of Applied Research in Memory and Cognition (JARMAC) calls into question Cogmed’s claims of improving working memory and addressing underachievement due to working memory constraints.
The target article’s authors, Zach Shipstead, Kenny L. Hicks, and Randall W. Engle, all of the Georgia Institute of Technology, review the research used to back up the claims of Cogmed. They argue that many of the problem-solving or training tasks are not related to working memory, that many of the attention tasks are unrelated to problems such as ADHD, and that there is limited transfer to real-life manifestations of inattentive behavior. They conclude succinctly: “The only unequivocal statement that can be made is that Cogmed will improve performance on tasks that resemble Cogmed training.”
How attention helps you remember: New study finds long-overlooked cells help the brain respond to visual stimuli
A new study from MIT neuroscientists sheds light on a neural circuit that makes us likelier to remember what we’re seeing when our brains are in a more attentive state.
The team of neuroscientists found that this circuit depends on a type of brain cell long thought to play a supporting role, at most, in neural processing. When the brain is attentive, those cells, called astrocytes, relay messages alerting neurons of the visual cortex that they should respond strongly to whatever visual information they are receiving.
The findings, published this week in the online edition of the Proceedings of the National Academy of Sciences, are the latest in a growing body of evidence suggesting that astrocytes are critically important for processing sensory information, says Mriganka Sur, the Paul E. and Lilah Newton Professor of Neuroscience at MIT and senior author of the paper.