Neuroscience

Articles and news from the latest research reports.

Posts tagged language

Most Neanderthals were right handed, just like modern humans, and this tendency suggests that they may have had the capacity for speech, new research claims.

A new investigation by Professor Frayer and an international team led by Virginie Volpato of the Senckenberg Institute in Frankfurt, Germany, has confirmed Regourdou’s right-handedness by looking more closely at the robustness of the arms and shoulders, and comparing it with scratches on his teeth.

'We’ve been studying scratch marks on Neanderthal teeth, but in all cases they were isolated teeth, or teeth in mandibles not directly associated with skeletal material,' said Professor Frayer.

'This is the first time we can check the pattern that’s seen in the teeth with the pattern that’s seen in the arms. We did more sophisticated analysis of the arms — the collarbone, the humerus, the radius and the ulna — because we have them on both sides. And we looked at cortical thickness and other biomechanical measurements. All of them confirmed that everything was more robust on the right side than the left.'

Filed under Regourdou handedness language laterality neanderthals neuroscience science speech brain lateralisation

Language, Emotion and Well-Being Explored

ScienceDaily (Aug. 23, 2012) — We use language every day to express our emotions, but can this language actually affect what and how we feel? Two new studies from Psychological Science, a journal of the Association for Psychological Science, explore the ways in which the interaction between language and emotion influences our well-being.

Putting Feelings into Words Can Help Us Cope with Scary Situations

Katharina Kircanski and colleagues at the University of California, Los Angeles investigated whether verbalizing a current emotional experience, even when that experience is negative, might be an effective method for treating people with spider phobias. In an exposure therapy study, participants were split into four experimental groups and instructed to approach a spider over several consecutive days.

One group was told to put their feelings into words by describing their negative emotions about approaching the spider. Another group was asked to ‘reappraise’ the situation by describing the spider using emotionally neutral words. A third group was told to talk about an unrelated topic (things in their home) and a fourth group received no intervention. Participants who put their negative feelings into words were most effective at lowering their levels of physiological arousal. They were also slightly more willing to approach the spider. The findings suggest that talking about your feelings — even if they’re negative — may help you to cope with a scary situation.

Unlocking Past Emotion: The Verbs We Use Can Affect Mood and Happiness

Our memory for events is influenced by the language we use. When we talk about a past occurrence, we can describe it as ongoing (I was running) or already completed (I ran). To investigate whether using these different wordings might affect our mood and overall happiness, Will Hart of the University of Alabama conducted four experiments in which participants either recalled or experienced a positive, negative, or neutral event. They found that people who described a positive event with words that suggested it was ongoing felt more positive. And when they described a negative event in the same way, they felt more negative.

The authors conclude that one potential way to improve mood could be to talk about negative past events as something that already happened as opposed to something that was happening.

Source: Science Daily

Filed under science neuroscience psychology brain language emotion

Singing mice (Scotinomys teguina) are not your average lab rats. Their fur is tawny brown instead of the common white albino strain; they hail from the tropical cloud forests in the mountains of Costa Rica; and, as their name hints, they use song to communicate.

University of Texas at Austin researcher Steven Phelps is examining these unconventional rodents to gain insights into the genes that contribute to the unique singing behavior—information that could help scientists understand and identify genes that affect language in humans.

The song of the singing mouse is a rapid-fire string of high-pitched chirps called trills mostly used by males in dominance displays and to attract mates. Up to 20 chirps are squeaked out per second, sounding similar to birdsong to untrained ears. But unlike birds, the mice generally stick to a song made up of only a single note.

“They sound kind of soft to human ears, but if you slow them down by about three-fold they are pretty dramatic,” said Phelps.
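The three-fold slowdown Phelps describes can be reproduced on a recording with nothing more than Python's standard wave module. This is a minimal sketch, assuming a WAV file of the trills; the function and file names are illustrative:

```python
import wave

def slow_down(src_path: str, dst_path: str, factor: int = 3) -> None:
    """Write a copy of a WAV file that plays `factor` times slower.

    Lowering the declared frame rate stretches playback time and lowers
    every pitch by the same factor, which is the trick for bringing
    very high-pitched trills into a comfortable listening range.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    with wave.open(dst_path, "wb") as dst:
        # Same samples, declared at a third of the original rate.
        dst.setparams(params._replace(framerate=params.framerate // factor))
        dst.writeframes(frames)
```

Note that this simple approach couples time and pitch (like slowing a tape); keeping the pitch while stretching time would require a phase-vocoder style algorithm instead.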

Filed under animals biology communication language deficits neuroscience science singing mice FOXP2 language

Making it easier to learn to read: Dyslexia caused by signal processing in the brain

August 06, 2012

To participate successfully in life, it is important to be able to read and write. Nevertheless, many children and adults have difficulties in acquiring these skills, and the reason is not always obvious: they suffer from dyslexia, which can have a variety of symptoms. Thanks to research carried out by Begoña Díaz and her colleagues at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, a major step forward has been made in understanding the cause of dyslexia. The scientists have discovered an important neural mechanism underlying dyslexia and shown that many difficulties associated with dyslexia can potentially be traced back to a malfunction of the medial geniculate body in the thalamus. The results provide an important basis for developing potential treatments.

This figure compares the situation in the brain of dyslexics and the control group. The blue area depicts the auditory cortices and the green area represents the medial geniculate bodies. © MPI for Human Cognitive and Brain Sciences

People who suffer from dyslexia have difficulties with identifying speech sounds in spoken language. For example, while most children are able to recognise whether two words rhyme even before they go to school, dyslexic children often cannot do this until late primary school age. Those affected suffer from dyslexia their whole lives. However, there are also cases where people manage to compensate for their dyslexia. “This suggests that dyslexia can be treated. We are therefore trying to find the neural causes of this learning disability in order to create a basis for improved treatment options,” says Díaz.

Between five and ten percent of the world’s children suffer from dyslexia, yet very little is known about its causes. Even though those affected do not lack intelligence or schooling, they have difficulties in reading, understanding and explaining individual words or entire texts. The researchers showed that a major cause of the impairment in dyslexic adults is a malfunction in a structure that transfers auditory information from the ear to the cortex: the medial geniculate body in the auditory thalamus does not process speech sounds correctly. “This malfunction at a low level of language processing could percolate through the entire system. This explains why the symptoms of dyslexia are so varied,” says Díaz.

Under the direction of Katharina von Kriegstein, the researchers conducted two experiments in which volunteers had to perform various speech comprehension tasks. When affected individuals performed tasks that required recognising speech sounds, as compared with recognising the voices that pronounced the same sounds, magnetic resonance imaging (MRI) recordings showed abnormal responses in the area around the medial geniculate body. In contrast, no differences were apparent between controls and dyslexic participants when the tasks involved merely listening to the speech sounds without performing a specific task. “The problem, therefore, has nothing to do with sensory processing itself, but with the processing involved in speech recognition,” says Díaz. No differences could be ascertained between the two test groups in other areas of the auditory signalling path.

The findings of the Leipzig scientists combine various theoretical approaches, which deal with the cause of dyslexia and, for the first time, bring together several of these theories to form an overall picture. “Recognising the cause of a problem is always the first step on the way to a successful treatment,” says Díaz. The researchers’ next project is now to study whether current treatment programmes can influence the medial geniculate body in order to make learning to read easier for everyone in the long term.

Filed under brain dyslexia neuroscience psychology science language

Irony seen through the eye of MRI

A French team has shown that the activation of the ToM neural network increases when an individual is reacting to ironic statements. Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.

In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this science, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.

Filed under science neuroscience brain psychology theory of mind language linguistics pragmatics MRI neuroimaging communication

Human beings have the ability to convert complex phenomena into a one-dimensional sequence of letters and put it down in writing. In this process, keywords serve to convey the content of the text. How letters and words correlate with the subject of a text is something Eduardo Altmann and his colleagues from the Max Planck Institute for the Physics of Complex Systems have studied with the help of statistical methods. They discovered that what denotes keywords is not the fact that they appear very frequently in a given text. It is that they are found in greater numbers only at certain points in the text. They also discovered that relationships exist between sections of text which are distant from each other, in the sense that they preferentially use the same words and letters.

Read more: In search of the key word: Bursts of certain words within a text are what make them keywords
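The burst signature described above can be computed directly: score each word by how unevenly its occurrences are spaced, rather than by how often it appears. This is an illustrative sketch, not the authors' actual method (their analysis uses more refined statistics); it uses the coefficient of variation of the gaps between successive occurrences, which is roughly 1 for a word sprinkled at random and noticeably larger for a word that arrives in bursts:

```python
from collections import defaultdict
from statistics import mean, pstdev

def burstiness(text: str) -> dict:
    """Score each word by how unevenly its occurrences are spread.

    Returns, per word, the coefficient of variation of the gaps
    between successive occurrences: evenly spaced function words
    score near 0, randomly scattered words near 1, and clustered
    (bursty) candidate keywords well above 1.
    """
    words = text.lower().split()
    positions = defaultdict(list)
    for i, w in enumerate(words):
        positions[w].append(i)
    scores = {}
    for w, pos in positions.items():
        if len(pos) < 3:  # need a few gaps for a stable estimate
            continue
        gaps = [b - a for a, b in zip(pos, pos[1:])]
        scores[w] = pstdev(gaps) / mean(gaps)
    return scores
```

On a real document, sorting words by this score (rather than by raw frequency) tends to surface topical terms, mirroring the finding that keywords are marked by bursts, not by sheer count.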

Filed under science neuroscience brain psychology mathematics semantics language linguistics statistics correlation text analysis

British researchers create robot that can learn simple words by conversing with humans

In an attempt to replicate the early experiences of infants, researchers in England have created a robot that can learn simple words in minutes just by having a conversation with a human.

The three-foot-tall robot, named DeeChee, was built to produce any syllable in the English language. But it knew no words at the outset of the study, speaking only babble phrases like “een rain rain mahdl kross.”

During the experiment, a human volunteer attempted to teach the robot simple words for shapes and colors by using them repeatedly in regular speech.
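The learning setup lends itself to a toy model: words the volunteer repeats often ("red", "square") stand out statistically against one-off babble. This is a purely illustrative sketch of frequency-based word adoption, not DeeChee's actual learning algorithm; the function name and threshold are assumptions:

```python
from collections import Counter

def learn_words(utterances: list, threshold: int = 3) -> set:
    """Toy word-learner: adopt any token heard at least `threshold`
    times across a teaching session, mimicking how repeated shape
    and colour names stand out from words uttered only in passing."""
    counts = Counter(tok for u in utterances for tok in u.lower().split())
    return {tok for tok, n in counts.items() if n >= threshold}
```

A frequency cutoff like this is the simplest possible salience signal; the appeal is that nothing needs to be hand-labelled, since repetition alone separates the teaching targets from the rest of the speech stream.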

Filed under science neuroscience language psychology

Prototype Device Translates Sign Language

ScienceDaily (June 1, 2012) — Too often, communication barriers exist between those who can hear and those who cannot. Sign language has helped bridge such gaps, but many people are still not fluent in its motions and hand shapes.

During the past semester, students in UH’s engineering technology and industrial design programs teamed up to develop the concept and prototype for MyVoice, a device that reads sign language and translates its motions into audible words. (Credit: Image courtesy of University of Houston)

Thanks to a group of University of Houston students, the hearing impaired may soon have an easier time communicating with those who do not understand sign language. During the past semester, students in UH’s engineering technology and industrial design programs teamed up to develop the concept and prototype for MyVoice, a device that reads sign language and translates its motions into audible words. Recently, MyVoice earned first place among student projects at the American Society of Engineering Education (ASEE) — Gulf Southwest Annual Conference.

The development of MyVoice was through a collaborative senior capstone project for engineering technology students (Anthony Tran, Jeffrey Seto, Omar Gonzalez and Alan Tran) and industrial design students (Rick Salinas, Sergio Aleman and Ya-Han Chen). Overseeing the student teams were Farrokh Attarzadeh, associate professor of engineering technology, and EunSook Kwon, director of UH’s industrial design program.

MyVoice’s concept focuses on a handheld tool with a built-in microphone, speaker, soundboard, video camera and monitor. It would be placed on a hard surface, where it reads a user’s sign language movements. Once MyVoice processes the motions, it translates the sign language into speech through an electronic voice. Likewise, it would capture a person’s voice and translate the words into sign language, which is projected on its monitor.

The industrial designers researched the application of MyVoice by reaching out to the deaf community to understand the challenges associated with others not understanding sign language. They then designed MyVoice, while the engineering technology students had the arduous task of programming the device to translate motion into sound.

"The biggest difficulty was assembling a database of images of the signs. It involved 200-300 images per sign," Seto said. "The team was ecstatic when the prototype came together."
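How an image database of that kind could drive recognition can be sketched as a nearest-neighbour lookup. This is a toy illustration, not the team's implementation: the feature vectors below are hypothetical stand-ins for processed camera frames, and the function names are invented for the example:

```python
def classify_sign(frame: list, database: dict) -> str:
    """Return the sign label whose stored example is closest to `frame`.

    `frame` is a feature vector extracted from one camera image;
    `database` maps each sign label to a list of stored example
    vectors (the article mentions 200-300 images per sign).
    """
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_label, best = None, float("inf")
    for label, examples in database.items():
        for ex in examples:
            d = dist(frame, ex)
            if d < best:
                best_label, best = label, d
    return best_label
```

With hundreds of examples per sign, a brute-force scan like this gets slow and brittle; a production system would extract robust features and use an indexed or learned classifier, but the principle of matching an incoming frame against labelled examples is the same.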

From its conceptual stage, MyVoice evolved into a prototype that could translate a single phrase: “A good job, Cougars.”

"This wasn’t just a project we did for a grade," said Aleman, who just graduated from UH. "While we were designing and developing it, it turned into something very personal. When we got to know members of the deaf community and really understood their challenges, it made MyVoice very important to all of us."

Since MyVoice’s creation and first place prize at the ASEE conference, all of the team members have graduated. Still, Aleman said that the project is not history.

"We got it to work, but we hope to work with someone to implement this as a product," Aleman said. "We want to prove to the community that this will work for the hearing impaired."

"We are proud of such a contribution to society through MyVoice, which breaks the barrier between the deaf community and society at large," added Attarzadeh.

Source: Science Daily

Filed under science neuroscience brain psychology language

The innate ability to learn language

March 26, 2012 By Angela Herring

All human languages contain two levels of structure, said Iris Berent, a psychology professor in Northeastern’s College of Science. One is syntax, or the ordering of words in a sentence. The other is phonology, or the sound structure of individual words.

Berent — whose research focuses on the phonological structure of language — examines the nature of linguistic competence, its origins and its interaction with reading. While previous studies have all centered on adult language acquisition, she is now working with infants to address two core questions.

“First,” she said, “do infants have the capacity to encode phonological rules? And, second, are some phonological rules innate?”

To address the first issue, Berent collaborated with neuroscientists Janet Werker, of the University of British Columbia, and Judit Gervain, of the Paris-based Centre National de la Recherche Scientifique.

By utilizing an optical brain imaging technique called near-infrared spectroscopy, or NIRS, the researchers found that newborns have the capacity to learn linguistic rules. This finding — published this month in the Journal of Cognitive Neuroscience — suggests that the neural foundations of language acquisition are present at birth.

Armed with this knowledge, Berent has begun conducting behavioral studies on more than two dozen infants to explore whether linguistic rules are innate or entirely learned.

“We want to see whether infants prefer certain sound patterns to others even if neither occurs in their language,” Berent explained. “For instance, we know that human languages prefer sequences such as bnog over bdog. Would six-month-old infants show this preference even if their language (English) does not include either sequence?”

For the study, each child is placed in front of a video screen that displays an image pulsing in coordination with rotating sounds, such as “bnog” and “bdog.” Berent hypothesized that infants would look longer at the video screen when they hear sounds to which they are innately biased.

Preliminary results have upheld the hypothesis, but Berent is still accepting new subjects for the study. Her entire research program forms part of a new book called “The Phonological Mind,” which will be published by Cambridge University Press this year.

Provided by Northeastern University

Source: medicalxpress.com

Filed under science neuroscience psychology brain language
