Neuroscience

Articles and news from the latest research reports.

What mechanism generates our fingers and toes?

Dr. Marie Kmita and her research team at the IRCM contributed to a multidisciplinary research project that identified the mechanism responsible for generating our fingers and toes, and revealed the importance of gene regulation in the transition of fins to limbs during evolution. Their breakthrough is published today in the journal Science.

By combining genetic studies with mathematical modeling, the scientists provided experimental evidence supporting a theoretical model for pattern formation known as the Turing mechanism. In 1952, the mathematician Alan Turing proposed equations for pattern formation that describe how two uniformly distributed substances, an activator and a repressor, trigger the formation of complex shapes and structures from initially equivalent cells.
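The hallmark of Turing's idea can be checked with a few lines of linear stability analysis. The sketch below uses illustrative parameter values (not those of the study): without diffusion the uniform state is stable, but once the repressor diffuses much faster than the activator, a band of finite wavelengths starts to grow, seeding a periodic pattern such as digits.

```python
import numpy as np

# Linearised around the homogeneous steady state, a perturbation of
# wavenumber q grows at rate Re(lambda_max(q)), where the lambdas are
# the eigenvalues of  J - q^2 * diag(Du, Dv).
J = np.array([[3.0, -4.0],   # activator: self-enhancing, suppressed by the repressor
              [4.0, -5.0]])  # repressor: produced by the activator, decays
Du, Dv = 1.0, 10.0           # the repressor must diffuse faster than the activator

def growth_rate(q):
    """Largest real part of the eigenvalues at wavenumber q."""
    M = J - q**2 * np.diag([Du, Dv])
    return np.linalg.eigvals(M).real.max()

qs = np.linspace(0.0, 2.0, 400)
rates = np.array([growth_rate(q) for q in qs])

# Without diffusion (q = 0) the uniform state is stable ...
assert growth_rate(0.0) < 0
# ... but diffusion destabilises a band of finite wavelengths:
assert rates.max() > 0
q_star = qs[rates.argmax()]
print(f"fastest-growing wavelength ~ {2 * np.pi / q_star:.2f}")
```

The fastest-growing wavelength sets the spacing of the resulting stripes; in the limb, modulating such parameters (as Hox dosage is proposed to do) would change the number of digits.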

“The Turing model for pattern formation has long remained under debate, mostly due to the lack of experimental data supporting it,” explains Dr. Rushikesh Sheth, postdoctoral fellow in Dr. Kmita’s laboratory and co-first author of the study. “By studying the role of Hox genes during limb development, we were able to show, for the first time, that the patterning process that generates our fingers and toes relies on a Turing-like mechanism.”

In humans, as in other mammals, the embryo’s development is controlled, in part, by “architect” genes known as Hox genes. These genes are essential to the proper positioning of the body’s architecture, and define the nature and function of cells that form organs and skeletal elements.

“Our genetic study suggested that Hox genes act as modulators of a Turing-like mechanism, which was further supported by mathematical tests performed by our collaborators, Dr. James Sharpe and his team,” adds Dr. Marie Kmita, Director of the Genetics and Development research unit at the IRCM. “Moreover, we showed that drastically reducing the dose of Hox genes in mice transforms fingers into structures reminiscent of the extremities of fish fins. These findings further support the key role of Hox genes in the transition of fins to limbs during evolution, one of the most important anatomical innovations associated with the transition from aquatic to terrestrial life.”

Filed under pattern formation mathematical model Turing model limb development evolution neuroscience science


The end of a dogma: Bipolar cells generate action potentials

To make information transmission to the brain reliable, the retina first has to “digitize” the image. Until now, it was widely believed that this step takes place only in the retinal ganglion cells, the output neurons of the retina. Scientists in the lab of Thomas Euler at the University of Tübingen, the Werner Reichardt Centre for Integrative Neuroscience and the Bernstein Center Tübingen have now shown that bipolar cells can also generate “digital” signals. At least three types of mouse bipolar cells showed clear evidence of fast, stereotyped action potentials, so-called “spikes”. These results show that the retina is by no means as well understood as commonly believed.

The retina in our eyes is not just a sheet of light sensors that – like a camera chip – faithfully transmits patterns of light to the brain. Rather, it performs complex computations, extracting several features from the visual stimuli – for example, whether the light intensity at a certain place increases or decreases, in which direction a light source moves, or whether there is an edge in the image. To transmit this information reliably across the optic nerve – which acts as a kind of cable – to the brain, the retina reformats it into a succession of stereotyped action potentials: it “digitizes” it. Classical textbook knowledge holds that this digital code – similar to the one employed by computers – is applied only in the retina’s ganglion cells, which send the information to the brain. Almost all other cells in the retina were believed to employ graded, analogue signals. But the Tübingen scientists have now shown that, in mammals, even the bipolar cells, which sit immediately after the photoreceptors in the retinal network, can work in a “digital” mode as well.
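The analogue-versus-digital distinction can be made concrete with a toy model (not the authors' recordings): a leaky integrate-and-fire unit responds to a weak input with a graded, sub-threshold voltage, but converts a strong input into a train of identical all-or-nothing events whose rate, rather than amplitude, encodes the stimulus.

```python
def lif_spikes(current, dt=1e-3, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire: returns (voltage trace, spike times)."""
    v, trace, spikes = 0.0, [], []
    for step, i_in in enumerate(current):
        v += dt * (-v / tau + i_in)   # leaky integration of the input
        if v >= v_th:                 # threshold crossed:
            spikes.append(step * dt)  # emit a stereotyped, all-or-nothing spike
            v = 0.0                   # reset, ready for the next event
        trace.append(v)
    return trace, spikes

n = 500  # 0.5 s at 1 ms resolution
trace_weak, spikes_weak = lif_spikes([25.0] * n)       # weak, sustained input
trace_strong, spikes_strong = lif_spikes([100.0] * n)  # strong, sustained input

# Weak input: a graded, sub-threshold ("analogue") response, no spikes.
assert not spikes_weak and max(trace_weak) < 1.0
# Strong input: a regular train of identical threshold events ("digital").
assert len(spikes_strong) > 10
```

The spikes in the strong-input case are all the same shape and size – only their timing carries information, which is what makes such a code robust over a long, noisy cable like the optic nerve.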

Using a new experimental technique, Tom Baden and colleagues recorded signals in the synaptic terminals of bipolar cells in the mouse retina. Based on the responses of these cells to simple light stimuli, they were able to separate the neurons into eight different response types. These types closely resembled those expected from physiological and anatomical studies. But surprisingly, the responses of the fastest cell types looked quite different from what was expected: they were fast, stereotyped, and all-or-nothing rather than graded – all typical features of action potentials. Such “digital” signals had occasionally been observed in bipolar cells before, but they were believed to be rare exceptions. Studies from the past two years on the fish retina had already cast doubt on the long-held belief that bipolar cells do not spike. The new data from Tübingen clearly show that these “digital” signals are systematically generated in certain types of mammalian bipolar cells. Action potentials allow much faster and temporally more precise signal transmission than graded potentials, offering advantages in certain situations. The results from Tübingen call a widely held dogma of neuroscience into question – and open up many new questions.

Filed under bipolar cells retina spikes visual system neuron ganglion cells neuroscience science


Better understanding of the cause of Alzheimer’s disease

Alzheimer’s disease is the most common form of dementia, affecting over 35 million people worldwide. It is generally assumed that the clumping of beta-amyloid (Aβ) protein causes neuronal loss in patients. Medication focuses on reducing Aβ42, one of the most common and the most harmful of these proteins. University of Twente PhD student Annelies Vandersteen is refining the current approach. She explains: “The results of my research provide a broader understanding of the processes that lead to Alzheimer’s disease and in this way may help to bring about new medication”.

The Aβ protein occurs in the body in various lengths, ranging from 33 to 49 amino acids. The shorter varieties are regarded as ‘safe’, unlike the longer ones – Aβ42 and longer – which aggregate strongly. The current therapeutic strategy tries to reduce the clumping of Aβ42, and its harmful effects, by limiting its release. Reducing Aβ42 production, however, also results in a rise in Aβ38 levels. Vandersteen comments: “One of the findings of my research is that small amounts of Aβ38 can in fact increase or temper the clumping and toxic effects of longer Aβ proteins. The processes that result in Alzheimer’s disease are determined by the whole spectrum of Aβ proteins. So the picture is far less black and white than has been assumed so far, and the less common forms of Aβ are not nearly as harmless as we thought.”

The study
Vandersteen examined the protein mixtures in the laboratory. She devised a series of experiments based on a computer-calculated hypothesis. The behaviour of the various Aβ proteins and mixtures was studied in detail and described using various biophysical techniques. The influence of the various Aβ proteins and mixtures on neurons was then studied in cell culture.

(Source: alphagalileo.org)

Filed under brain alzheimer's disease beta-amyloid proteins neuroscience science


The split brain: A tale of two halves

In the first months after her surgery, shopping for groceries was infuriating. Standing in the supermarket aisle, Vicki would look at an item on the shelf and know that she wanted to place it in her trolley — but she couldn’t. “I’d reach with my right for the thing I wanted, but the left would come in and they’d kind of fight,” she says. “Almost like repelling magnets.” Picking out food for the week was a two-, sometimes three-hour ordeal. Getting dressed posed a similar challenge: Vicki couldn’t reconcile what she wanted to put on with what her hands were doing. Sometimes she ended up wearing three outfits at once. “I’d have to dump all the clothes on the bed, catch my breath and start again.”

In one crucial way, however, Vicki was better than her pre-surgery self. She was no longer racked by epileptic seizures that were so severe they had made her life close to unbearable. She once collapsed onto the bar of an old-fashioned oven, burning and scarring her back. “I really just couldn’t function,” she says. When, in 1978, her neurologist told her about a radical but dangerous surgery that might help, she barely hesitated. If the worst were to happen, she knew that her parents would take care of her young daughter. “But of course I worried,” she says. “When you get your brain split, it doesn’t grow back together.”

In June 1979, in a procedure that lasted nearly 10 hours, doctors created a firebreak to contain Vicki’s seizures by slicing through her corpus callosum, the bundle of neuronal fibres connecting the two sides of her brain. This drastic procedure, called a corpus callosotomy, disconnects the two sides of the neocortex, the home of language, conscious thought and movement control. Vicki’s supermarket predicament was the consequence of a brain that behaved in some ways as if it were two separate minds.

After about a year, Vicki’s difficulties abated. “I could get things together,” she says. For the most part she was herself: slicing vegetables, tying her shoe laces, playing cards, even waterskiing.

But what Vicki could never have known was that her surgery would turn her into an accidental superstar of neuroscience. She is one of fewer than a dozen ‘split-brain’ patients, whose brains and behaviours have been subject to countless hours of experiments, hundreds of scientific papers, and references in just about every psychology textbook of the past generation. And now their numbers are dwindling.

Through studies of this group, neuroscientists now know that the healthy brain can look like two markedly different machines, cabled together and exchanging a torrent of data. But when the primary cable is severed, information — a word, an object, a picture — presented to one hemisphere goes unnoticed in the other. Michael Gazzaniga, a cognitive neuroscientist at the University of California, Santa Barbara, and the godfather of modern split-brain science, says that even after working with these patients for five decades, he still finds it thrilling to observe the disconnection effects first-hand. “You see a split-brain patient just doing a standard thing — you show him an image and he can’t say what it is. But he can pull that same object out of a grab-bag,” Gazzaniga says. “Your heart just races!”


Filed under split brain corpus callosotomy corpus callosum hemispheres neuroscience psychology science


Countering brain chemical could prevent suicides

Researchers have found the first direct evidence that a chemical in the brain called glutamate is linked to suicidal behavior, offering new hope for efforts to prevent people from taking their own lives.

Writing in the journal Neuropsychopharmacology, Michigan State University’s Lena Brundin and an international team of co-investigators present the first evidence that glutamate is more active in the brains of people who attempt suicide. Glutamate is an amino acid that sends signals between nerve cells and has long been a suspect in the search for chemical causes of depression.

“The findings are important because they show a mechanism of disease in patients,” said Brundin, associate professor of translational science and molecular medicine in MSU’s College of Human Medicine. “There’s been a lot of focus on another neurotransmitter called serotonin for about 40 years now. The conclusion from our paper is that we need to turn some of that focus to glutamate.”

Brundin and colleagues examined glutamate activity by measuring quinolinic acid – which flips a chemical switch that makes glutamate send more signals to nearby cells – in the spinal fluid of 100 participants in Sweden. About two-thirds had been admitted to a hospital after attempting suicide; the rest were healthy.

They found that suicide attempters had more than twice as much quinolinic acid in their spinal fluid as the healthy people, which indicated increased glutamate signaling between nerve cells. Those who reported the strongest desire to kill themselves also had the highest levels of the acid.

The results also showed decreased quinolinic acid levels among a subset of patients who came back six months later, when their suicidal behavior had ended.

The findings explain why earlier research has pointed to inflammation in the brain as a risk factor for suicide. The body produces quinolinic acid as part of the immune response that creates inflammation.

Brundin said anti-glutamate drugs are still in development, but could soon offer a promising tool for preventing suicide. In fact, recent clinical studies have shown the anesthetic ketamine – which inhibits glutamate signaling – to be extremely effective in fighting depression, though its side effects prevent it from being used widely today.

In the meantime, Brundin said physicians should be aware of inflammation as a likely trigger for suicidal behavior. She is partnering with doctors in Grand Rapids, Mich., to design clinical trials using anti-inflammatory drugs.

“In the future, it’s likely that blood samples from suicidal and depressive patients will be screened for inflammation,” Brundin said. “It is important that primary health care physicians and psychiatrists work closely together on this.”

Filed under brain glutamate suicidal behavior nerve cells suicide attempters neuroscience science


Honey bees trained to stick out their tongues for science

Biologists at Bielefeld University have trained honey bees to stick out their tongues when their antennae touch an object.

The tactile conditioning study was conducted by a team from the lab of Volker Dürr, professor of biological cybernetics at Bielefeld, and will allow researchers to investigate how honey bees use touch in pattern recognition and sensory memory.

"We work with honey bees because they are an important model system for behavioural biology and neurobiology," explained Dürr. "They can be trained. If you can train an insect to respond to a certain stimulus, then you can ask the bees questions in the form of ‘Is A like B? If so, stick your tongue out’."

The process by which a bee sticks out its tongue when faced with a stimulus is known as the proboscis extension response. Using sugar water, it can be conditioned as a response to a particular textured surface: each time a harnessed honey bee’s antennae touched the surface, the bee was given sugar water. Eventually, the bee extended its tongue whenever it touched the right surface.

Currently the biologists are hoping to use the response to find out more about how bees use antennae movements to gather information about their surroundings.

"It is clear that if a bee touches something with an antenna, a finely textured structure, the bee has to move it to get the information it wants," adds Dürr. "We don’t fully understand the relevance of this movement."

Filed under bees tactile conditioning touch perception proboscis extension science


A Key Gene for Brain Development

About one in ten thousand babies is born with an abnormally small head. The cause of this disorder – known as microcephaly – is a defect in the development of the embryonic brain. Children with microcephaly are severely cognitively impaired and their life expectancy is low. Certain cases of autism and schizophrenia are also associated with the dysregulation of brain size.

The causes underlying impaired brain development can be environmental stress (such as alcohol abuse or radiation) or viral infections (such as rubella) during pregnancy. In many cases, however, a mutant gene causes the problem.

David Keays, a group leader at the IMP, has now found a new gene that is responsible for microcephaly. Together with his PhD student Martin Breuss, he was able to identify TUBB5 as the culprit. The gene codes for a tubulin, one of the building blocks of the cell’s internal skeleton. Whenever a cell moves or divides, it relies on guidance from this internal structure, which acts like a scaffold.

The IMP researchers, together with collaborators at Monash University (Victoria, Australia), were able to interfere with the function of TUBB5 in the brains of unborn mice. This led to massive disturbances in the stem cell population and impaired the migration of nerve cells. Both the generation of large numbers of neurons from the stem cell reservoir and their correct positioning in the cortex are essential for the development of the mammalian brain.

To determine whether the findings are also relevant in humans, David Keays collaborated with clinicians from the Paris-Sorbonne University. The French team, led by Jamel Chelly, examined 120 patients with pathological brain structures and severe disabilities. Three of the children were found to carry a mutated TUBB5 gene.

This information will prove vital to doctors treating children with brain disease. It will allow the development of new genetic tests that will form the basis of genetic counseling, helping parents plan for the future. By understanding how different genes cause brain disorders, it is hoped that one day scientists will be able to create new drugs and therapies to treat them.

The new findings by the IMP researchers are published in the current issue of the journal Cell Reports. For David Keays, understanding the function of TUBB5 is the key to understanding brain development. “Our project shows how research in the lab can help improve lives in the clinic,” he adds.

The paper “Mutations in the β-tubulin Gene TUBB5 Cause Microcephaly with Structural Brain Abnormalities” was published on December 13, 2012, in the online journal Cell Reports.

Filed under brain brain size microcephaly brain development mutations genetics neuroscience science

309 notes



Scientists Offer New Way To Look At The Origins Of Life

People have been trying to understand the origins of life on Earth through scientific means since the concept of science began, and a pair of Arizona State University researchers suggests in a new report that we’ve been approaching the question incorrectly almost from the beginning.

In a paper titled “The algorithmic origins of life,” Paul Davies and Sara Walker proposed that understanding the correct chemical makeup for the origin of life tells only part of the story, and that scientists should also focus on how chemical information is organized into life-creating processes.

They equate the shift in perspective to understanding how a computer works. To function, a computer needs not only hardware, akin to life’s chemical makeup, but also software – or chemical information.

“When we describe biological processes we typically use informational narratives – cells send out signals, developmental programs are run, coded instructions are read, genomic data are transmitted between generations and so forth,” Walker said. “So identifying life’s origin in the way information is processed and managed can open up new avenues for research.”

“We propose that the transition from non-life to life is unique and definable,” added Davies. “We suggest that life may be characterized by its distinctive and active use of information, thus providing a roadmap to identify rigorous criteria for the emergence of life. This is in sharp contrast to a century of thought in which the transition to life has been cast as a problem of chemistry, with the goal of identifying a plausible reaction pathway from chemical mixtures to a living entity.”

Walker and Davies argue that their approach skirts many issues that have confounded previous efforts to define the origin of life.

Filed under life emergence of life causal architecture evolution science

41 notes



Follow the Eyes: Head-Mounted Cameras Could Help Robots Understand Social Interactions

What is everyone looking at? It’s a common question in social settings because the answer identifies something of interest, or helps delineate social groupings. Those insights someday will be essential for robots designed to interact with humans, so researchers at Carnegie Mellon University’s Robotics Institute have developed a method for detecting where people’s gazes intersect.

The researchers tested the method using groups of people with head-mounted video cameras. By noting where their gazes converged in three-dimensional space, the researchers could determine if they were listening to a single speaker, interacting as a group, or even following the bouncing ball in a ping-pong game.
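The geometry behind this can be sketched in a few lines. As a hypothetical illustration (not the CMU team's published algorithm), treat each gaze as a 3-D ray from an eye position along a viewing direction; the point where the gazes most nearly converge is then the least-squares point closest to all of the rays:

```python
import numpy as np

def gaze_convergence_point(origins, directions):
    """Least-squares point closest to a set of 3-D gaze rays.

    origins: (n, 3) eye positions; directions: (n, 3) gaze vectors.
    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)          # unit gaze direction
        M = np.eye(3) - np.outer(d, d)     # projects onto the plane normal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Three hypothetical viewers, all looking at the point (0, 0, 2)
origins = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
dirs = [np.array([0.0, 0.0, 2.0]) - np.asarray(o) for o in origins]
print(gaze_convergence_point(origins, dirs))  # recovers (0, 0, 2) up to rounding
```

When gazes scatter rather than converge, the residual distance from each ray to this point grows, which gives one simple signal for deciding whether a group shares a common focus of attention.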

The system thus uses crowdsourcing to provide subjective information about social groups that would otherwise be difficult or impossible for a robot to ascertain.

The researchers’ algorithm for determining “social saliency” could ultimately be used to evaluate a variety of social cues, such as the expressions on people’s faces or body movements, or data from other types of visual or audio sensors.

"This really is just a first step toward analyzing the social signals of people," said Hyun Soo Park, a Ph.D. student in mechanical engineering, who worked on the project with Yaser Sheikh, assistant research professor of robotics, and Eakta Jain of Texas Instruments, who was awarded a Ph.D. in robotics last spring. "In the future, robots will need to interact organically with people and to do so they must understand their social environment, not just their physical environment."

Filed under robots robotics eye gaze social interaction neuroscience science

206 notes





Social Synchronicity

Humans have a tendency to spontaneously synchronize their movements. For example, the footsteps of two friends walking together may synchronize, although neither individual is consciously aware that it is happening. Similarly, the clapping hands of an audience will naturally fall into synch. Although this type of synchronous body movement has been observed widely, its neurological mechanism and its role in social interactions remain obscure. In a new study, led by cognitive neuroscientists at the California Institute of Technology (Caltech), researchers found that body-movement synchronization between two participants increases following a short session of cooperative training, suggesting that our ability to synchronize body movements is a measurable indicator of social interaction.
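As a hypothetical sketch (the study's actual synchrony measures are more elaborate), the zero-lag Pearson correlation between two movement time series is one simple way to quantify body-movement synchrony; it rises once the two signals share a common rhythm:

```python
import numpy as np

def movement_synchrony(a, b):
    """Zero-lag Pearson correlation between two movement time series."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
shared = np.sin(2 * np.pi * 1.2 * t)   # a shared rhythm, e.g. footsteps or clapping

# Independent movement: synchrony near 0. Shared rhythm plus noise: synchrony near 1.
before = movement_synchrony(rng.normal(size=500), rng.normal(size=500))
after = movement_synchrony(shared + 0.3 * rng.normal(size=500),
                           shared + 0.3 * rng.normal(size=500))
print(before, after)  # synchrony is markedly higher once a rhythm is shared
```

Comparing such a score before and after a cooperative session, as the study did with its own measures, turns "the footsteps of two friends fall into step" into a number that can be tracked experimentally.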

"Our findings may provide a powerful tool for identifying the neural underpinnings of both normal social interactions and impaired social interactions, such as the deficits that are often associated with autism," says Shinsuke Shimojo, Gertrude Baltimore Professor of Experimental Psychology at Caltech and senior author of the study.

Shimojo, along with former postdoctoral scholar Kyongsik Yun, and Katsumi Watanabe, an associate professor at the University of Tokyo, presented their work in a paper published December 11 in Scientific Reports, an online and open-access journal from the Nature Publishing Group.

"The most striking outcome of our study is that not only the body-body synchrony but also the brain-brain synchrony between the two participants increased after a short period of social interaction," says Yun. "This may open new vistas to study the brain-brain interface. It appears that when a cooperative relationship exists, two brains form a loose dynamic system."

The team says this information may be useful for romantic or business partner selection.

"Because we can quantify implicit social bonding between two people using our experimental paradigm, we may be able to suggest a more socially compatible partnership in order to maximize matchmaking success rates, by pre-examining body synchrony and its increase during a short cooperative session," explains Yun.

Filed under synchronization body movement social interaction neurodevelopmental disorders neuroscience science
