Neuroscience

Articles and news from the latest research reports.

128 notes

Birdsong study pecks at theory that music is uniquely human

A bird listening to birdsong may experience some of the same emotions as a human listening to music, suggests a new study on white-throated sparrows, published in Frontiers in Evolutionary Neuroscience.

“We found that the same neural reward system is activated in female birds in the breeding state that are listening to male birdsong, and in people listening to music that they like,” says Sarah Earp, who led the research as an undergraduate at Emory University.

For male birds listening to another male’s song, it was a different story: They had an amygdala response that looks similar to that of people when they hear discordant, unpleasant music.

The study, co-authored by Emory neuroscientist Donna Maney, is the first to compare neural responses of listeners in the long-standing debate over whether birdsong is music.

“Scientists since the time of Darwin have wondered whether birdsong and music may serve similar purposes, or have the same evolutionary precursors,” Earp notes. “But most attempts to compare the two have focused on the qualities of the sounds themselves, such as melody and rhythm.”

Earp reviewed studies that mapped human neural responses to music through brain imaging.

She also analyzed data from the Maney lab on white-throated sparrows. The lab maps brain responses in the birds by measuring Egr-1, part of a major biochemical pathway activated in cells that are responding to a stimulus.

The study used Egr-1 as a marker to map and quantify neural responses in the mesolimbic reward system in male and female white-throated sparrows listening to a male bird’s song. Some of the listening birds had been treated with hormones, to push them into the breeding state, while the control group had low levels of estradiol and testosterone.
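
The Egr-1 approach is essentially a counting exercise: label the cells expressing the marker, tally them per brain region, and compare group means. A minimal sketch of that comparison, using invented cell counts rather than the study’s data:

```python
# Toy sketch of Egr-1-based response mapping. The cell counts are
# hypothetical, not data from the study.

def mean(values):
    return sum(values) / len(values)

# Invented Egr-1-positive cell counts in one reward-pathway region,
# one value per bird.
breeding_females = [42, 51, 47, 39]
nonbreeding_females = [18, 22, 15, 20]

def relative_response(treated, control):
    """Fold change of the treated group's mean count over the control's."""
    return mean(treated) / mean(control)

fold = relative_response(breeding_females, nonbreeding_females)
print(f"fold change: {fold:.2f}")
```

Repeating such a comparison region by region is what lets the lab build a map of where the response to song is heightened.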

During the non-breeding season, both sexes of sparrows use song to establish and maintain dominance in relationships. During the breeding season, however, a male singing to a female is almost certainly courting her, while a male singing to another male is challenging an interloper.

For the females in the breeding state, every region of the mesolimbic reward pathway that has been reported to respond to music in humans, and that has a clear avian counterpart, responded to the male birdsong. Females in the non-breeding state, however, did not show a heightened response.

And the testosterone-treated males listening to another male sing showed an amygdala response, which may correlate to the amygdala response typical of humans listening to the kind of music used in the scary scenes of horror movies.

“The neural response to birdsong appears to depend on social context, which can be the case with humans as well,” Earp says. “Both birdsong and music elicit responses not only in brain regions associated directly with reward, but also in interconnected regions that are thought to regulate emotion. That suggests that they both may activate evolutionarily ancient mechanisms that are necessary for reproduction and survival.”

A major limitation of the study, Earp adds, is that many of the regions that respond to music in humans are cortical, and they do not have clear counterparts in birds. “Perhaps techniques will someday be developed to image neural responses in baleen whales, whose songs are both musical and learned, and whose brain anatomy is more easily compared with humans,” she says.

Filed under music birdsong neural response reward system sparrows neuroscience science

183 notes

MRI Can Screen Patients for Alzheimer’s Disease or Frontotemporal Lobar Degeneration, Using Penn-designed Model

When trying to determine the root cause of a person’s dementia, an MRI can effectively and non-invasively screen patients for Alzheimer’s disease or Frontotemporal Lobar Degeneration (FTLD), according to a new study by researchers from the Perelman School of Medicine at the University of Pennsylvania. The MRI-based algorithm differentiated the two diseases correctly 75 percent of the time, according to the study, published in the December 26, 2012, issue of Neurology, the medical journal of the American Academy of Neurology. The non-invasive approach can also track disease progression over time more easily and cost-effectively than other tests, particularly in clinical trials testing new therapies.

Researchers used the MRIs to predict the ratio of two biomarkers for the diseases, the proteins total tau and beta-amyloid, in the cerebrospinal fluid. Cerebrospinal fluid analysis remains the most accurate method for predicting the disease cause, but it requires a more invasive lumbar puncture. “Using this novel method, we obtain a single biologically meaningful value from analyzing MRI data in this manner and then we can derive a probabilistic estimate of the likelihood of Alzheimer’s or FTLD,” said the study’s lead author, Corey McMillan, PhD, of the Perelman School of Medicine and Frontotemporal Degeneration Center at the University of Pennsylvania.
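
McMillan’s description, a single biologically meaningful value feeding a probabilistic estimate, maps naturally onto a logistic model. The sketch below is illustrative only; the coefficients are invented, not the study’s fitted model:

```python
import math

def probability_of_ftld(mri_score, intercept=0.0, slope=2.0):
    """Map a single composite MRI value to a probability with a
    logistic function. The intercept and slope here are hypothetical;
    in practice they would be fitted to patients whose diagnoses were
    confirmed by cerebrospinal fluid analysis."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * mri_score)))

# A score of 0 sits on the decision boundary; larger scores push the
# estimate toward one diagnosis, smaller scores toward the other.
for score in (-1.5, 0.0, 1.5):
    print(score, round(probability_of_ftld(score), 3))
```
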

The MRI prediction method identified the correct diagnosis 75 percent of the time, both in patients with confirmed disease diagnoses and in those whose biomarker levels were confirmed by lumbar puncture, indicating that the accuracy of the MRI and lumbar puncture methods is comparable. For the remaining 25 percent of borderline cases, the authors note, “a lumbar puncture testing spinal fluid may provide a more accurate estimate of the pathological diagnosis.”

Accurate tests to measure disease progression are very important in neurodegenerative diseases, especially as clinical trials test new therapies to slow or stop the progression of the disease. Biomarkers for neurodegenerative diseases have been steadily improving, with new developments including spinal fluid tests detecting tau and amyloid-beta protein levels and other neuroimaging techniques developed at Penn Medicine as part of the Alzheimer’s Disease Neuroimaging Initiative. While a spinal fluid test can accurately pinpoint whether disease-specific proteins are present, it requires a more invasive lumbar puncture, making it more difficult to repeat over time. And for studies using other imaging techniques, such as tests measuring whole brain volume, the reduced sensitivity of the measurement requires more patients to be enrolled in clinical trials for statistical power to be achieved.
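
The link between sensitivity and enrollment follows from the standard two-sample size calculation, in which the required number of patients grows with the square of the noise-to-effect ratio. A quick illustration using the textbook approximation and invented numbers:

```python
import math

def n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm for a two-sample comparison:
    sigma is the outcome measure's standard deviation, delta the true
    group difference, at roughly 5% two-sided alpha and 80% power."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Doubling the measurement noise roughly quadruples the required sample.
print(n_per_arm(sigma=1.0, delta=1.0))
print(n_per_arm(sigma=2.0, delta=1.0))
```
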

“Since this method yields a single biological value, it is possible to use MRI to screen patients for inclusion in clinical trials in a cost-effective manner and to provide an outcome measure that optimizes power in drug treatment trials,” the authors concluded.

Filed under neurodegenerative diseases alzheimer's disease disease progression diagnosis neuroscience science

227 notes

Does Einstein’s brain hold the secret to his genius?

Albert Einstein’s brain fascinates scientists and the general public alike, because it may provide clues to the neurological basis of his extraordinary intellectual abilities. The latest study of the great physicist’s grey matter was published last month. The researchers analyzed previously unpublished photographs of the great physicist’s cerebral cortex, and claim to have identified unusual, and hitherto unknown, features. But some are sceptical about how the findings have been interpreted.

Shortly after Einstein’s death on 18th April, 1955, pathologist Thomas Harvey removed his brain and dissected it into 240 blocks, taking dozens of photographs as he did so. He then sent some of the tissue samples and photographs to a handful of researchers, and eventually a small number of studies emerged. The early ones showed that Einstein’s brain was, in fact, slightly smaller than average, weighing about 200 grams less, but subsequent investigations revealed several unusual features which, it was claimed, were somehow related to his visuo-spatial skills.

For the new study, anthropologist Dean Falk of Florida State University and her colleagues analyzed 14 of the photographs from the museum collection, which together reveal the entire surface of Einstein’s cerebral cortex for the first time, enabling the researchers to examine the pattern of grooves and ridges in detail and compare them to those seen in other brains.

"The new photographs reveal parts of Einstein’s brain that have not previously been seen in published images," says Falk. "We have identified most of the external details of his cerebral cortex, [and] the complexity and pattern of convolutions on certain parts of Einstein’s cerebral cortex is striking and unusual in comparison to brains from normal individuals."

"This is especially noticeable in the prefrontal cortex, which is important for advanced cognition, the parietal lobes, which are important for spatial and arithmetic reasoning, and the visual cortex. The primary sensory and motor cortices are also extraordinarily expanded in certain parts."

Some argue that any conclusions drawn from such findings could be meaningless. “Studying Einstein’s brain is like studying the writings of Nostradamus,” says Chris Chambers, a cognitive neuroscientist at Cardiff University. “You can read them backwards, forward, or even sideways, and draw whatever conclusions you like.”

"We inevitably end up committing logical fallacies of reverse inference and faulty generalisation: that certain parts of Einstein’s brain may look a bit different to other brains, and that this explains his abilities. But the differences might have no functional importance whatsoever, and this makes any kind of conclusion extremely weak."

Chambers adds that there is enormous variability in human brain structure, and that this poses another problem when trying to interpret such findings. “We’re dealing with just one brain and this makes it impossible to draw any firm conclusions about the population at large. Human brains come in all shapes and sizes and there is no known relationship to cognition. Very few people have the ‘normal’ brain we see in textbooks, and neither did Einstein.”

Clinical neurologist Frederick Lepore, a co-author of the new study, made similar arguments in 2001, and in an interview published online earlier this month, he is quoted as saying that the new study confirms Einstein’s brain “was very different,” but that “we face an insurmountable explanatory gap if we attempt to use our neuroanatomical findings to account for the mind that envisioned the curvature of the universe.”

He goes on to say that the next logical step would be to try to generate Einstein’s connectome, a comprehensive map of the connections in his brain, and that a comparison of the brain to those of other geniuses is another possible avenue of research.

Falk believes that the photographs could help researchers to map Einstein’s connectome. “[We have published]… the ‘roadmap’ that provides a key between these areas and recently emerged histological slides of Einstein’s brain, which may allow scientists to study its internal connectivity. These photographs should become more meaningful in the future, as more is learned about the functions of various regions.”

Filed under Albert Einstein Einstein's brain photographs cerebral cortex connectome neuroscience science

72 notes

PredictAD software promises early diagnosis of Alzheimer’s

Scientists at VTT Technical Research Centre in Finland have developed new software called PredictAD that could significantly boost the early diagnosis of Alzheimer’s disease.

The software compares a patient’s measurements with those of other patients stored in large databases, then visualizes the patient’s status with an index and graphics.
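
One simple way to collapse a panel of measurements into a single index, in the spirit of the comparison just described though not PredictAD’s actual algorithm, is to score how closely the patient resembles the diseased versus the healthy cases in a reference database. All values below are invented:

```python
def zscores(patient, reference):
    """Standardize each of the patient's measurements against a cohort."""
    out = []
    for key, value in patient.items():
        vals = [record[key] for record in reference]
        m = sum(vals) / len(vals)
        sd = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
        out.append((value - m) / sd)
    return out

def disease_index(patient, healthy_db, disease_db):
    """Positive when the patient sits closer to the disease cohort than
    to the healthy cohort (sum of absolute z-distances)."""
    d_healthy = sum(abs(z) for z in zscores(patient, healthy_db))
    d_disease = sum(abs(z) for z in zscores(patient, disease_db))
    return d_healthy - d_disease

# Hypothetical records: hippocampal volume (ml) and a memory test score.
healthy = [{"hippo": 3.5, "memory": 28},
           {"hippo": 3.3, "memory": 27},
           {"hippo": 3.6, "memory": 29}]
alz = [{"hippo": 2.4, "memory": 18},
       {"hippo": 2.6, "memory": 20},
       {"hippo": 2.2, "memory": 17}]

patient = {"hippo": 2.5, "memory": 19}
print(disease_index(patient, healthy, alz))  # positive: closer to the AD profile
```
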

The support system and imaging methods were developed by VTT and Imperial College London.

The researchers used material compiled in the U.S. by the Alzheimer’s Disease Neuroimaging Initiative based on the records of 288 patients with memory problems. Nearly half of them, or 140 individuals, were diagnosed with Alzheimer’s disease on average 21 months after the initial measurements, which is about the same as the current European average of 20 months.

The researchers concluded that half of the patients could have been diagnosed with the disease around a year earlier, or nine months after the initial measurements. They say the accuracy of the predictions was comparable to clinical diagnosis.

There are several advantages to an early diagnosis of Alzheimer’s. It can delay institutionalization and slow down the progress of the disease. It is also advantageous from the clinical trials perspective: if patients identified early can be included in trials, treatment is likely to be more effective.

Working towards the same goal, researchers at Lancaster University in the U.K. recently developed an eye test method to detect early signs of Alzheimer’s.

The VTT researchers will spend the next five years carrying out the test at memory clinics in Europe. They also hope to expand its scope to include other illnesses that cause dementia. According to 2010 figures, an estimated 35.6 million people live with dementia worldwide, and that number is expected to rise to 65.7 million by 2030.

The findings of the research were published in the Journal of Alzheimer’s Disease in November 2012. VTT cooperated with the University of Eastern Finland and Copenhagen University Hospital Rigshospitalet on this project.

Filed under alzheimer's disease PredictAD dementia software diagnosis memory science

186 notes

How Excess Holiday Eating Disturbs Your ‘Food Clock’
If the sinful excess of holiday eating sends your system into butter-slathered, brandy-soaked overload, you are not alone: People who are jet-lagged, people who work graveyard shifts and plain-old late-night snackers know just how you feel.

All these activities upset the body’s “food clock,” a collection of interacting genes and molecules known technically as the food-entrainable oscillator, which keeps the human body on a metabolic even keel. A new study by researchers at UCSF is helping to reveal how this clock works on a molecular level.

In a study published this month in the journal Proceedings of the National Academy of Sciences, the UCSF team showed that a protein called PKCγ is critical in resetting the food clock if our eating habits change.

The study showed that normal laboratory mice given food only during their regular sleeping hours will adjust their food clock over time and begin to wake up from their slumber, and run around in anticipation of their new mealtime. But mice lacking the PKCγ gene are not able to respond to changes in their meal time – instead sleeping right through it.

The work has implications for understanding the molecular basis of diabetes, obesity and other metabolic syndromes because a desynchronized food clock may serve as part of the pathology underlying these disorders, said Louis Ptacek, MD, the John C. Coleman Distinguished Professor of Neurology at UCSF and a Howard Hughes Medical Institute Investigator.

It may also help explain why night owls are more likely to be obese than morning larks, Ptacek said.

“Understanding the molecular mechanism of how eating at the ‘wrong’ time of the day desynchronizes the clocks in our body can facilitate the development of better treatments for disorders associated with night-eating syndrome, shift work and jet lag,” he added.

Resetting the Food Clock

Look behind the face of a mechanical clock and you will see a dizzying array of cogs, flywheels, reciprocating counterbalances and other moving parts. Biological clocks are equally complex, composed of multiple interacting genes that turn on or off in an orchestrated way to keep time during the day.

In most organisms, biological clockworks are governed by a master clock, referred to as the “circadian oscillator,” which keeps track of time and coordinates our biological processes with the rhythm of a 24-hour cycle of day and night.

Life forms as diverse as humans, mice and mustard greens all possess such master clocks. And in the last decade or so, scientists have uncovered many of their inner workings, identifying many of the genes whose cycles are tied to the clock and discovering how in mammals it is controlled by a tiny spot in the brain known as the “suprachiasmatic nucleus.”

Scientists also know that in addition to the master clock, our bodies have other clocks operating in parallel throughout the day. One of these is the food clock, which is not tied to one specific spot in the brain but rather multiple sites throughout the body.

The food clock is there to help our bodies make the most of our nutritional intake. It controls genes that help in everything from the absorption of nutrients in our digestive tract to their dispersal through the bloodstream, and it is designed to anticipate our eating patterns. Even before we eat a meal, our bodies begin to turn on some of these genes and turn off others, preparing for the burst of sustenance – which is why we feel the pangs of hunger just as the lunch hour arrives.

Scientists have known that the food clock can be reset over time if an organism changes its eating patterns, eating to excess or at odd times, since the timing of the food clock is pegged to feeding during the prime foraging and hunting hours of the day. But until now, very little was known about how the food clock works on a genetic level.

What Ptacek and his colleagues discovered is the molecular basis for this phenomenon: the PKCγ protein binds to another molecule called BMAL and stabilizes it, which shifts the clock in time.
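
That resetting behavior can be caricatured as phase entrainment: each day the food clock’s phase is nudged toward the current mealtime, at a rate that stands in for PKCγ-dependent stabilization. The model below is a toy with invented parameters, not the paper’s:

```python
def entrain(phase, mealtime, rate, days, period=24.0):
    """Nudge the food clock's phase (hours) toward a new mealtime once
    per day. `rate` stands in for PKCγ-dependent resetting strength;
    a PKCγ knockout corresponds to rate = 0."""
    for _ in range(days):
        # Wrapped difference in [-12, 12): shift the short way around.
        error = (mealtime - phase + period / 2) % period - period / 2
        phase = (phase + rate * error) % period
    return phase

old, new = 20.0, 8.0  # a nocturnal feeder shifted to a daytime meal
wildtype = entrain(old, new, rate=0.3, days=14)
knockout = entrain(old, new, rate=0.0, days=14)
print(round(wildtype, 1), knockout)  # the wild type converges; the knockout stays put
```
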

How Excess Holiday Eating Disturbs Your ‘Food Clock’

If the sinful excess of holiday eating sends your system into butter-slathered, brandy-soaked overload, you are not alone: People who are jet-lagged, people who work graveyard shifts and plain-old late-night snackers know just how you feel.

All these activities upset the body’s “food clock,” a collection of interacting genes and molecules known technically as the food-entrainable oscillator, which keeps the human body on a metabolic even keel. A new study by researchers at UCSF is helping to reveal how this clock works on a molecular level.

Published this month in the journal Proceedings of the National Academy of Sciences, the UCSF team has shown that a protein called PKCγ is critical in resetting the food clock if our eating habits change.

The study showed that normal laboratory mice given food only during their regular sleeping hours will adjust their food clock over time and begin to wake up from their slumber, and run around in anticipation of their new mealtime. But mice lacking the PKCγ gene are not able to respond to changes in their meal time – instead sleeping right through it.

The work has implications for understanding the molecular basis of diabetes, obesity and other metabolic syndromes because a desynchronized food clock may serve as part of the pathology underlying these disorders, said Louis Ptacek, MD, the John C. Coleman Distinguished Professor of Neurology at UCSF and a Howard Hughes Medical Institute Investigator.

It may also help explain why night owls are more likely to be obese than morning larks, Ptacek said.

“Understanding the molecular mechanism of how eating at the ‘wrong’ time of the day desynchronizes the clocks in our body can facilitate the development of better treatments for disorders associated with night-eating syndrome, shift work and jet lag,” he added.

Resetting the Food Clock

Look behind the face of a mechanical clock and you will see a dizzying array of cogs, flywheels, reciprocating counterbalances and other moving parts. Biological clocks are equally complex, composed of multiple interacting genes that turn on or off in an orchestrated way to keep time during the day.

In most organisms, biological clockworks are governed by a master clock, referred to as the “circadian oscillator,” which keeps track of time and coordinates our biological processes with the rhythm of a 24-hour cycle of day and night.

Life forms as diverse as humans, mice and mustard greens all possess such master clocks. And in the last decade or so, scientists have uncovered many of their inner workings, identifying many of the genes whose cycles are tied to the clock and discovering how, in mammals, it is controlled by a tiny spot in the brain known as the “suprachiasmatic nucleus.”

Scientists also know that in addition to the master clock, our bodies have other clocks operating in parallel throughout the day. One of these is the food clock, which is not tied to one specific spot in the brain but rather multiple sites throughout the body.

The food clock is there to help our bodies make the most of our nutritional intake. It controls genes that help in everything from the absorption of nutrients in our digestive tract to their dispersal through the bloodstream, and it is designed to anticipate our eating patterns. Even before we eat a meal, our bodies begin to turn on some of these genes and turn off others, preparing for the burst of sustenance – which is why we feel the pangs of hunger just as the lunch hour arrives.

Scientists have known that the food clock can be reset over time if an organism changes its eating patterns, eating to excess or at odd times, since the timing of the food clock is pegged to feeding during the prime foraging and hunting hours of the day. But until now, very little was known about how the food clock works on a genetic level.

What Ptacek and his colleagues discovered is the molecular basis for this phenomenon: the PKCγ protein binds to another molecule called BMAL and stabilizes it, which shifts the clock in time.

Filed under obesity food clock circadian oscillator suprachiasmatic nucleus eating patterns genetics science

692 notes


Why Do We Blink So Frequently?

We all blink. A lot. The average person blinks some 15-20 times per minute—so frequently that our eyes are closed for roughly 10% of our waking hours overall.
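The 10% figure is easy to sanity-check. Assuming a typical blink lasts roughly 0.35 seconds (an assumed value; the article gives only the blink rate):

```python
# Back-of-envelope check of the "eyes closed ~10% of waking hours" claim.
# The blink rate comes from the article; the per-blink duration of ~0.35 s
# is an assumption, a commonly cited typical value.

blinks_per_minute = 17.5   # midpoint of the 15-20 per minute range
blink_duration_s = 0.35    # assumed length of one blink

closed_fraction = blinks_per_minute * blink_duration_s / 60
print(f"{closed_fraction:.1%}")  # -> 10.2%
```

About six seconds of every waking minute, which lands right on the article's figure.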

Although some of this blinking has a clear purpose—mostly to lubricate the eyeballs, and occasionally protect them from dust or other debris—scientists say that we blink far more often than necessary for these functions alone. Thus, blinking is a physiological riddle. Why do we do it so darn often? In a paper published in the Proceedings of the National Academy of Sciences, a group of scientists from Japan offers up a surprising new answer—that briefly closing our eyes might actually help us to gather our thoughts and focus attention on the world around us.

The researchers came to the hypothesis after noting an interesting fact revealed by previous research on blinking: that the exact moments when we blink aren’t actually random. Although seemingly spontaneous, studies have revealed that people tend to blink at predictable moments. For someone reading, blinking often occurs after each sentence is finished, while for a person listening to a speech, it frequently comes when the speaker pauses between statements. A group of people all watching the same video tend to blink around the same time, too, when action briefly lags.

As a result, the researchers guessed that we might subconsciously use blinks as a sort of mental resting point, to briefly shut off visual stimuli and allow us to focus our attention. To test the idea, they put 10 different volunteers in an fMRI machine and had them watch the TV show “Mr. Bean” (they had used the same show in their previous work on blinking, showing that it came at implicit break points in the video). They then monitored which areas of the brain showed increased or decreased activity when the study participants blinked.

Their analysis showed that when the Bean-watchers blinked, mental activity briefly spiked in areas related to the default network, areas of the brain that operate when the mind is in a state of wakeful rest, rather than focusing on the outside world. Momentary activation of this alternate network, they theorize, could serve as a mental break, allowing for increased attention capacity when the eyes are opened again.

To test whether this mental break was simply a result of the participants’ visual inputs being blocked, rather than a subconscious effort to clear their minds, the researchers also manually inserted “blackouts” into the video at random intervals that lasted roughly as long as a blink. In the fMRI data, though, the brain areas related to the default network weren’t similarly activated. Blinking is something more than temporarily not seeing anything.

It’s far from conclusive, but the research demonstrates that we do enter some sort of altered mental state when we blink—we’re not just doing it to lubricate our eyes. A blink could provide a momentary island of introspective calm in the ocean of visual stimuli that defines our lives.

Filed under brain vision blinking default network mental activity mental state science

53 notes


Simple eye scan can reveal extent of Multiple Sclerosis

A simple eye test may offer a fast and easy way to monitor patients with multiple sclerosis (MS), medical experts say in the journal Neurology. Optical Coherence Tomography (OCT) is a scan that measures the thickness of the lining at the back of the eye - the retina. It takes a few minutes per eye and can be performed in a doctor’s surgery.

In a trial involving 164 people with MS, those with thinning of their retina had earlier and more active MS. The team of researchers from the Johns Hopkins University School of Medicine say larger trials with a long follow up are needed to judge how useful the test might be in everyday practice. The latest study tracked the patients’ disease progression over a two-year period.

Unpredictable disease

Multiple sclerosis is an illness that affects the nerves in the brain and spinal cord causing problems with muscle movement, balance and vision. In MS, the protective sheath or layer around nerves, called myelin, comes under attack which, in turn, leaves the nerves open to damage.

There are different types of MS - most people with the condition have the relapsing-remitting type where the symptoms come and go over days, weeks or months. Usually after a decade or so, half of patients with this type of MS will develop secondary progressive disease where the symptoms get gradually worse and there are no or very few periods of remission.

Another type of MS is primary progressive disease where symptoms get worse from the outset. There is no cure but treatments can help slow disease progression. It can be difficult for doctors to monitor MS because it has a varied course and can be unpredictable.

Brain scans can reveal inflammation and scarring, but it is not clear how early these changes might occur in the disease and whether they accurately reflect ongoing damage.

Scientists have been looking for additional ways to track MS, and believe OCT may be a contender. OCT measures the thickness of nerve fibres housed in the retina at the back of the eye. Unlike nerve cells in the rest of the brain which are covered with protective myelin, the nerve cells in the retina are bare with no myelin coat. Experts suspect that this means the nerves here will show the earliest signs of MS damage.

The study at Johns Hopkins found that people with MS relapses had much faster thinning of their retina than people with MS who had no relapses. So too did those whose level of disability worsened. Similarly, people with MS who had inflammatory lesions that were visible on brain scans also had faster retinal thinning than those without visible brain lesions. Study author Dr Peter Calabresi said OCT may show how fast MS is progressing.

“As more therapies are developed to slow the progression of MS, testing retinal thinning in the eyes may be helpful in evaluating how effective those therapies are,” he added.

In an accompanying editorial in the same medical journal that the research is published in, MS experts Drs Robert Bermel and Matilde Inglese say OCT “holds promise” as an MS test.

(Image courtesy: Boston University Eye Associates, Inc.)

Filed under MS OCT nerve cells retina retinal thinning eye scan neuroscience science

256 notes

Fetal healing: Curing congenital diseases in the womb

Our time in the womb is one of the most vulnerable periods of our existence. Pregnant women are warned to steer clear of certain foods and alcohol, and doctors refrain from medical interventions unless absolutely necessary, to avoid the faintest risk of causing birth defects.

Yet it is this very stage that is now being considered for some of the most daring and radical medical procedures yet devised: stem cell and gene therapies. “It’s really the ultimate preventative therapy,” says Alan Flake, a surgeon at the Children’s Hospital of Philadelphia in Pennsylvania. “The idea is to avoid any manifestations of disease.”

The idea may sound alarming, but there is a clear rationale behind it. Use these therapies on an adult, and the body part that you are trying to fix is fully formed. Use them before birth, on the other hand, and you may solve the problem before it even arises. “This will set a new paradigm for treatment of many genetic disorders in future,” says Flake.

Flake has been performing surgery on unborn babies for nearly 30 years, using techniques refined on pregnant animals to ensure they met the challenges of working on tiny bodies and avoided triggering miscarriage. The first operation on a human fetus took place in 1981 to fix a blocked urethra, the tube that carries urine out of the bladder. Since then the field has grown to encompass many types of surgery, such as correction of spinal cord defects to prevent spina bifida.

While fetal surgery may now be mainstream, performing stem cell therapy or gene therapy in the womb would arguably be an order of magnitude more challenging. Yet these techniques seem to represent the future of medicine, offering the chance to vanquish otherwise incurable illnesses by re-engineering the body at the cellular level. Several groups around the world are currently testing them out on animals in the womb.

Of the two, stem cell therapy has the longer history: we have been carrying it out on adults since the 1950s, in the form of bone marrow transplants. Bone marrow contains stem cells that give rise to all the different blood cells, from those that make up the immune system to the oxygen-carrying red blood cells. Bone marrow transplants are mainly carried out to treat cancers of immune cells, such as leukaemia, or the various genetic disorders of red blood cells that give rise to anaemia.

One of Flake’s interests is sickle-cell anaemia, in which red blood cells are distorted into a sickle shape by a mutation in the gene for haemoglobin. People with the condition are usually treated with blood transfusions and drugs to ease the symptoms, but even so they may well die in their 40s or 50s. Some are offered a bone marrow transplant, although perhaps only 1 in 3 can find a donor who is a good match genetically and whose cells are thus unlikely to be rejected by their body. “The biggest issue with treating disease with stem cells is the immune system,” says Flake.

And therein lies the main reason for trying a bone marrow transplant in an unborn baby: its immune system is not fully formed. At around the fourteenth week of pregnancy, the fetus’s immune system learns not to attack its own body by killing off any immune cells that react to the fetus’s own tissues. This raises the prospect of introducing donor stem cells during this learning window and so fooling the immune system into accepting those cells. “You can develop a state of complete tolerance to the donor,” says Flake. “If it works for sickle cell, then there are at least 30 related genetic disorders that could be treated.”

Read more …

Filed under congenital diseases fetus genetic disorders stem cells womb fetal surgery science

333 notes


Kim Peek, The Real Rain Man

Kim Peek, who lent inspiration to the fictional character Raymond Babbitt—played by Dustin Hoffman—in the movie Rain Man, was a remarkable savant. A savant is an individual who—with little or no apparent effort—completes intellectual tasks that would be impossible for ordinary people to master.

Kim Peek’s special abilities started early, around the age of a year and a half. He could read both pages of an open book at once, one page with one eye and the other with the other eye. This style of reading continued until his death in 2009. His reading comprehension was impressive: he would retain 98 percent of the information he read. Since he spent most of his days in the library with his dad, he quickly made it through thousands of books, encyclopedias and maps. He could read a thick book in an hour and remember just about anything in it. Because he could quickly absorb loads of information and recall it when necessary, his condition made him a living encyclopedia and a walking GPS. He could provide driving directions between almost any two cities in the world. He could also do calendar calculations (“which day was June 15, 1632?”) and remember old baseball scores and a vast amount of musical, historical and political facts. His memory abilities were astounding.
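Calendar calculation of the kind mentioned here is mechanical enough to sketch in a few lines: Zeller's congruence gives the weekday of any Gregorian date. (How savants actually perform the computation mentally is unknown; this is simply the standard arithmetic answer to the same question.)

```python
# Zeller's congruence: weekday of a date in the Gregorian calendar.

def weekday(year, month, day):
    if month < 3:          # Zeller treats January and February as
        month += 12        # months 13 and 14 of the previous year
        year -= 1
    k, j = year % 100, year // 100
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(weekday(1632, 6, 15))  # -> Tuesday
```

Note this answers in the proleptic Gregorian calendar; in 1632 much of Europe still used the Julian calendar, under which the same nominal date falls on a different weekday.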

Unlike many individuals with savant syndrome, Kim Peek was not afflicted with autistic spectrum disorder. Though he was strongly introverted, he did not have difficulties with social understanding and communication. The main cause of his remarkable abilities seems to have been the lack of connections between his brain’s two hemispheres. An MRI scan revealed an absence of the corpus callosum, the anterior commissure and the hippocampal commissure, the parts of the neurological system that transfer information between hemispheres. In some sense Kim was a natural born split-brain patient.

Read more

Filed under ACC Kim Peek congenital disorders corpus callosum memory savants split-brain neuroscience science
