Neuroscience

Articles and news from the latest research reports.

Posts tagged science

Learning a New Sense

Rats use a sense that humans don’t: whisking. They move their facial whiskers back and forth about eight times a second to locate objects in their environment. Could humans acquire this sense? And if they can, what could understanding the process of adapting to new sensory input tell us about how humans normally sense? At the Weizmann Institute, researchers explored these questions by attaching plastic “whiskers” to the fingers of blindfolded volunteers and asking them to carry out a location task. The findings, which recently appeared in the Journal of Neuroscience, have yielded new insight into the process of sensing, and they may point to new avenues in developing aids for the blind.

Filed under perception whiskers sensory perception neuroscience brain science

Inside the unconscious brain

A new study from MIT and Massachusetts General Hospital (MGH) reveals, for the first time, what happens inside the brain as patients lose consciousness during anesthesia.

By monitoring brain activity as patients were given a common anesthetic, the researchers were able to identify a distinctive brain activity pattern that marked the loss of consciousness. This pattern, characterized by very slow oscillation, corresponds to a breakdown of communication between different brain regions, each of which experiences short bursts of activity interrupted by longer silences.

“Within a small area, things can look pretty normal, but because of this periodic silencing, everything gets interrupted every few hundred milliseconds, and that prevents any communication,” says Laura Lewis, a graduate student in MIT’s Department of Brain and Cognitive Sciences (BCS) and one of the lead authors of a paper describing the findings in the Proceedings of the National Academy of Sciences this week.

This pattern may help anesthesiologists to better monitor patients as they receive anesthesia, preventing rare cases where patients awaken during surgery or stop breathing after excessive doses of anesthesia drugs.

Filed under brain brain activity anesthesia consciousness oscillations neuroscience psychology science

Controlling Vascular Disease May Be Key to Reducing Prevalence of Alzheimer’s Disease

Over the last 15 years, researchers have found a significant association between an increased risk of Alzheimer’s disease and vascular diseases such as hypertension, atherosclerosis, type 2 diabetes, hyperlipidemia, and heart disease. In a special issue of the Journal of Alzheimer’s Disease, leading experts provide a comprehensive overview of the pathological, biochemical, and physiological processes that contribute to Alzheimer’s disease risk, as well as interventions that may delay or reverse these age-related abnormalities.

“Vascular risk factors to Alzheimer’s disease offer the possibility of markedly reducing incident dementia by early identification and appropriate medical management of these likely precursors of cognitive deterioration and dementia,” says Guest Editor Jack C. de la Torre, MD, PhD, of the University of Texas, Austin. “Improved understanding coupled with preventive strategies could be a monumental step forward in reducing worldwide prevalence of Alzheimer’s disease, which is doubling every 20 years.”

The issue explores how vascular disease can affect cerebral blood flow and impair signaling, contributing to Alzheimer’s disease (AD). The diagnostics of cardiovascular risk factors in AD are addressed, as are potential therapeutic approaches.

Paradoxically, the presence of vascular risk factors in middle age is more strongly associated with the development of AD than late-life vascular disease. In fact, some research suggests that vascular symptoms later in life may have a protective effect against the development of the disease. The pathophysiological mechanisms that may underlie this phenomenon are discussed.

To date, trials targeting major cardiovascular risk factors to prevent AD remain inconclusive, but they have become an important focus of international research, as the contributors to this special volume describe in their overviews. The issue discusses the multifactorial nature of AD, the need to identify the proper time window for intervention, and the methodological issues that must be addressed to achieve an optimal design for new randomized controlled trials. Promising avenues for treatment are also discussed, including the potential of low-level light therapy to increase the rate of oxygen consumption in the brain and enhance cortical metabolic capacity, and the possibility that some antihypertensive drug classes reduce the risk and progression of AD more than others.

Dr. de la Torre notes that the presence of vascular risk factors is not an absolute pathway to dementia, and it may be as important to study how or why individuals who are cognitively normal but have vascular risk are able to avoid dementia. “Reducing Alzheimer’s disease prevalence by focusing right now on vascular risk factors to Alzheimer’s disease, even with our limited technology, is not a simple or easy task. But the task must begin somewhere and without delay because time is running out for millions of people whose destiny with dementia may start sooner rather than later,” he concludes.

(Source: alphagalileo.org)

Filed under vascular diseases alzheimer alzheimer's disease neuroscience science

Amyloid-beta peptide behind Alzheimer’s: hydrogen bonds analysed for the first time

Using solid-state nuclear magnetic resonance (NMR) spectroscopy, researchers at Luleå University of Technology, in collaboration with Warwick University in the UK, have for the first time managed to analyse hydrogen bonds in tiny fibrils of amyloid-beta peptide, which probably causes Alzheimer’s disease. These results provide a working method for analysing the structure of amyloid-beta peptides in their most toxic form, that is, when they are most dangerous to brain neurons.

“This is a very important step in research on Alzheimer’s disease at a molecular level,” says Oleg N. Antzutkin, professor of chemistry of interfaces at Luleå University of Technology.

Until a few years ago, scientists believed that amyloid plaques in the brain directly cause Alzheimer’s disease, because very large amounts of plaques are usually found in the brains of Alzheimer’s patients. Most of these plaques occur in the regions responsible for short-term memory, where brain activity is greatest; this is also where the disease usually first shows itself, as reduced short-term memory. However, it now appears that amyloid plaques are instead a residue of something worse.

Filed under NMR alzheimer alzheimer's disease amyloid-beta peptide neuroscience science

Noam Chomsky on Where Artificial Intelligence Went Wrong

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would show ultimate understanding, and have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, where animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, a book in which Skinner attempted to explain linguistic ability using behaviorist principles.

Read more

Filed under Noam Chomsky AI intelligence cognition behaviorism statistical models neuroscience psychology science

First gene therapy in Europe

The European Commission confirmed the EMA’s recommendation for market authorisation for the gene therapy Glybera (alipogene tiparvovec), a treatment for patients with lipoprotein lipase deficiency (LPLD) suffering from recurring acute pancreatitis. The rare, inherited disease affects about 350-700 patients in Europe. Patients are unable to metabolise fat particles carried in their blood, which leads to inflammation of the pancreas (pancreatitis), a potentially lethal condition. Until now, no gene therapy had been approved in the EU or the US. Glybera consists of an adeno-associated virus (AAV) vector expressing the LPL enzyme, supplying a working copy of the gene that is faulty in these patients.

“This therapy will have a dramatic impact on the lives of these patients. Currently their only recourse is to severely restrict the amount of fat they consume”, commented Professor John Kastelein from Academic Medical Center of the University of Amsterdam. “By helping to normalise the metabolism of fat, Glybera prevents inflammation of the pancreas, thereby averting the associated pain and suffering and, if administered early enough, the associated co-morbidities.”

Filed under gene therapy Glybera AAV Europe science

New metric to track prosthetic arm progress

A new validated and reliable measure of how well an adult amputee is able to perform everyday tasks with a prosthetic arm will help physical and occupational therapists, prosthetists, and doctors assess the progress that patients make during training with their new limb.

Amputees with a new prosthetic arm must learn how to use their device to perform everyday tasks that were once second nature. Taking off a shirt becomes a conscious, multistep effort: grasp the shirt, lift the shirt over the head, pull arms through the sleeves, place the shirt on the table, let go of the shirt.

In the best cases of treatment, patients work with teams of doctors, prosthetists, and therapists to learn how their new limbs can help them regain function and quality of life. But clinicians have had few tools to assess whether that crucial teaching/learning process is going well, because of a lack of standardized measurements to use with adults with upper limb amputations. To change that, a research team has unveiled a new index that clinicians can use to assess their patients’ progress. They describe the Activities Measure for Upper Limb Amputees (the AM-ULA) in an article published online Oct. 19 in the Archives of Physical Medicine and Rehabilitation.

Filed under prosthetics prosthetic arm evaluation amputation AM-ULA science

Why Children Think They Are Invisible when Covering Their Eyes

Dr. James Russell and a research team at the University of Cambridge recently published work on young children’s conception of personal visibility, which furthers the understanding of cognitive development and of our emerging sense of self.

The research involved children three to four years of age. Researchers placed an eye mask on each of the children and asked them if they could be seen when wearing it. They then asked each child if an adult who was wearing a similar mask could be seen. The majority of the children involved in the study believed they were not visible when wearing the mask. Most also believed that the adult wearing the eye mask was also hidden.

Additional tests revealed a further layer of complexity, demonstrating that although the children thought they were invisible when their eyes were covered, they still believed that their head and body could be seen.

The research team concluded, by a process of elimination, that the factor that makes children believe they are visible is eye contact with another person.

“… it would seem that children apply the principle of joint attention to the self and assume that for somebody to be perceived, experience must be shared and mutually known to be shared, as it is when two pairs of eyes meet,” the researchers reported. “Young children’s natural tendency to acquire knowledge intersubjectively, by joint attention, leads them to undergo a developmental period in which they believe the self is something that must be mutually experienced for it to be perceived.”

Evidently, young children believe the self can be perceived only through mutual eye contact with another person. The implications point to a simple but necessary way to make children feel present and involved. Cultures worldwide seem to have some version of “peek-a-boo,” as a quick Google image search reveals. Lack of eye contact has been identified as an early sign of autism, while the presence of eye contact is associated with empathy. Dr. Russell’s team appears to have discovered a key facet of cognitive development.

The results of Dr. Russell’s study were published in the Journal of Cognition and Development.

(Source: united-academics.org)

Filed under children personal visibility eye contact perception neuroscience psychology science
