Neuroscience

Articles and news from the latest research reports.

Posts tagged brain


Microbleeding in Brain May Be Behind Senior Moments

People may grow wiser with age, but they don’t grow smarter. Many of our mental abilities decline after midlife, and now researchers say that they’ve fingered a culprit. A study presented last week at the annual meeting of the Association for Psychological Science points to microbleeding in the brain caused by stiffening arteries. The finding may lead to new therapies to combat senior moments.

This isn’t the first time that microbleeds have been suspected as a cause of cognitive decline. “We have known [about them] for some time thanks to neuroimaging studies,” says Matthew Pase, a psychology Ph.D. student at Swinburne University of Technology in Melbourne, Australia. The brains of older people are sometimes peppered with dark splotches where blood vessels have burst and created tiny dead zones of tissue. How important these microbleeds are to cognitive decline, and what causes them, have remained open questions, however.

Pase wondered if high blood pressure might be behind the microbleeds. The brain is a very blood-hungry organ, he notes. “It accounts for only 2% of the body weight yet receives 15% of the cardiac output and consumes 20% of the body’s oxygen expenditure.” Rather than getting the oxygen in pulses, the brain needs a smooth, continuous supply. So the aorta, the largest blood vessel branching off the heart, smooths out blood pressure before it reaches the brain by absorbing the pressure with its flexible walls. But as people age, the aorta stiffens. That translates to higher pressure on the brain, especially during stress. The pulse of blood can be strong enough to burst vessels in the brain, resulting in microbleeds.

A stumbling block has been accurately measuring the blood pressure that the brain experiences. The hand-pumped armband devices commonly used in doctor’s offices measure only the local pressure of blood in the arm, known as the brachial pressure. To calculate aorta stiffness, the “central blood pressure” in the aorta is needed. A technique for measuring central blood pressure was developed in the late 1990s, called applanation tonometry (AT). It works by comparing the pressure wave of blood from the heart with the reflected pressure wave from the vessels farthest from the heart—the aorta stiffness is calculated from the difference in pressure from the two. Devices for measuring AT have appeared on the market that are fast and painless.
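The reflected-wave idea lends itself to a toy simulation. The sketch below (plain Python, with invented waveforms and numbers; it is not the clinical AT algorithm) shows why an aorta that reflects the pulse earlier and more strongly produces a higher central systolic peak:

```python
import math

def central_pulse(reflection_delay, reflection_gain, n=200):
    """Toy central pressure wave: a forward half-sine ejection wave from
    the heart plus a reflected wave returning from the periphery after
    `reflection_delay` samples. A stiffer aorta reflects sooner and more
    strongly."""
    forward = [math.sin(math.pi * i / n) for i in range(n)]
    wave = list(forward)
    for i in range(n):
        if i >= reflection_delay:
            wave[i] += reflection_gain * forward[i - reflection_delay]
    return wave

# Compliant aorta: late, weak reflection. Stiff aorta: early, strong one.
compliant = central_pulse(reflection_delay=120, reflection_gain=0.3)
stiff = central_pulse(reflection_delay=30, reflection_gain=0.7)

print(max(stiff) > max(compliant))  # → True: early reflection augments the peak
```

Real applanation tonometry works on measured waveforms, but the same comparison of forward and reflected components underlies the stiffness estimate.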

To see if central blood pressure and aorta stiffening are related to cognitive abilities, Pase and colleagues recruited 493 people in Melbourne, 20 to 82 years old. They made traditional blood pressure measurements and also used AT to measure central blood pressure and estimate aorta stiffness. They also measured their subjects’ cognitive abilities with a standard battery of computer tests.

Central blood pressure and aorta stiffness alone were sensitive predictors of cognitive abilities, Pase reported at the meeting. The higher the central pressure and aorta stiffness, the worse people tended to perform on tests of visual processing and memory. The traditional measures of blood pressure in the arm correlated with scores on only one test of visual processing.

To prove that aorta stiffening causes microbleeds, the researchers will need to repeat the experiment on the same people over the course of several years, using neuroimaging as well to establish that aorta stiffening leads to both microbleeding and cognitive decline. Pase notes that other causes of microbleeding have been proposed, such as weakening of blood vessels in the brain.

“This work is so important because the problem is so pervasive,” says Earl Hunt, a veteran intelligence researcher at the University of Washington, Seattle, who was not involved in the work. The individual effects of these microbleeds are probably too small to measure. “But even a trifling difference multiplied a million times is big,” he says. Pase’s collaborator at Swinburne, Con Stough, is now leading a study of how to prevent microbleeding through dietary supplements. He proposes that the elasticity of the aorta could be preserved by providing fatty acids or antioxidants that help maintain its structure. The results are expected in 2015.

Filed under brain microbleeding cognitive decline blood vessels blood pressure psychology neuroscience science


Neuroscientists get yes-no answers via brain activity

Western researchers have used neuroimaging to read human thought from brain activity as people convey specific ‘yes’ or ‘no’ answers.

Their findings were published today in The Journal of Neuroscience in a study titled “The Brain’s Silent Messenger: Using Selective Attention to Decode Human Thought for Brain-Based Communication.”

According to lead researcher Lorina Naci, the interpretation of human thought from brain activity – without depending on speech or action – is one of the most provocative and challenging frontiers of modern neuroscience. In particular, patients who are fully conscious and awake yet, due to brain damage, unable to show any behavioral response expose the limits of the neuromuscular system and the need for alternative forms of communication.

Participants were asked to concentrate on a ‘yes’ or ‘no’ response to questions like “Are you married?” or “Do you have brothers and sisters?” and only think their response, not speak it.

“This novel method allowed healthy individuals to answer questions asked in the scanner, simply by paying attention to the word they wanted to convey. By looking at their brain activity we were able to correctly decode the answers for each individual,” said Naci, a postdoctoral fellow at Western’s Brain and Mind Institute. “The majority of volunteers conveyed their answers within three minutes of scanning, a time window that is well-suited for communication with brain-computer interfaces.”
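In spirit, the decoding step can be pictured as template matching: compare the observed activity pattern against a ‘yes’ template and a ‘no’ template and pick the closer one. The sketch below uses made-up 8-element “voxel” patterns, not the study’s fMRI data or its actual classifier:

```python
import random

random.seed(1)

# Hypothetical activity templates for attending to each word.
TEMPLATES = {"yes": [1, 0, 1, 0, 1, 0, 1, 0],
             "no":  [0, 1, 0, 1, 0, 1, 0, 1]}

def observe(word, noise=0.5):
    """Simulated scan while the participant attends to `word`."""
    return [v + random.gauss(0.0, noise) for v in TEMPLATES[word]]

def decode(pattern):
    """Nearest-template decoding by squared distance."""
    def dist(word):
        return sum((a - b) ** 2 for a, b in zip(pattern, TEMPLATES[word]))
    return min(TEMPLATES, key=dist)

# Nearly all of 50 simulated trials decode correctly at this noise level.
hits = sum(decode(observe(w)) == w for w in ["yes", "no"] * 25)
print(hits)
```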

Naci and her Western colleagues Rhodri Cusack, Vivian Z. Jia and Adrian Owen are now utilizing this method to communicate with behaviorally non-responsive patients, who may be misdiagnosed as being in a vegetative state.

“The strengths of this technique, especially its ease of use, robustness, and rapid detection, may maximize the chances that any such patient will be able to achieve brain-based communication,” Naci said.

Filed under brain brain activity neuroimaging neuromuscular system vegetative state neuroscience science


Art appreciation is measurable

Is it your own innate taste or what you have been taught that decides if you like a work of art? Both, according to an Australian-Norwegian research team.

Have you experienced seeing a painting or a play that has left you with no feelings whatsoever, whilst a friend thought it was beautiful and meaningful? Experts have argued for years about the feasibility of researching art appreciation, and what should be taken into consideration.

Neuroscientists believe that biological processes that take place in the brain decide whether one likes a work of art or not. Historians and philosophers say that this is far too narrow a viewpoint. They believe that what you know about the artist’s intentions, when the work was created, and other external factors, also affect how you experience a work of art.

Building bridges
A new model that combines both the historical and the psychological approach has been developed.

“We think that both traditions are just as important, although incomplete. We want to show that they complement each other,” says Rolf Reber, Professor of Psychology at the University of Bergen. Together with Nicolas Bullot, Doctor of Philosophy at Macquarie University in Australia, he has developed a new model to help us understand art appreciation. The results have been published in ‘Behavioral and Brain Sciences’ and are commented on by 27 scientists from different disciplines.

“Neuroscientists often measure brain activity to find out how much a test subject likes a work of art, without investigating whether he or she actually understands the work. This is insufficient, as artistic understanding also affects assessment,” says Reber.

Eye-opening experience
“We know from earlier research that a painting that is difficult – yet possible – to interpret is felt to be more meaningful than a painting that one looks at and understands immediately. The painter Eugène Delacroix made use of this fact to depict war. Joseph Mallord William Turner did the same in ‘Snow Storm’. When you have to struggle to understand, you can have an eye-opening experience, which the brain appreciates,” explains Reber.

He hopes that other scientists will use the Australian-Norwegian model.
“By measuring brain activity, interviewing test subjects about their thoughts and reactions, and charting their artistic knowledge, it’s possible to gain new and exciting insight into what makes people appreciate good works of art. The model can be used for visual art, music, theatre and literature,” says Reber.

Filed under brain brain activity art appreciation art psychology neuroscience science


Scientists discover the origin of a giant synapse

Humans and most mammals can determine the spatial origin of sounds with remarkable acuity. We use this ability all the time—crossing the street; locating an invisible ringing cell phone in a cluttered bedroom. To accomplish this small daily miracle, the brain has developed a circuit that’s rapid enough to detect the tiny lag that occurs between the moment the auditory information reaches one of our ears, and the moment it reaches the other. The mastermind of this circuit is the “Calyx of Held,” the largest known synapse in the brain. EPFL scientists have revealed the role that a certain protein plays in initiating the growth of these giant synapses.
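The tiny interaural lag can be estimated by a coincidence-detection scheme: slide one ear’s signal against the other and find the delay at which they line up best. A toy version in Python (white-noise “sound” and illustrative numbers; not a model of the actual brainstem circuit):

```python
import random

random.seed(2)

# Toy binaural input: white-noise "sound" at the left ear, and the same
# waveform arriving at the right ear 7 samples later (source on the left).
noise = [random.uniform(-1.0, 1.0) for _ in range(200)]
left = noise
right = [0.0] * 7 + noise[:193]

def estimate_itd(left, right, max_lag=20):
    """Coincidence detection: the lag that best aligns the two ears'
    signals is the interaural time difference."""
    def score(lag):
        return sum(left[t - lag] * right[t]
                   for t in range(max_lag, len(right) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=score)

print(estimate_itd(left, right))  # → 7
```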

The discovery, published in Nature Neuroscience, could also help shed light on a number of neuropsychiatric disorders.

Enormous synapses enable faster communication

Ordinarily, neurons have thousands of contact points, known as synapses, with neighboring neurons. Within a given time frame, a neuron has to receive several signals from its neighbors in order to fire its own signal in response. Because of this, information passes from neuron to neuron in a relatively random manner.

In the auditory part of the brain, this is not the case. Synapses often grow to extremely large sizes, and these behemoths are known as “Calyx of Held” synapses. Because they have hundreds of contact points, they are capable of transmitting a signal singlehandedly to a neighboring neuron. “It’s almost like peer-to-peer communication between neurons,” explains EPFL professor Ralf Schneggenburger, who led the study. The result is that information is processed extremely quickly, in a few fractions of a millisecond, instead of the slower pace of more than 10 milliseconds that occurs in most other neuronal circuits.

Identifying the protein

To isolate the protein responsible for controlling the growth of this gigantic synapse, the scientists had to perform painstaking research. Using methods for analyzing gene expression in mice, they identified several members of the “BMP” family of proteins from among more than 20,000 possible candidates.

To verify that they had truly identified the right protein, the researchers disabled BMP protein receptors in the auditory part of a mouse brain. “The resulting electrophysiological signal of the Calyx of Held was significantly altered,” explains Le Xiao, first author on the study. “This would suggest a large anatomical difference.”

The scientists then reconstructed the synapses in three dimensions from slices that were observed under an electron microscope. Instead of a single, massive Calyx of Held, which would encompass nearly half the neuron, the 3D image of the neuron clearly shows several, smaller synapses. “This shows that the process involving the BMP protein not only causes that one synapse to grow, but also performs a selection, by eliminating the others,” says Schneggenburger.

Synaptic connectivity, the key to many psychiatric puzzles

The impact of this study will go well beyond increasing our understanding of the auditory system. The results suggest that the BMP protein plays an important role in developing connectivity in the brain. Schneggenburger and his colleagues are currently investigating its role elsewhere in the brain. “Some neuropsychiatric disorders, such as schizophrenia and autism, are characterized by the abnormal development of synaptic connectivity in certain key parts of the brain,” explains Schneggenburger. By identifying and explaining the role of various proteins in this process, the scientists hope to be able to shed more light on these poorly understood disorders.

Filed under brain synapses calyx of held synapses neurons auditory system psychiatric disorders neuroscience science


Brain uses internal ‘average voice’ prototype to identify who is talking

The human brain is able to identify individuals’ voices by comparing them against an internal ‘average voice’ prototype, according to neuroscientists.

A study carried out by researchers at the University of Glasgow and reported in the journal Current Biology demonstrates that voice identity is coded in the brain by reference to two internal voice prototypes – one male, one female.

Voices that have the greatest difference from the prototype are perceived as more distinctive and produce greater neural activity than voices deemed very similar.

The researchers in the Institute of Neuroscience & Psychology conducted the study by generating a voice prototype through morphing 32 same-gender voices together, resulting in a smooth, idealised voice with few irregularities.

They then generated different voices by altering the ‘distance-to-mean’ of the prototype voice – for example, changing the tone and pitch or morphing two or more voices together.

Using functional Magnetic Resonance Imaging (fMRI), the researchers were able to see increased neural activity the further from the prototype the voices were.

Professor Pascal Belin said: “Like faces, voices can be used to identify a person, yet the neural basis of this ability remains poorly understood. Here we provide the first evidence of a norm-based coding mechanism the brain uses to identify a speaker.

“The research indicates this is a similar process for the identification of faces, where the brain also uses an average face to compare against other faces it encounters in order to establish identity.

“So, rather than having to remember each single voice it hears every day for a lifetime, the brain facilitates the task of identification by remembering only the differences from the prototype it stores.

“It leads to a range of interesting and important questions, such as whether the prototypes are innate, stored templates or whether they are subject to environmental and cultural influences. Could the prototype consist of an average of all voices experienced during one’s life?”

(Image: Shutterstock)

Filed under neural activity prototype voice voices brain auditory cortex fMRI neuroscience science

Common Brain Processes of Anesthetic-Induced Unconsciousness Identified 
A study from the June issue of Anesthesiology found that feedback from the front region of the brain is a crucial building block for consciousness, and that its disruption is associated with unconsciousness when the anesthetics ketamine, propofol or sevoflurane are administered.

Brain centers and mechanisms of consciousness have not been well understood, resulting in a need for better monitors of consciousness during anesthesia. In addition, how anesthetics with different structures and pharmacological properties can generate unconsciousness has been a persistent question in anesthesiology since the beginning of the field in the mid-19th century.

A team of researchers from the University of Michigan, Ann Arbor, Mich., and Asan Medical Center, Seoul, South Korea, conducted a brain wave (electroencephalographic, or EEG) study of the front and back regions of the brain in 30 surgical patients who received intravenous ketamine. They compared the results of this study to the EEG data collected from 18 surgical patients who received either intravenous propofol or inhaled sevoflurane in a previous study. These three anesthetics, known to act on different parts of the brain and produce different EEG patterns, had the same effect of disrupting communication in the brain.
“Understanding a commonality among the actions of these diverse drugs could lead to a more comprehensive theory of how general anesthetics induce unconsciousness,” said study author George Mashour, M.D., Ph.D., assistant professor and associate chair for faculty affairs, Department of Anesthesiology, University of Michigan. “Our research shows that studying general anesthesia from the perspective of consciousness may be a fruitful approach and create new avenues for further investigation of anesthetic mechanisms and monitoring.”

An accompanying editorial by Jamie W. Sleigh, M.D., professor of anaesthesiology and intensive care, Department of Anaesthesia, University of Auckland, Hamilton, New Zealand, supported the study’s ability to better understand the neurobiology of consciousness.

“If the study’s findings are confirmed by subsequent work, the paper will achieve landmark status,” said Dr. Sleigh. “The study not only sheds light on the phenomenon of general anesthesia, but also how it is necessary for certain regions of the brain to communicate accurately with one another for consciousness to emerge.”

In addition, Dr. Sleigh recognized the study’s potential to lead to the development of better depth-of-anesthesia monitors that work for all general anesthetics.

(Image: Shutterstock)

Common Brain Processes of Anesthetic-Induced Unconsciousness Identified

A study from the June issue of Anesthesiology found that feedback from the front region of the brain is a crucial building block of consciousness, and that its disruption is associated with unconsciousness when the anesthetics ketamine, propofol or sevoflurane are administered.

Brain centers and mechanisms of consciousness have not been well understood, resulting in a need for better monitors of consciousness during anesthesia. In addition, how anesthetics with different structures and pharmacological properties can generate unconsciousness has been a persistent question in anesthesiology since the beginning of the field in the mid-19th century.

A team of researchers from the University of Michigan, Ann Arbor, Mich., and Asan Medical Center, Seoul, South Korea, conducted a brain wave (electroencephalographic, or EEG) study of the front and back regions of the brain in 30 surgical patients who received intravenous ketamine. They compared the results of this study to the EEG data collected from 18 surgical patients who received either intravenous propofol or inhaled sevoflurane in a previous study. These three anesthetics, known to act on different parts of the brain and produce different EEG patterns, had the same effect of disrupting communication in the brain.

“Understanding a commonality among the actions of these diverse drugs could lead to a more comprehensive theory of how general anesthetics induce unconsciousness,” said study author George Mashour, M.D., Ph.D., assistant professor and associate chair for faculty affairs, Department of Anesthesiology, University of Michigan. “Our research shows that studying general anesthesia from the perspective of consciousness may be a fruitful approach and create new avenues for further investigation of anesthetic mechanisms and monitoring.”

An accompanying editorial by Jamie W. Sleigh, M.D., professor of anaesthesiology and intensive care, Department of Anaesthesia, University of Auckland, Hamilton, New Zealand, praised the study’s contribution to understanding the neurobiology of consciousness.

“If the study’s findings are confirmed by subsequent work, the paper will achieve landmark status,” said Dr. Sleigh. “The study not only sheds light on the phenomenon of general anesthesia, but also how it is necessary for certain regions of the brain to communicate accurately with one another for consciousness to emerge.”

In addition, Dr. Sleigh recognized the study’s potential to lead to the development of better depth-of-anesthesia monitors that work for all general anesthetics.

(Image: Shutterstock)

Filed under anesthetics consciousness anesthesia brain frontal cortex cortical feedback neuroscience science


Brain can be trained in compassion

Until now, little was scientifically known about the human potential to cultivate compassion — the emotional state of caring for people who are suffering in a way that motivates altruistic behavior.


A new study by researchers at the Center for Investigating Healthy Minds at the Waisman Center of the University of Wisconsin-Madison shows that adults can be trained to be more compassionate. The report, recently published online in the journal Psychological Science, is the first to investigate whether training adults in compassion can result in greater altruistic behavior and related changes in neural systems underlying compassion.

"Our fundamental question was, ‘Can compassion be trained and learned in adults? Can we become more caring if we practice that mindset?’" says Helen Weng, a graduate student in clinical psychology and lead author of the paper. "Our evidence points to yes."

In the study, the investigators trained young adults to engage in compassion meditation, an ancient Buddhist technique to increase caring feelings for people who are suffering. In the meditation, participants envisioned a time when someone had suffered and then practiced wishing that his or her suffering be relieved. They repeated phrases to help them focus on compassion such as, “May you be free from suffering. May you have joy and ease.”

Participants practiced with different categories of people, starting with a loved one, someone for whom they easily felt compassion, such as a friend or family member. Then they practiced compassion for themselves and, after that, for a stranger. Finally, they practiced compassion for someone with whom they were actively in conflict, called the “difficult person,” such as a troublesome coworker or roommate.

"It’s kind of like weight training," Weng says. "Using this systematic approach, we found that people can actually build up their compassion ‘muscle’ and respond to others’ suffering with care and a desire to help."

Compassion training was compared to a control group that learned cognitive reappraisal, a technique where people learn to reframe their thoughts to feel less negative. Both groups listened to guided audio instructions over the Internet for 30 minutes per day for two weeks. “We wanted to investigate whether people could begin to change their emotional habits in a relatively short period of time,” says Weng.

The real test of whether compassion could be trained was to see if people would be willing to be more altruistic — even helping people they had never met. The research tested this by asking the participants to play a game in which they were given the opportunity to spend their own money to respond to someone in need (called the “Redistribution Game”). They played the game over the Internet with two anonymous players, the “Dictator” and the “Victim.” They watched as the Dictator shared an unfair amount of money (only $1 out of $10) with the Victim. They then decided how much of their own money to spend (out of $5) in order to equalize the unfair split and redistribute funds from the Dictator to the Victim.
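The payoff logic of the Redistribution Game can be sketched in a few lines. The article specifies only the endowments (the Dictator splits $10, the participant holds $5) and the $1 offer; the 2-to-1 transfer rate below is an assumption for illustration, not a detail from the study.

```python
def redistribution_outcome(spent, transfer_rate=2, endowment=5,
                           dictator=9, victim=1):
    """Outcome after the participant spends `spent` dollars.

    Each dollar spent moves `transfer_rate` dollars from the Dictator
    to the Victim (the 2:1 rate is an assumption, not from the article).
    """
    if not 0 <= spent <= endowment:
        raise ValueError("can spend between $0 and the endowment")
    moved = min(transfer_rate * spent, dictator)
    return {"participant": endowment - spent,
            "dictator": dictator - moved,
            "victim": victim + moved}

print(redistribution_outcome(0))   # keep everything, help no one
print(redistribution_outcome(2))   # equalizes the $9/$1 split at $5/$5
```

Under this assumed rate, fully equalizing the split costs the participant $2 of their $5, so the amount spent is a direct measure of costly altruism.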

"We found that people trained in compassion were more likely to spend their own money altruistically to help someone who was treated unfairly than those who were trained in cognitive reappraisal," Weng says.

"We wanted to see what changed inside the brains of people who gave more to someone in need. How are they responding to suffering differently now?" asks Weng. The study measured changes in brain responses using functional magnetic resonance imaging (fMRI) before and after training. In the MRI scanner, participants viewed images depicting human suffering, such as a crying child or a burn victim, and generated feelings of compassion towards the people using their practiced skills. The control group was exposed to the same images and asked to recast them in a more positive light, as practiced in reappraisal.

The researchers measured how much brain activity had changed from the beginning to the end of the training, and found that the people who were the most altruistic after compassion training were the ones who showed the most brain changes when viewing human suffering. They found that activity was increased in the inferior parietal cortex, a region involved in empathy and understanding others. Compassion training also increased activity in the dorsolateral prefrontal cortex and the extent to which it communicated with the nucleus accumbens, brain regions involved in emotion regulation and positive emotions.

"People seem to become more sensitive to other people’s suffering, but this is challenging emotionally. They learn to regulate their emotions so that they approach people’s suffering with caring and wanting to help rather than turning away," explains Weng.

Compassion, like physical and academic skills, appears to be something that is not fixed, but rather can be enhanced with training and practice. “The fact that alterations in brain function were observed after just a total of seven hours of training is remarkable,” explains UW-Madison psychology and psychiatry professor Richard J. Davidson, founder and chair of the Center for Investigating Healthy Minds and senior author of the article.

"There are many possible applications of this type of training," Davidson says. "Compassion and kindness training in schools can help children learn to be attuned to their own emotions as well as those of others, which may decrease bullying. Compassion training also may benefit people who have social challenges such as social anxiety or antisocial behavior."

Weng is also excited about how compassion training can help the general population. “We studied the effects of this training with healthy participants, which demonstrated that this can help the average person. I would love for more people to access the training and try it for a week or two — what changes do they see in their own lives?”

Both compassion and reappraisal trainings are available on the Center for Investigating Healthy Minds’ website. “I think we are only scratching the surface of how compassion can transform people’s lives,” says Weng.

(Source: news.wisc.edu)

Filed under compassion altruistic behavior brain activity brain psychology neuroscience science


B vitamins could delay dementia

Despite spending billions of dollars on research and development, drug companies have been unable to come up with effective treatments for dementia and Alzheimer’s Disease (AD). Now, A. David Smith at the University of Oxford and his colleagues have discovered that, in some patients experiencing mild cognitive impairment (MCI), a cocktail of high-dose B vitamins could slow the gray matter loss associated with progression to AD. The study appears in the Proceedings of the National Academy of Sciences.

The World Health Organization predicts that between 2010 and 2050 the number of dementia cases will increase from 26 million to 115 million worldwide. Although there is an urgent demand for treatment, pharmaceutical companies have been unable to develop drugs that will delay or cure dementia. So far, approved drugs merely ease symptoms.

Smith and his team wanted to see if B vitamins reduced the risk of AD by lowering total homocysteine (tHcy) levels. There is a positive correlation between high tHcy levels and risk of cognitive impairment and AD.

The researchers studied 156 subjects over 70 in Oxford, England who suffered from MCI. The subjects received either a placebo or a high-dose B vitamin cocktail consisting of 20 milligrams of vitamin B6, 0.5 milligrams of vitamin B12 and 0.8 milligrams of folic acid.

Over a two-year period, subjects in both the experimental and control groups lost gray matter in the medial temporal, lateral temporoparietal and occipital regions and in the anterior and posterior cingulate cortex.

However, those receiving B vitamin treatment experienced significantly less atrophy in regions of the brain most affected in people with AD and people with MCI who go on to develop AD. These include the bilateral hippocampus, the parahippocampal gyrus, the retrosplenial precuneus, the lingual gyrus, the fusiform gyrus and the cerebellum. The placebo group experienced a 3.7 percent loss of gray matter in these regions, compared with a 0.5 percent loss among the experimental group.

When they looked at baseline tHcy levels, Smith and his colleagues found that B-vitamin treatment did not significantly reduce gray matter atrophy among subjects with tHcy levels below the median. The B-vitamin cocktail did have a significant effect on high-tHcy participants: those receiving the cocktail experienced only a 0.6 percent loss of gray matter, while high-tHcy participants in the placebo group experienced a 5.2 percent loss.
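The median-split analysis described above is straightforward to sketch: divide participants at the sample median of baseline tHcy, then compare mean atrophy by treatment arm within the high-tHcy half. The records below are invented toy data for illustration, not the study’s measurements.

```python
import statistics

def median_split_effect(records):
    """Mean atrophy for treated vs placebo subjects whose baseline
    tHcy lies above the sample median (toy data, not study data)."""
    med = statistics.median(r["thcy"] for r in records)
    high = [r for r in records if r["thcy"] > med]
    mean_for = lambda arm: statistics.mean(
        r["atrophy"] for r in high if r["arm"] == arm)
    return mean_for("b-vitamins"), mean_for("placebo")

records = [
    {"arm": "b-vitamins", "thcy": 14.0, "atrophy": 0.5},
    {"arm": "b-vitamins", "thcy": 15.2, "atrophy": 0.7},
    {"arm": "placebo",    "thcy": 14.5, "atrophy": 5.0},
    {"arm": "placebo",    "thcy": 15.8, "atrophy": 5.4},
    {"arm": "b-vitamins", "thcy":  9.1, "atrophy": 1.1},
    {"arm": "placebo",    "thcy":  8.7, "atrophy": 1.2},
]
treated, placebo = median_split_effect(records)
print(f"high-tHcy atrophy: treated {treated:.1f}%, placebo {placebo:.1f}%")
```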

The team found a correlation between gray matter loss and worsening of scores on tests that measure cognitive function.

A causal Bayesian network analysis suggested a chain of effects: B vitamins lower tHcy levels, lower tHcy reduces gray matter atrophy, and reduced atrophy delays cognitive decline.
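That causal chain can be illustrated with a toy linear structural-equation simulation. Every coefficient below is invented for illustration only; none is an estimate from the study.

```python
import random

# Toy structural-equation sketch of the proposed chain:
# treatment -> lower tHcy -> less atrophy -> less cognitive decline.
def simulate(treated, rng):
    thcy = 13.0 - (4.0 if treated else 0.0) + rng.gauss(0, 1)
    atrophy = 0.4 * thcy + rng.gauss(0, 0.5)      # % gray matter lost
    decline = 0.8 * atrophy + rng.gauss(0, 0.5)   # test-score drop
    return decline

rng = random.Random(1)
placebo = sum(simulate(False, rng) for _ in range(5000)) / 5000
vitamins = sum(simulate(True, rng) for _ in range(5000)) / 5000
print(f"mean decline: placebo {placebo:.2f}, B vitamins {vitamins:.2f}")
```

Because the treatment acts only through tHcy in this toy model, its entire benefit to cognition is mediated by reduced atrophy, which is the structure the network analysis supports.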

Filed under alzheimer's disease B vitamins cognitive impairment gray matter brain neuroscience science


Mediterranean diet seems to boost ageing brain power

A Mediterranean diet with added extra virgin olive oil or mixed nuts seems to preserve the brain power of older people better than advice to follow a low-fat diet, indicates research published online in the Journal of Neurology, Neurosurgery and Psychiatry.

The authors from the University of Navarra in Spain base their findings on 522 men and women aged between 55 and 80 without cardiovascular disease but at high vascular risk because of underlying disease/conditions.

These included either type 2 diabetes or three of the following: high blood pressure; an unfavourable blood fat profile; overweight; a family history of early cardiovascular disease; and being a smoker.

Participants, who were all taking part in the PREDIMED trial looking at how best to ward off cardiovascular disease, were randomly allocated to a Mediterranean diet with added olive oil or mixed nuts, or to a control group receiving advice to follow the low-fat diet typically recommended to prevent heart attack and stroke.

A Mediterranean diet is characterised by the use of virgin olive oil as the main culinary fat; high consumption of fruits, nuts, vegetables and pulses; moderate to high consumption of fish and seafood; low consumption of dairy products and red meat; and moderate intake of red wine.

Participants had regular check-ups with their family doctor and quarterly checks on their compliance with their prescribed diet.

After an average of 6.5 years, they were tested for signs of cognitive decline using a Mini-Mental State Exam and a clock drawing test, which assess higher brain functions, including orientation, memory, language, visuospatial and visuoconstruction abilities and executive functions such as working memory, attention span, and abstract thinking.

At the end of the study period, 60 participants had developed mild cognitive impairment: 18 on the olive oil supplemented Mediterranean diet; 19 on the diet with added mixed nuts; and 23 in the control group.

A further 35 people developed dementia: 12 on the added olive oil diet; six on the added nut diet; and 17 on the low fat diet.
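The reported counts translate into rough per-arm incidence rates. The article does not give the size of each arm, so equal arms of 174 participants (522 / 3) are assumed here purely for illustration.

```python
# Case counts reported in the article, by diet arm.
cases = {
    "med + olive oil": {"mci": 18, "dementia": 12},
    "med + nuts":      {"mci": 19, "dementia": 6},
    "low fat":         {"mci": 23, "dementia": 17},
}
arm_size = 522 // 3  # assumption: equal arms of 174 participants

for arm, c in cases.items():
    total = c["mci"] + c["dementia"]
    print(f"{arm}: {total} cases, {100 * total / arm_size:.1f}% of arm")
```

Under that equal-arms assumption, the low-fat arm has the highest combined rate of impairment or dementia, consistent with the test-score results reported below.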

The average scores on both tests were significantly higher for those following either of the Mediterranean diets compared with those on the low fat option.

These findings held true irrespective of other influential factors, including age, family history of cognitive impairment or dementia, the presence of ApoE protein (associated with Alzheimer’s disease), educational attainment, exercise levels, vascular risk factors, energy intake and depression.

The authors acknowledge that their sample size was relatively small, and that because the study involved a group at high vascular risk, it doesn’t necessarily follow that their findings are applicable to the general population.

But, they say, theirs is the first long-term trial to look at the impact of the Mediterranean diet on brain power, and it adds to the increasing body of evidence suggesting that a high quality dietary pattern protects cognitive function in the ageing brain.

Filed under mediterranean diet brain cognitive function aging cardiovascular disease neuroscience science


Complex brain function depends on flexibility

Over the past few decades, neuroscientists have made much progress in mapping the brain by deciphering the functions of individual neurons that perform very specific tasks, such as recognizing the location or color of an object.

However, there are many neurons, especially in brain regions that perform sophisticated functions such as thinking and planning, that don’t fit into this pattern. Instead of responding exclusively to one stimulus or task, these neurons react in different ways to a wide variety of things. MIT neuroscientist Earl Miller first noticed these unusual activity patterns about 20 years ago, while recording the electrical activity of neurons in animals that were trained to perform complex tasks.

“We started noticing early on that there are a whole bunch of neurons in the prefrontal cortex that can’t be classified in the traditional way of one message per neuron,” recalls Miller, the Picower Professor of Neuroscience at MIT and a member of MIT’s Picower Institute for Learning and Memory.

In a paper appearing in Nature on May 19, Miller and colleagues at Columbia University report that these neurons are essential for complex cognitive tasks, such as learning new behavior. The Columbia team, led by the study’s senior author, Stefano Fusi, developed a computer model showing that without these neurons, the brain can learn only a handful of behavioral tasks.

“You need a significant proportion of these neurons,” says Fusi, an associate professor of neuroscience at Columbia. “That gives the brain a huge computational advantage.”

Lead author of the paper is Mattia Rigotti, a former grad student in Fusi’s lab.

Multitasking neurons

Miller and other neuroscientists who first identified this neuronal activity observed that while the patterns were difficult to predict, they were not random. “In the same context, the neurons always behave the same way. It’s just that they may convey one message in one task, and a totally different message in another task,” Miller says.

For example, a neuron might distinguish between colors during one task, but issue a motor command under different conditions.

Miller and colleagues proposed that this type of neuronal flexibility is key to cognitive flexibility, including the brain’s ability to learn so many new things on the fly. “You have a bunch of neurons that can be recruited for a whole bunch of different things, and what they do just changes depending on the task demands,” he says.

At first, that theory encountered resistance “because it runs against the traditional idea that you can figure out the clockwork of the brain by figuring out the one thing each neuron does,” Miller says.

For the new Nature study, Fusi and colleagues at Columbia created a computer model to determine more precisely what role these flexible neurons play in cognition, using experimental data gathered by Miller and his former grad student, Melissa Warden. That data came from one of the most complex tasks that Miller has ever trained a monkey to perform: The animals looked at a sequence of two pictures and had to remember the pictures and the order in which they appeared.

During this task, the flexible neurons, known as “mixed selectivity neurons,” exhibited a great deal of nonlinear activity — meaning that their responses to a combination of factors cannot be predicted based on their response to each individual factor (such as one image).
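The idea of nonlinear mixed selectivity can be made concrete with a classic XOR-style toy example (not taken from the paper). No weighting of "pure" single-factor responses can signal "exactly one factor present," but adding a single conjunction-sensitive mixed unit makes that signal linearly readable.

```python
# Two binary task factors: image identity (a) and task rule (b).
def pure_unit(a, b):
    return a + b      # additive: predictable from each factor alone

def mixed_unit(a, b):
    return a * b      # nonlinear conjunction: fires only for a AND b

# Target behavior: respond when exactly one factor is present (XOR).
# With the mixed unit available, a fixed linear readout suffices:
#   a + b - 2 * (a AND b) = XOR(a, b)
for a in (0, 1):
    for b in (0, 1):
        readout = pure_unit(a, b) - 2 * mixed_unit(a, b)
        print(a, b, "->", readout)
```

This is the sense in which mixed selectivity gives downstream neurons a "huge computational advantage": nonlinear combinations of task variables become available to simple linear readouts.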

Expanding capacity

Fusi’s computer model revealed that these mixed selectivity neurons are critical to building a brain that can perform many complex tasks. When the computer model includes only neurons that perform one function, the brain can only learn very simple tasks. However, when the flexible neurons are added to the model, “everything becomes so much easier and you can create a neural system that can perform very complex tasks,” Fusi says.

The flexible neurons also greatly expand the brain’s capacity to perform tasks. In the computer model, neural networks without mixed selectivity neurons could learn about 100 tasks before running out of capacity. That capacity greatly expanded to tens of millions of tasks as mixed selectivity neurons were added to the model. When mixed selectivity neurons reached about 30 percent of the total, the network’s capacity became “virtually unlimited,” Miller says — just like a human brain.

Mixed selectivity neurons are especially dominant in the prefrontal cortex, where most thought, learning and planning takes place. This study demonstrates how these mixed selectivity neurons greatly increase the number of tasks that this kind of neural network can perform, says John Duncan, a professor of neuroscience at Cambridge University.

“Especially for higher-order regions, the data that have often been taken as a complicating nuisance may be critical in allowing the system actually to work,” says Duncan, who was not part of the research team.

Miller is now trying to figure out how the brain sorts through all of this activity to create coherent messages. There is some evidence suggesting that these neurons communicate with the correct targets by synchronizing their activity with oscillations of a particular brainwave frequency.

“The idea is that neurons can send different messages to different targets by virtue of which other neurons they are synchronized with,” Miller says. “It provides a way of essentially opening up these special channels of communications so the preferred message gets to the preferred neurons and doesn’t go to neurons that don’t need to hear it.”

Filed under brain neurons prefrontal cortex neuronal activity multitasking neuroscience science
