Posts tagged science
A study led by researchers from Plymouth University Peninsula Schools of Medicine and Dentistry has for the first time revealed how the loss of a particular tumour suppressing protein leads to the abnormal growth of tumours of the brain and nervous system.
The study is published in Brain: A Journal of Neurology.
Tumour suppressors exist in cells to prevent abnormal cell division in our bodies. The loss of a tumour suppressor called Merlin leads to tumours in many cell types within our nervous systems. Every cell carries two copies of a tumour suppressor gene, one on each of the chromosomes we inherit from our parents. The loss of Merlin can be caused by the random loss of both copies in a single cell, causing sporadic tumours, or by inheriting one abnormal copy and losing the second copy during our lifetime, as is seen in the inherited condition neurofibromatosis type 2 (NF2).
With either sporadic loss or inherited NF2, these tumours lacking the Merlin protein develop in the Schwann cells that form the sheaths that surround and electrically insulate neurons. These tumours are called schwannomas, but tumours can also arise in the cells that form the membrane around the brain and spinal cord, and the cells that line the ventricles of the brain.
Although schwannomas are slow-growing and benign, they arise frequently and in large numbers. The sheer number of tumours caused by this gene defect can overwhelm a patient, often leading to hearing loss, disability and eventually death. Patients can suffer from 20 to 30 tumours at any one time, and the condition typically manifests in the teenage years and through into adulthood.
No effective therapy for these tumours exists, other than repeated invasive surgery or radiotherapy, each aimed at a single tumour at a time and unlikely to eradicate the full extent of the disease.
The Brain study investigated how loss of a protein called Sox10 contributes to the growth of these tumours. Sox10 is known to play a major role in the development of Schwann cells, but this is the first time it has been shown to be involved in the growth of schwannoma tumour cells. By understanding the mechanism, the research team has opened the way for new therapies that could provide a viable alternative to surgery or radiotherapy.
The study, undertaken by researchers from Plymouth University Peninsula Schools of Medicine and Dentistry with colleagues from the State University of New York and Universität Erlangen-Nürnberg, was led by Professor David Parkinson.
He said: “We have for the first time shown that human schwannoma cells have reduced expression of Sox10 protein and messenger RNA. By identifying this correlation and gaining an understanding of the mechanism of this process, we hope that drug-based therapies may in time be created and introduced that will reduce or negate the need for multiple surgery or radiotherapy.”
Investigation by researchers from the University of Exeter and ETH Zurich has shed new light on a protein which is linked to a common neurological disorder called Charcot-Marie-Tooth disease.
Peroxisomes (green) and mitochondria (red) in a mammalian cell. The nucleus (blue) contains the cellular DNA.
The team has discovered that a protein previously identified on mitochondria - the energy factories of the cell - is also found on the fat-metabolising organelles peroxisomes, suggesting a closer link between the two organelles.
Charcot-Marie-Tooth disease is currently incurable and affects around one in every 2,500 people in the UK, making it one of the most common inherited neurological disorders; understanding the molecular basis of the disease is therefore of great importance. Symptoms can range from tremors and loss of touch sensation in the feet and legs to difficulties with breathing, swallowing, speaking, hearing and vision.
The research, published online in EMBO Reports, combines work by University of Exeter Biosciences researcher Dr Michael Schrader and PhD student Sofia Guimaraes. The major finding of the study is that the protein GDAP1, originally thought to be involved only in the fragmentation of mitochondria, also contributes to the regulation of peroxisome number through their division.
Peroxisomes are small organelles occurring in nearly all cells, from yeast to crop plants to humans, and are essential for cell viability due to their important role in the metabolism of fatty acids and reactive oxygen species. Peroxisomes are also of particular interest as they play a key role in ageing.
This current study shows that the division of both mitochondria and peroxisomes follows a similar mechanism, although many of the disease-causing mutations occur in a region of the gene that is more critical for mitochondrial than peroxisomal division.
Dr Michael Schrader said of this project: “This study supports our hypothesis of a closer connection between mitochondria and peroxisomes. We have identified several membrane proteins, which are shared by both organelles, particularly key components of the division machinery, meaning there must be coordinated biogenesis and cross-talk.”
As numerous diseases have been linked to problems in the mitochondria, Dr Schrader proposes that this connection could have far-reaching medical implications.
This work contributes to the research being addressed through the prestigious Marie Curie Initial Training Network PERFUME programme (PERoxisome, FUnction, and MEtabolism), recently awarded to Michael Schrader along with several other top European research groups which focus on peroxisome biology.
Whether we’re listening to Bach or the blues, our brains are wired to make music-color connections depending on how the melodies make us feel, according to new research from the University of California, Berkeley. For instance, Mozart’s jaunty Flute Concerto No. 1 in G major is most often associated with bright yellow and orange, whereas his dour Requiem in D minor is more likely to be linked to dark, bluish gray.
Moreover, people in both the United States and Mexico linked the same pieces of classical orchestral music with the same colors. This suggests that humans share a common emotional palette – when it comes to music and color – that appears to be intuitive and can cross cultural barriers, UC Berkeley researchers said.
“The results were remarkably strong and consistent across individuals and cultures and clearly pointed to the powerful role that emotions play in how the human brain maps from hearing music to seeing colors,” said UC Berkeley vision scientist Stephen Palmer, lead author of a paper published this week in the journal Proceedings of the National Academy of Sciences.
Using a 37-color palette, the UC Berkeley study found that people tend to pair faster-paced music in a major key with lighter, more vivid, yellower colors, whereas slower-paced music in a minor key is more likely to be teamed with darker, grayer, bluer colors.
“Surprisingly, we can predict with 95 percent accuracy how happy or sad the colors people pick will be based on how happy or sad the music is that they are listening to,” said Palmer, who will present these and related findings at the International Association of Colour conference at the University of Newcastle in the U.K. on July 8. At the conference, a color light show will accompany a performance by the Northern Sinfonia orchestra to demonstrate “the patterns aroused by music and color converging on the neural circuits that register emotion,” he said.
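The prediction Palmer describes boils down to a strong statistical relationship between two emotion ratings: how happy or sad the music is, and how happy or sad the chosen colors are. As a rough illustration of the idea only (the ratings below are hypothetical, not the study's data or its actual analysis), a minimal sketch in Python:

```python
# Illustrative sketch: Pearson correlation between hypothetical happy-sad
# ratings of music pieces and of the colors participants chose for them.
# All numbers are made up for demonstration, not taken from the study.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# -1 = saddest, +1 = happiest (hypothetical ratings for seven pieces)
music_happiness = [-0.9, -0.6, -0.3, 0.0, 0.3, 0.6, 0.9]
color_happiness = [-0.8, -0.5, -0.35, 0.1, 0.25, 0.55, 0.85]

r = pearson_r(music_happiness, color_happiness)
print(f"r = {r:.2f}")  # a high r means color emotion tracks music emotion
```

A correlation this strong would let the color-emotion ratings be predicted almost entirely from the music-emotion ratings, which is the kind of relationship the researchers report.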
The findings may have implications for creative therapies, advertising and even music player gadgetry. For example, they could be used to create more emotionally engaging electronic music visualizers, computer software that generates animated imagery synchronized to the music being played. Right now, the colors and patterns appear to be randomly generated and do not take emotion into account, researchers said.
They may also provide insight into synesthesia, a neurological condition in which the stimulation of one perceptual pathway, such as hearing music, leads to automatic, involuntary experiences in a different perceptual pathway, such as seeing colors. An example of sound-to-color synesthesia was portrayed in the 2009 movie The Soloist when cellist Nathaniel Ayers experiences a mesmerizing interplay of swirling colors while listening to the Los Angeles symphony. Artists such as Wassily Kandinsky and Paul Klee may have used music-to-color synesthesia in their creative endeavors.
Nearly 100 men and women participated in the UC Berkeley music-color study, half of whom resided in the San Francisco Bay Area and the other half in Guadalajara, Mexico. In three experiments, they listened to 18 classical music pieces by composers Johann Sebastian Bach, Wolfgang Amadeus Mozart and Johannes Brahms that varied in tempo (slow, medium, fast) and in major versus minor keys.
In the first experiment, participants were asked to pick five of the 37 colors that best matched the music to which they were listening. The palette consisted of vivid, light, medium, and dark shades of red, orange, yellow, yellow-green, green, blue-green, blue, and purple.
Participants consistently picked bright, vivid, warm colors to go with upbeat music and dark, dull, cool colors to match the more tearful or somber pieces. Separately, they rated each piece of music on a scale of happy to sad, strong to weak, lively to dreary and angry to calm.
Two subsequent experiments studying music-to-face and face-to-color associations supported the researchers’ hypothesis that “common emotions are responsible for music-to-color associations,” said Karen Schloss, a postdoctoral researcher at UC Berkeley and co-author of the paper.
For example, the same pattern occurred when participants chose the facial expressions that “went best” with the music selections, Schloss said. Upbeat music in major keys was consistently paired with happy-looking faces while subdued music in minor keys was paired with sad-looking faces. Similarly, happy faces were paired with yellow and other bright colors and angry faces with dark red hues.
Next, Palmer and his research team plan to study participants in Turkey where traditional music employs a wider range of scales than just major and minor. “We know that in Mexico and the U.S. the responses are very similar,” he said. “But we don’t yet know about China or Turkey.”
Our researchers have found a previously undiscovered link between epileptic seizures and the signs of autism in adults.
Dr SallyAnn Wakeford from the Department of Psychology revealed that adults with epilepsy were more likely to have higher traits of autism and Asperger syndrome.
Characteristics of autism, which include impairment in social interaction and communication as well as restricted and repetitive interests, can be severe and go unnoticed for many years, having tremendous impact on the lives of those who have them.
The research found that epileptic seizures disrupt the neurological function that affects social functioning in the brain resulting in the same traits seen in autism.
Dr Wakeford said: “The social difficulties in epilepsy have been so far under-diagnosed and research has not uncovered any underlying theory to explain them. This new research links social difficulties to a deficit in somatic markers in the brain, explaining these characteristics in adults with epilepsy.”
Dr Wakeford and her colleagues discovered that increased autistic traits were common to all epilepsy types, although this was more pronounced for adults with temporal lobe epilepsy (TLE).
The researchers suggest one explanation may be that anti-epileptic drugs are often less effective for TLE; they suspect the drugs are implicated because their use was strongly related to the severity of autistic characteristics.
Dr Wakeford carried out a comprehensive range of studies with volunteers with epilepsy and discovered that all of the adults with epilepsy showed autism traits.
She said: “It is unknown whether these adults had a typical developmental period during childhood or whether they were predisposed to having autistic traits before the onset of their epilepsy. However what is known is that the social components of autistic characteristics in adults with epilepsy may be explained by social cognitive differences, which have largely been unrecognised until now.”
Dr Wakeford said the findings could lead to improved treatment for people with epilepsy and autism. She said: “Epilepsy has a history of cultural stigma, however the more we understand about the psychological consequences of epilepsy the more we can remove the stigma and mystique of this condition.
“These findings could mean that adults with epilepsy get access to better services, as there is a wider range of treatments available for those with an autism condition.”
Margaret Rawnsley, research administration officer at Epilepsy Action, welcomed the findings.
She said: “We welcome any research that could further our understanding of epilepsy and ultimately improve the lives of those with the condition. This research has the potential to tell us more about the links between epilepsy and other conditions, such as autism spectrum disorders.”
With obesity reaching epidemic levels in some parts of the world, scientists have only begun to understand why it is such a persistent condition. A study in the Journal of Biological Chemistry adds substantially to the story by reporting the discovery of a molecular chain of events in the brains of obese rats that undermined their ability to suppress appetite and to increase calorie burning.
It’s a vicious cycle, involving a breakdown in how brain cells process key proteins, that allows obesity to beget further obesity. But in a finding that might prove encouraging in the long term, the researchers at Brown University and Lifespan also found that they could intervene to break that cycle by fixing the core protein-processing problem.
Before the study, scientists knew that one mechanism by which obesity perpetuates itself is by causing resistance to leptin, a hormone that signals the brain about the status of fat in the body. But years ago senior author Eduardo A. Nillni, professor of medicine at Brown University and a researcher at Rhode Island Hospital, observed that after meals obese rats had a dearth of another key hormone — alpha-MSH — compared to rats of normal weight.
Alpha-MSH has two jobs in parts of the hypothalamus region of the brain. One is to suppress the activity of food-seeking brain cells. The second is to signal other brain cells to produce the hormone TRH, which prompts the thyroid gland to spur calorie burning activity in the body.
In the obese rats alpha-MSH was low, despite an abundance of leptin and despite normal levels of gene expression both for its biochemical precursor protein called pro-opiomelanocortin (POMC) and for a key enzyme called PC2 that processes POMC in brain cells. There had to be more to the story than just leptin, and it wasn’t a problem with expressing the needed genes.
Nillni and his co-authors, including lead authors Isin Cakir and Nicole Cyr, conducted the new study to find out where the alpha-MSH deficit was coming from. Nillni said he suspected that the problem might lie in the brain cells’ mechanism for processing the POMC protein to make alpha-MSH.
Protein processing problems
To do their work, the team fed some rats a high-calorie diet and fed others a normal diet for 12 weeks. The overfed rats developed the condition of “diet-induced obesity.” The team then studied the hormone levels and brain cell physiology of the rats. They also tested their findings by experimenting with the biochemistry of key individual cells on the lab bench.
They found that in the obese rats, a key “machine” in the brain cells’ assembly line of protein-making, called the endoplasmic reticulum (ER), becomes stressed and overwhelmed. The overloaded ER apparently fumbles the proper handling of PC2, perhaps discarding it because it can’t be folded up properly. The PC2 levels they measured in obese rats, for example, were 53 percent lower than in normal rats. Alpha-MSH peptides were also barely more than half as abundant in obese rats as they were in healthy rats.
“In our study we showed that what actually prevents the production of more alpha-MSH peptide is that ER stress was decreasing the biosynthesis of POMC by affecting one key enzyme that is essential for the formation of alpha-MSH,” Nillni said. “This is so novel. Nobody ever looked at that.”
Novel as it was, the story — a stressed ER mishandles PC2, which leaves POMC unfolded, which impedes alpha-MSH production — needed experimental confirmation.
The team provided that confirmation in several ways: In obese rats they measured elevated levels of known markers of ER stress. They also purposely induced ER stress in cells using pharmacological agents and saw that both PC2 and alpha-MSH levels dropped.
Next they conducted an experiment to see if fixing ER stress would improve alpha-MSH production. They treated lean and obese rats for two days with a chemical called TUDCA, which is known to alleviate ER stress. If ER stress is responsible for alpha-MSH production problems, the researchers would see alpha-MSH recover in obese rats treated with TUDCA. Sure enough, while TUDCA didn’t increase alpha-MSH production in normal rats, it increased it markedly in the obese rats.
Similarly on the benchtop they took mouse neurons that produce PC2 and POMC and pretreated some with a similar chemical called PBA that prevents ER stress. They left others untreated. Then they induced ER stress in all the cells. Under that ER stress, those that had been pretreated with PBA produced about twice as much PC2 as those that had not.
Nillni cautioned that although his team found ways to restore PC2 and alpha-MSH by treating ER stress in living rats and individual cells, the agents used in the study are not readily applicable as medicines for treating obesity in humans. There could well be unknown and unwanted side effects, for example, and TUDCA is not approved for human use by the U.S. Food and Drug Administration.
But by laying out the exact mechanism responsible for why the brains of the obese rats failed to curb appetite or spur greater calorie burning, Nillni said, the study points drug makers to several opportunities where they can intervene to break this new, vicious cycle that helps obesity to perpetuate itself.
“Understanding the central control of energy-regulating neuropeptides during diet-induced obesity is important for the identification of therapeutic targets to prevent and/or mitigate obesity pathology,” the authors wrote.
In our interaction with our environment we constantly refer to past experiences stored as memories to guide behavioral decisions. But how memories are formed, stored and then retrieved to assist decision-making remains a mystery. By observing whole-brain activity in live zebrafish, researchers from the RIKEN Brain Science Institute have visualized for the first time how information stored as long-term memory in the cerebral cortex is processed to guide behavioral choices.
The study, published today in the journal Neuron, was carried out by Dr. Tazu Aoki and Dr. Hitoshi Okamoto from the Laboratory for Developmental Gene Regulation, a pioneer in the study of how the brain controls behavior in zebrafish.
The mammalian brain is too large to observe the whole neural circuit in action. But using a technique called calcium imaging, Aoki et al. were able to visualize for the first time the activity of the whole zebrafish brain during memory retrieval.
Calcium imaging takes advantage of the fact that calcium ions enter neurons upon neural activation. By introducing a calcium sensitive fluorescent substance in the neural tissue, it becomes possible to trace the calcium influx in neurons and thus visualize neural activity.
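In practice, such fluorescence traces are usually expressed as the relative change from baseline, ΔF/F. The following is a minimal sketch of that calculation on a synthetic trace; the function name, threshold, and data are illustrative and not taken from the RIKEN study:

```python
# Sketch of the standard dF/F normalization used in calcium imaging:
# transient rises in a neuron's fluorescence trace indicate calcium influx
# and hence neural activity. The trace below is synthetic demo data.

def delta_f_over_f(trace, baseline_frames=3):
    """Normalize a fluorescence trace to its baseline: (F - F0) / F0."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames  # resting fluorescence
    return [(f - f0) / f0 for f in trace]

# Synthetic trace: flat baseline, then a calcium transient (rise + decay).
fluorescence = [100, 101, 99, 100, 180, 160, 140, 120, 105, 100]
dff = delta_f_over_f(fluorescence)

# A simple threshold picks out the frames where the neuron was "active".
active_frames = [i for i, v in enumerate(dff) if v > 0.2]
print(active_frames)  # frames during the transient
```

Real analyses use a more careful baseline estimate (e.g. a running percentile) and statistical detection rather than a fixed threshold, but the normalization step is the same.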
The researchers trained transgenic zebrafish expressing a calcium sensitive protein to avoid a mild electric shock using a red LED as cue. By observing the zebrafish brain activity upon presentation of the red LED they were able to visualize the process of remembering the learned avoidance behavior.
They observed spot-like neural activity in the dorsal part of the fish telencephalon, which corresponds to the human cortex, when the red LED was presented 24 hours after the training session. No activity was observed when the cue was presented 30 minutes after training.
In another experiment, Aoki et al. show that if this region of the brain is removed, the fish are able to learn the avoidance behavior, remember it short-term, but cannot form any long-term memory of it.
“This indicates that short-term and long-term memories are formed and stored in different parts of the brain. We think that short-term memories must be transferred to the cortical region to be consolidated into long-term memories,” explains Dr. Aoki.
The team then tested whether memories for the best behavioral choices can be modified by new learning. The fish were trained to learn two opposite avoidance behaviors, each associated with a different LED color, blue or red, as a cue. The researchers found that presentation of the different cues led to the activation of different groups of neurons in the telencephalon, indicating that different behavioral programs are stored and retrieved by different populations of neurons.
“Using calcium imaging on zebrafish, we were able to visualize an on-going process of memory consolidation for the first time. This approach opens new avenues for research into memory using zebrafish as model organism,” concludes Dr. Okamoto.
In the future, if you want to improve your ability to manipulate numbers in your head, you might just plug yourself in. So say researchers reporting in the Cell Press journal Current Biology on May 16 on studies of a harmless form of brain stimulation applied to an area known to be important for math ability.
“With just five days of cognitive training and noninvasive, painless brain stimulation, we were able to bring about long-lasting improvements in cognitive and brain functions,” says Roi Cohen Kadosh of the University of Oxford.
Incredibly, the improvements held for a period of six months after training. No one knows exactly how this relatively new method of stimulation, called transcranial random noise stimulation (TRNS), works. But the researchers say the evidence suggests that it allows the brain to work more efficiently by making neurons fire more synchronously.
Cohen Kadosh and his colleagues had shown previously that another form of brain stimulation could make people better at learning and processing new numbers. But, he says, TRNS is even less perceptible to those receiving it. TRNS also has the potential to help even more people. That’s because it has been shown to improve mental arithmetic—the ability to add, subtract, or multiply a string of numbers in your head, for example—not just new number learning. Mental arithmetic is a more complex and challenging task, which more than 20 percent of people struggle with.
Ultimately, Cohen Kadosh says, with better integration of neuroscience and education, this line of study could really help humans reach our cognitive potential in math and beyond. It might also be of particular help to those suffering with neurodegenerative illness, stroke, or learning difficulties.
“Maths is a highly complex cognitive faculty that is based on a myriad of different abilities,” Cohen Kadosh says. “If we can enhance mathematics, therefore, there is a good chance that we will be able to enhance simpler cognitive functions.”
If you’re a left-brain thinker, chances are you use your right hand to hold your cell phone up to your right ear, according to a newly published study from Henry Ford Hospital in Detroit.
The study – to appear online in JAMA Otolaryngology-Head & Neck Surgery – shows a strong correlation between brain dominance and the ear used to listen to a cell phone. More than 70% of participants held their cell phone up to the ear on the same side as their dominant hand, the study finds.
Left-brain dominant people – who account for about 95% of the population and have their speech and language center located on the left side of the brain – are more likely to use their right hand for writing and other everyday tasks.
Likewise, the Henry Ford study reveals most left-brain dominant people also use the phone in their right ear, despite there being no perceived difference in their hearing in the left or right ear. And, right-brain dominant people are more likely to use their left hand to hold the phone in their left ear.
“Our findings have several implications, especially for mapping the language center of the brain,” says Michael Seidman, M.D., FACS, director of the division of otologic and neurotologic surgery in the Department of Otolaryngology-Head and Neck Surgery at Henry Ford.
“By establishing a correlation between cerebral dominance and sidedness of cell phone use, it may be possible to develop a less-invasive, lower-cost option to establish the side of the brain where speech and language occurs rather than the Wada test, a procedure that injects an anesthetic into the carotid artery to put part of the brain to sleep in order to map activity.”
He notes that the study also may offer additional evidence that cell phone use and tumors of the brain, head and neck may not necessarily be linked.
Since nearly 80% of people use the cell phone in their right ear, he says if there were a strong connection there would be far more people diagnosed with cancer on the right side of their brain, head and neck, the dominant side for cell phone use. It’s likely, he says, that the development of tumors is more “dose-dependent” based on cell phone usage.
The study began with the simple observation that most people use their right hand to hold a cell phone to their right ear. This practice, Dr. Seidman says, is illogical since it is challenging to listen on the phone with the right ear and take notes with the right hand.
To determine if there is an association between sidedness of cell phone use and auditory or language hemispheric dominance, the Henry Ford team developed an online survey using modifications of the Edinburgh Handedness protocol, a tool used for more than 40 years to assess handedness and predict cerebral dominance.
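Inventories of this kind score handedness as a laterality quotient, LQ = 100 × (R − L)/(R + L), ranging from −100 (fully left-handed) to +100 (fully right-handed). A minimal sketch of that scoring; the task list and tallies below are hypothetical, not items from the Henry Ford survey:

```python
# Illustrative Edinburgh-style handedness scoring: each everyday task gets
# right/left preference ticks, and the laterality quotient summarizes them.
# Tasks and tallies here are made-up examples.

def laterality_quotient(answers):
    """answers: dict mapping task -> (right_ticks, left_ticks).

    Returns LQ in [-100, +100]; positive values indicate right-handedness.
    """
    right = sum(r for r, l in answers.values())
    left = sum(l for r, l in answers.values())
    return 100 * (right - left) / (right + left)

answers = {
    "writing":    (2, 0),  # strong right preference
    "throwing":   (2, 0),
    "scissors":   (1, 0),
    "toothbrush": (1, 1),  # no clear preference
    "spoon":      (2, 0),
}
lq = laterality_quotient(answers)
print(f"LQ = {lq:+.0f}")
```

A respondent with these answers would score well into the right-handed range, which on the study's hypothesis predicts right-ear phone use.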
The survey included questions about which hand was used for tasks such as writing; time spent talking on cell phone; whether the right or left ear is used to listen to phone conversations; and if respondents had been diagnosed with a brain or head and neck tumor.
It was distributed to 5,000 individuals who were either members of an online otology group or patients undergoing Wada testing and MRI for non-invasive localization purposes.
On average, respondents’ cell phone usage was 540 minutes per month. The majority of respondents (90%) were right handed, 9% were left handed and 1% were ambidextrous.
Among those who are right handed, 68% reported that they hold the phone to their right ear, while 25% used the left ear and 7% used both right and left ears. For those who are left handed, 72% said they used their left ear for cell phone conversations, while 23% used their right ear and 5% had no preference.
The study also revealed that having a hearing difference can impact ear preference for cell phone use.
In all, the study found that there is a correlation between brain dominance and laterality of cell phone use, and there is a significantly higher probability of using the dominant hand side ear.
Studies are underway to look at tumor registry banks of patients with head, neck and brain cancer to evaluate cell phone usage. Controversy still exists around a potential association of cell phone use and tumors. Until this is fully understood, Dr. Seidman advises using hands-free modes for calls rather than holding a phone up to the side of the head.
(Original publication: “Study Examines Relationship Between Hemispheric Dominance and Cell Phone Use” JAMA Otolaryngology-Head & Neck Surgery, 2013; Michael D. Seidman et al.)
The breakthrough technique that allowed scientists to obtain one-of-a-kind, colorful images of the myriad connections in the brain and nervous system is about to get a significant upgrade.
A group of Harvard researchers, led by Joshua Sanes, the Jeff C. Tarr Professor of Molecular and Cellular Biology and Paul J. Finnegan Family Director, Center for Brain Science, and Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and Santiago Ramón y Cajal Professor of Arts and Sciences, has made a host of technical improvements in the “Brainbow” imaging technique. Their work is described in a May 5 paper in Nature Methods.
First described in 2007, the system combines three fluorescent proteins — one red, one blue, and one green — to label different cells with as many as 90 colors. By studying the resulting images, researchers were able to begin to understand how the millions of neurons in the brain are connected.
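The color diversity is combinatorial: each cell ends up expressing the three proteins at different relative levels, and the red:green:blue ratio determines its apparent hue. As a back-of-the-envelope sketch of why three proteins can yield on the order of 90 colors (the five discrete expression levels are an illustrative assumption, not the published construct):

```python
# Toy count of distinguishable "Brainbow" hues: assume each fluorescent
# protein is expressed at one of five discrete levels (illustrative only).
# Two cells look the same color when their RGB ratios match, so we count
# distinct lowest-terms ratios rather than raw level combinations.
from itertools import product
from math import gcd

levels = range(5)  # hypothetical expression levels: 0..4 copies

def hue(r, g, b):
    """Reduce an RGB expression triple to its lowest-terms ratio."""
    d = gcd(gcd(r, g), b)
    return (r // d, g // d, b // d)

hues = {hue(r, g, b)
        for r, g, b in product(levels, repeat=3)
        if (r, g, b) != (0, 0, 0)}  # a cell expressing nothing is invisible
print(len(hues))
```

With five levels per protein this toy model gives 91 distinct ratios, the same order of magnitude as the roughly 90 colors quoted; in the real system the levels arise stochastically from Cre-lox recombination of the transgene copies rather than from fixed steps.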
“‘Brainbow’ generated beautiful images of a kind we had never been able to obtain before, but it was difficult in some ways,” said Sanes, who also serves as director of the Center for Brain Science.
“These modifications aim to overcome some of the more problematic features of the original genetic constructs,” Lichtman said. “Lead author Dawen Cai, a research associate in our labs, worked hard and creatively to find ways to make the ‘Brainbow’ colors brighter, more variable, and useable in situations where the original gene constructs were hard to implement. Our first look at these animals suggests that these improvements are fantastic.”
Among the challenges faced by researchers using the original method, Sanes said, was the chance that certain colored proteins would bleach out faster than others.
“If one color bleaches faster than the others, you start with a ‘Brainbow,’ but by the time you’re done imaging, you might just have a ‘blue-bow,’ because the red and yellow bleach too fast,” he said.
Sanes said that some colors also were too dim, causing problems in the imaging process, while in other cases the protein didn’t fill the whole neuron evenly enough, or there was an overabundance of a certain color in an image.
“What we decided to do was to make the next generation of ‘Brainbow,’” Sanes said. “We systematically set out to look at these problems. We looked at a whole range of fluorescent proteins to find the ones that were brightest and wouldn’t bleach as much, and we developed new transgenic methods to avoid the predominance of a particular color.”
The researchers also explored new ways to create “Brainbow” images, including using viruses to introduce fluorescent proteins into cells.
The advantage of the new technique, Sanes said, is it offers researchers the chance to target certain parts of the brain and better understand how neurons radiate out to connect with other brain regions. Ultimately, he said, he hopes that other researchers are able to apply the techniques outlined in the paper in the same way that they expanded on the first “Brainbow” method.
“People adapted the method to study a number of interesting questions in other tissues to examine cellular relationships and cell lineages in kidney and skin cells,” he said. “It was also used to examine the nervous system in animals like zebrafish and C. elegans. With these new tools, I think we’ve taken the next step.”
A new tool being developed by a UT Arlington assistant professor of physics could help scientists map and track the interactions between neurons inside different areas of the brain.
The journal Optics Letters recently published a paper by Samarendra Mohanty on the development of a fiber-optic, two-photon, optogenetic stimulator and its use on human cells in a laboratory. The tiny tool builds on Mohanty’s previous discovery that near-infrared light can be used to stimulate a light-sensitive protein introduced into living cells and neurons in the brain. This new method could show how different parts of the brain react when a linked area is stimulated.
The technology would be useful in the BRAIN mapping initiative recently championed by President Barack Obama, Mohanty said. BRAIN stands for Brain Research Through Advancing Innovative Neurotechnologies and will include $100 million in government investments in research.
“Scientists have spent a lot of time looking at the physical connections between different regions of the brain. But that information is not sufficient unless we examine how those connections function,” Mohanty said. “That’s where two-photon optogenetics comes into play. This is a tool not only to control the neuronal activity but to understand how the brain works.”
The two-photon optogenetic stimulation described in the Optics Letters paper involves introducing the gene for ChR2, a protein that responds to light, into a sample of excitable cells. A fiber-optic infrared beam of light can then be used to precisely excite the neurons in a tissue circuit.
In the brain, researchers would then observe responses in the excited area as well as other parts of the neural circuit. In living subjects, scientists would also observe the behavioral outcome, Mohanty said.
Optogenetic stimulation avoids damage to living tissue by using light to stimulate neurons instead of electric pulses used in past research. Mohanty’s method of using low-energy near-infrared light also enables more precision and a deeper focus than the blue or green light beams often used in optogenetic stimulation, the paper said.
Using fiber optics to deliver the two-photon optogenetic beam is another advance. Previous methods required bulky microscopes or complex scanning beams. Mohanty’s group is collaborating with UT Arlington Department of Psychology assistant professor Linda Perrotti to apply this technology in living animals.
“Dr. Mohanty’s innovations continue to be recognized because of the great potential they hold,” said Pamela Jansma, dean of the UT Arlington College of Science. “Hopefully, his work will one day provide researchers in other fields the tools they need to examine how the human body works and why normal processes sometimes fail.”