Dangerous brain tumors hijack the brain’s existing blood supply throughout their progression, growing only within the narrow potential spaces between and along the brain’s thousands of small blood vessels, new research shows for the first time.

(Caption: This microscopic view of a mouse brain tumor shows small clusters of tumor cells (in green), marked with white arrows, growing along tiny blood vessels (in red) in the brain and filling the space in between the vessels.)
The findings contradict the concept that brain tumors need to grow their own blood vessels to keep themselves growing – and help explain why drugs that aim to stop the growth of new blood vessels have failed in clinical trials to extend the lives of patients with the worst brain tumors.
In fact, trying to block the growth of new blood vessels in the brain actually spurs malignant tumors called gliomas to grow faster and further, the research shows. On the hopeful side, the research suggests a new avenue for finding better drugs.
The discoveries come from a University of Michigan Medical School team studying tumors in rodents and humans, along with advanced computer models, in collaboration with colleagues from Arizona State University. The findings, published online in the journal Neoplasia, will be featured as the journal’s cover article later this month.
A team of researchers from the Virginia Tech-Wake Forest University School of Biomedical Engineering and Sciences has developed a new way of using electricity to open the blood-brain barrier (BBB). The Vascular Enabled Integrated Nanosecond pulse (VEIN pulse) procedure consists of inserting minimally invasive needle electrodes into the diseased tissue and applying multiple bursts of nanosecond pulses with alternating polarity. It is thought that the bursts disrupt tight junction proteins responsible for maintaining the integrity of the BBB without causing damage to the surrounding tissue. This technique is being developed for the treatment of brain cancer and neurological disorders, such as Parkinson’s disease, and is set to appear in the upcoming issue of the journal TECHNOLOGY.

(Caption: Two minimally invasive needle electrodes with a 1 mm active length were spaced 4.0 mm apart and inserted into the right cerebral hemisphere 1.5 mm beneath the surface of the dura. A burst of 200 square pulses, each 500 ns in duration and of alternating polarity, was applied through the electrodes at a voltage-to-distance ratio of 250 V/cm. In the case shown above, bursts were repeated once per second for 10 min. The extent of BBB disruption is shown by the dotted line surrounding Evans blue-albumin complex uptake on the gross brain slice preparation (left) and the corresponding fluorescent image (middle). Additionally, areas of BBB disruption appear as hyperintense (white) on the T1-weighted MRI exam, due to the uptake of a gadolinium-Evans blue tracer. Scale bar represents 5 mm. Credit: John H. Rossmeisl Jr., Neurology and Neurosurgery, Virginia-Maryland Regional College of Veterinary Medicine and Virginia Tech-Wake Forest University School of Biomedical Engineering and Sciences.)
The BBB is a network of tight junctions that normally acts to protect the brain from foreign substances by preventing them from leaking out of blood vessels. However, it also limits the effectiveness of drugs to treat brain disease. Temporarily opening the BBB is a way to ensure that drugs can still be effective.
For the treatment of brain cancer, “VEIN pulses could be applied at the same time as biopsy or through the same track as the biopsy probe in order to mitigate damage to the healthy tissue by limiting the number of needle insertions,” says Rafael V. Davalos, Ph.D., director of the Bioelectromechanical Systems Laboratory at Virginia Tech.
Additionally, the group shows that VEIN pulses can be applied without causing muscle contractions, which may dislodge the electrodes and require the use of a neuroblocker and general anesthesia. According to Christopher B. Arena, Ph.D., co-lead author on the paper with Paulo A. Garcia, Ph.D. and Michael B. Sano, Ph.D., “the fact that the pulses alternate in polarity helps to avoid unwanted, electrically induced movement. Therefore, it could be possible to perform this procedure without using a neuroblocker and with patients under conscious sedation. This is similar to how deep brain stimulation is implemented clinically to treat Parkinson’s disease.”
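For readers who want to see the protocol concretely, here is a minimal sketch, in Python, of the burst described in the image caption above (200 square pulses of 500 ns at a 250 V/cm voltage-to-distance ratio, alternating polarity, one burst per second for 10 minutes). The inter-pulse gap and the sampling step are illustrative assumptions; the article does not specify them.

```python
import numpy as np

# Burst parameters from the image caption above; the inter-pulse gap and the
# sampling step (dt) are illustrative assumptions not given in the article.
PULSE_WIDTH_S    = 500e-9    # each square pulse lasts 500 ns
PULSES_PER_BURST = 200       # 200 pulses per burst
FIELD_V_PER_CM   = 250       # voltage-to-distance ratio of 250 V/cm
BURST_PERIOD_S   = 1.0       # bursts repeated once per second
TREATMENT_S      = 10 * 60   # applied for 10 minutes

def burst_waveform(inter_pulse_s=500e-9, dt=50e-9):
    """Return (t, v) samples for one burst of alternating-polarity pulses."""
    samples_on  = round(PULSE_WIDTH_S / dt)   # samples while a pulse is high
    samples_off = round(inter_pulse_s / dt)   # assumed gap between pulses
    v = []
    for i in range(PULSES_PER_BURST):
        polarity = 1 if i % 2 == 0 else -1    # flip sign on every pulse
        v.extend([polarity * FIELD_V_PER_CM] * samples_on)
        v.extend([0.0] * samples_off)
    v = np.asarray(v)
    t = np.arange(v.size) * dt
    return t, v

t, v = burst_waveform()
print(f"{int(TREATMENT_S / BURST_PERIOD_S)} bursts over the treatment; "
      f"one burst spans {t[-1] * 1e6:.1f} microseconds")
```

Note how consecutive pulses flip sign; that alternation is what the researchers credit with avoiding electrically induced muscle contraction.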
The team now plans to translate the technology into clinical applications through a university spin-out company, VoltMed, Inc.
An analysis of autism research covering genetics, brain imaging and cognition, led by Laurent Mottron of the University of Montreal, has overhauled our understanding of why autism occurs, how it develops and why it produces such a diversity of symptoms. The team of senior academics involved in the project calls it the “Trigger-Threshold-Target” model. Brain plasticity refers to the brain’s ability to respond and remodel itself, and the model is based on the idea that autism is a genetically induced plastic reaction. The trigger is multiple brain plasticity-enhancing genetic mutations, which may or may not combine with a lowered genetic threshold for brain plasticity to produce either intellectual disability alone, autism, or autism without intellectual disability. The model holds that the autistic brain develops with enhanced processing of certain types of information, which leads the brain to seek out materials that possess the qualities it prefers and to neglect materials that don’t. “One of the consequences of our new model will be to focus early childhood intervention on developing the particular strengths of the child’s brain, rather than exclusively trying to correct missing behaviors, a practice that may be a waste of a once-in-a-lifetime opportunity,” Mottron said.

Mottron and his colleagues developed the model by examining the effect of mutations involved in autism together with the brain activity of autistic people as they undertake perceptual tasks. “Geneticists, using animals implanted with the mutations involved in autism, have found that most of them enhance synaptic plasticity – the capacity of brain cells to create connections when new information is encountered. In parallel, our group and others have established that autism represents an altered balance between the processing of social and non-social information, i.e. the interest, performance and brain activity, in favor of non-social information,” Mottron explained. “The Trigger-Threshold-Target model builds a bridge between these two series of facts, using the neurocognitive effects of sensory deprivation to resolve the missing link between them.”
The various superiorities that subgroups of autistic people display in perception or in language indicate that an autistic infant’s brain adapts to the information it is given in a way strikingly similar to that of sensory-deprived people. A blind infant’s brain compensates for the lack of visual input by developing enhanced auditory processing abilities, for example, and a deaf infant’s brain readapts to process visual inputs in a more refined fashion. Similarly, cognitive and brain imaging studies of autistic people reveal enhanced activity, connectivity and structural modifications in the perceptual areas of the brain. Differences in the domain of information “targeted” by these plastic processes are associated with the particular pattern of strengths and weaknesses of each autistic individual. “Speech and social impairment in some autistic toddlers may not be the result of a primary brain dysfunction of the mechanisms related to these abilities, but the result of their early neglect,” Mottron said. “Our model suggests that superior autistic perceptual processing competes with speech learning because neural resources are oriented towards the perceptual dimensions of language, neglecting its linguistic dimensions. Alternatively, for other subgroups of autistic people, such as those with Asperger syndrome, it’s speech that’s overdeveloped. In both cases, the overdeveloped function outcompetes social cognition for brain resources, resulting in a late development of social skills.”
The model also provides insight into the presence or absence of intellectual disability, which occurs when causative mutations alter the function of brain cell networking. Rather than simply triggering a normal but enhanced plastic reaction, these mutations cause neurons to connect in a way that does not exist in non-autistic people. When brain cell networking functions normally, only the allocation of brain resources is changed.
As is the case with all children, environment and stimulation have an effect on the development and organization of an autistic child’s brain. “Most early intervention programs adopt a restorative approach by working on aspects like social interest. However, this focus may monopolize resources in favor of material that the child processes with more difficulty,” Mottron said. “We believe that early intervention for autistic children should take inspiration from the experience of congenitally deaf children, whose early exposure to sign language has a hugely positive effect on their language abilities. Interventions should therefore focus on identifying and harnessing the autistic child’s strengths, like written language.” By indicating that autistic “restricted interests” result from cerebral plasticity, the model suggests that they have an adaptive value and should therefore be the focus of intervention strategies for autism.
Dartmouth researchers demonstrate in a new study that a previously understudied part of the brain, the retrosplenial cortex, is essential for forming the basis for contextual memories, which help you to recall events ranging from global disasters to where you parked your car.
An important aspect of memory is the ability to recall the physical place, or context, in which an event occurred. For example, in recalling emotionally charged events such as the September 11 terror attacks or the assassination of President John F. Kennedy, we remember not only the event but also where we were when it happened. Indeed, in discussing such events with others, we often ask, “Where were you when … ?” Processing “where” information is also important for mundane events such as remembering where you parked your car.
Although it is known that a specific network of brain regions is important for contextual memory, it has not been known how different parts of the network contribute to this process. But using a newly developed technology known as “chemogenetics,” Professor David Bucci’s laboratory is beginning to show how different brain structures contribute to contextual learning and memory. Developed at the University of North Carolina School of Medicine, the chemogenetics technique enables researchers to “remotely control” the activity of brain cells. This is accomplished by using a virus to transfer genes for a synthetic receptor into a brain region. The receptors are responsive only to a synthetic drug that is administered through a simple injection. By binding to the receptors, the drug temporarily turns brain cells in that region off, or on.
Using this approach, Bucci’s laboratory demonstrated in an experiment with rats that the retrosplenial cortex is critical for forming the basis for contextual memories. It was the first time the chemogenetics technique had been used to turn off cells along the entire retrosplenial cortex. The importance of this finding is underscored by two recent studies showing that the hippocampus, another key brain region involved in contextual memories, is not itself active or necessary for forming the initial associations that underlie contextual memory.
The National Science Foundation recently awarded Bucci a five-year, $725,000 grant to continue this research.
“By providing new insight into the function of this part of the brain, our work will also have implications for understanding the basis for illnesses that impact contextual memory, such as Alzheimer’s disease,” Bucci says. “In fact, recent studies have shown that the retrosplenial cortex is one of the first brain areas that is damaged in persons with Alzheimer’s disease.”
The findings appear in The Journal of Neuroscience.
Scientists show mutant androgen receptor impairs body’s ability to dispose of damaged proteins
Researchers at University of California, San Diego School of Medicine have identified the mechanism by which a rare, inherited neurodegenerative disease causes often crippling muscle weakness in men, in addition to reduced fertility.
The study, published August 10 in the journal Nature Neuroscience, shows that a gene mutation long recognized as a key to the development of Kennedy’s disease impairs the body’s ability to degrade, remove and recycle clumps of “trash” proteins that may otherwise build up in neurons, progressively impairing their ability to control muscle contraction. This mechanism, called autophagy, is akin to a garbage disposal system and is the only way for the body to purge itself of non-working, misshapen trash proteins.
“We’ve known since the mid-1990s that Alzheimer’s disease, Parkinson’s disease and Huntington’s disease are caused by the accumulation of misfolded proteins that should have been degraded, but cannot be turned over,” said senior author Albert La Spada, MD, PhD and professor of pediatrics, cellular and molecular medicine, and neurosciences. “The value of this study is that it identifies a target for halting the progression of protein build-up, not just in this rare disease, but in many other diseases that are associated with impaired autophagy pathway function.”
Among the 400 to 500 men in the U.S. with Kennedy’s disease, the slow but progressive loss of motor function leaves about 15 to 20 percent wheelchair-bound during the later stages of the disease.
Kennedy’s disease, also known as spinal and bulbar muscular atrophy, is a recessive X-linked disease men inherit from their mother. Women don’t get the disease because they have two copies of the X chromosome. The genetic abnormality causes men to produce a mutant androgen receptor protein, which impairs the body’s sensitivity and response to male sex hormones, sometimes resulting in testicular atrophy and enlargement of male breasts.
In experiments with mice, scientists discovered that the mutant androgen receptor protein, besides disrupting male reproductive biology, also deactivates a protein called transcription factor EB (TFEB), which is believed to be a master regulator of autophagy in nerve cells and other cell types.
Specifically, the mutant androgen receptor protein in Kennedy’s disease binds to TFEB and blocks its ability to mediate the breakdown and removal of non-working and aggregated proteins.
“Our study tells us that if we can find a way to keep TFEB working, we likely can prevent this disease and others like it from progressing,” La Spada said. “We now have a target for new therapies to treat not only Kennedy’s disease, but also many more common neurological disorders.”
Testosterone, a steroid hormone, is well known to contribute to aggressive behavior in males, but the neural circuits through which testosterone exerts these effects have not been clear.
Prior studies found that the administration of a single dose of testosterone influenced brain circuit function. Surprisingly, however, these studies were conducted exclusively in women.
Researchers, led by Dr. Justin Carré, sought to address this gap by conducting a study of the effects of testosterone on the brain’s response to threat cues in healthy men.
They focused their attention on brain structures that mediate threat processing and aggressive behavior, including the amygdala, hypothalamus, and periaqueductal gray.
The researchers recruited 16 healthy young male volunteers, who completed two test days on which they received either testosterone or placebo. On both testing days, the men first received a drug that suppressed their testosterone. This step ensured that testosterone levels were similar among all study participants. The amount of testosterone administered in this study only returned testosterone levels to the normal range. Subjects then completed a face-matching task while undergoing a functional magnetic resonance imaging scan.
Data analyses revealed that, compared with placebo, testosterone increased reactivity of the amygdala, hypothalamus and periaqueductal gray when viewing angry facial expressions.
"We were able to show for the first time that increasing levels of testosterone within the normal physiological range can have a profound effect on brain circuits that are involved in threat-processing and human aggression," said Carré, Assistant Professor at Nipissing University.
"Understanding testosterone effects on the brain activity patterns associated with threat and aggression may help us to better understand the ‘fight or flight’ response in males that may be relevant to aggression and anxiety," commented Dr. John Krystal, Editor of Biological Psychiatry.
Expanding our knowledge of exactly how testosterone affects the male brain is particularly important, as testosterone augmentation has become increasingly promoted and aggressively marketed as a solution to reduced virility in aging men. Further work is indeed continuing, Carré said. “Our current work is examining the extent to which a single administration of testosterone influences aggressive and competitive behavior in men.”
In a world-first, a newly published study has captured in detail the brain electrical activity in children as they emerge from anaesthesia, shedding light on why some are distressed and agitated when they wake up.

Researchers from Swinburne University of Technology together with colleagues from the Murdoch Childrens Research Institute (MCRI) were able to collect electroencephalography (EEG) data on children who exhibited emergence delirium.
Emergence delirium is a major risk associated with anaesthesia in children and occurs when patients wake up from anaesthesia in a delirious and dissociated state.
Swinburne Professor David Liley said PhD student Jessica Martin and staff at MCRI were able to record, with unprecedented fidelity, brain electrical activity from 60 children aged 5-15 years as they emerged from anaesthesia, some of whom went on to exhibit emergence delirium.
“This clinical phenomenon is prevalent in children aged six and under, with an estimated 10-30% exhibiting emergence delirium,” said Professor Liley.
Researchers found that the brain activity recorded just after stopping sevoflurane (a form of gas anaesthesia) in children exhibiting emergence delirium differed substantially from that of children who woke up peacefully.
Associate Professor Andrew Davidson from MCRI said they discovered that children who wake up suddenly from a deeper plane of anaesthetic are more likely to develop the delirium.
“In contrast, the children who develop sleep like patterns on their EEG before they wake up are more likely to wake up peacefully.”
“Intriguingly, emergence delirium looks very much like the more severe form of night terror, which occurs when some pre-school children are disturbed during deep sleep.
“Our study suggests the EEG signatures and the mechanisms may indeed be similar between night terror and emergence delirium.
“Allowing children to wake up in a quiet and undisturbed environment should increase the likelihood that they go into a light sleep-like state after the anaesthetic and then wake up peacefully,” said Associate Professor Davidson.
The findings will have significant implications both for predicting which children will go on to develop emergence delirium and for helping medical professionals better understand its causes in children and adults.
The study, Alterations in the Functional Connectivity of Frontal Lobe Networks Preceding Emergence Delirium in Children, will appear in the October issue of the high-profile clinical journal Anesthesiology and is available electronically ahead of print.
People tend to understand nonliteral language – metaphor, hyperbole and exaggerated statements – when they realize the purpose of the communication, according to new Stanford research.
Noah Goodman, an assistant professor of psychology at Stanford, believes that figurative language – the nuanced ways that people use language to communicate meanings different from the literal meaning of their words – is one of the deepest mysteries of human communication.
"Human communication," he said, "is rife with nonliteral language that includes metaphor, irony and hyperbole. When we say ‘Juliet is the sun’ or ‘That watch cost a million dollars,’ listeners read through the direct meanings – which are often false if taken literally – to understand subtle connotations."

'Sharp' vs. 'round' numbers
To understand this communication dynamic, Goodman, director of the Computation and Cognition Lab at Stanford, and his colleagues used computational modeling. Stanford graduate student Justine Kao was the first author on the paper, which included co-authors Jean Wu, a former graduate student at Stanford, and Leon Bergen of the Massachusetts Institute of Technology.
In their lab, they develop computational models that use pragmatic reasoning to interpret metaphorical utterances. Their research for this particular project involved four online experiments with 340 subjects.
Participants in the experiments read different scenarios involving hyperbole. For example, a person bought a watch and was asked by a friend whether it was expensive. That person responded with figures ranging from low to high – such as $50, $51, $10,000 or $10,001. Given this, the participants rated the probability that the purchaser thought the watch was expensive.
People tended to interpret “sharp numbers” – such as a watch costing $51 – more precisely than “round numbers,” as in a watch costing $50.
The findings suggest that even creative and figurative language may follow predictable and rational principles.
Kao said, “This research advances our understanding of communication by providing evidence that reasoning about a speaker’s goals is critical for understanding nonliteral language. We were able to capture nuanced and nonliteral interpretations of number words using a computational model.”
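To give a flavor of what such a model looks like, below is a minimal sketch of a pragmatic-reasoning listener for the sharp-versus-round-number effect. The candidate prices, the priors, the tolerance that lets round numbers be used imprecisely, and the rationality parameter are all illustrative assumptions rather than the study’s actual model, and the affective dimension needed for full-blown hyperbole is omitted.

```python
import math

# Toy rational-speech-act style sketch of the round vs. sharp number effect.
# Prices, priors, the +/-2 tolerance for round numbers, and the rationality
# parameter alpha are illustrative assumptions, not the study's actual model.
prices     = [50, 51, 52, 10000]                 # candidate true prices
prior      = {50: 0.4, 51: 0.2, 52: 0.2, 10000: 0.2}
utterances = [50, 51, 10000]

def literal(u, s):
    """Literal semantics: round numbers may be used imprecisely, sharp ones exactly."""
    if u % 10 == 0:
        return 1.0 if abs(u - s) <= 2 else 0.0   # round: true within a tolerance
    return 1.0 if u == s else 0.0                # sharp: true only if exact

def speaker(s, u, alpha=3.0):
    """Speaker soft-maximizes informativeness to a literal listener."""
    def score(up):
        if literal(up, s) == 0.0:
            return -10.0                         # strongly penalize false utterances
        true_of = sum(prior[sp] for sp in prices if literal(up, sp) > 0.0)
        return math.log(prior[s] / true_of)      # literal listener's log-posterior
    z = sum(math.exp(alpha * score(up)) for up in utterances)
    return math.exp(alpha * score(u)) / z

def pragmatic_listener(u):
    """Listener Bayes-inverts the speaker model to recover the intended price."""
    joint = {s: prior[s] * speaker(s, u) for s in prices}
    z = sum(joint.values())
    return {s: round(p / z, 3) for s, p in joint.items()}

print("heard '$50':", pragmatic_listener(50))    # belief spread over $50-$52: imprecise
print("heard '$51':", pragmatic_listener(51))    # belief concentrated on $51: precise
```

Running the sketch, “$51” concentrates the listener’s belief on exactly $51, while “$50” spreads it over nearby prices, mirroring the sharp-versus-round finding described above.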
Common ground
The research showed that if listeners are trying to understand the topic and goal of communication as well as the underlying subtext – that which is not expressed explicitly – they’re better able to truly understand the utterance. A sense of common knowledge about what is being described or expressed is also important. Speakers and listeners assume that individuals are rational agents who use common ground and reference points to maximize information.
As Kao put it, “There is still a long way to go before computers can understand Shakespeare, but it is a start.”
Goodman offered this example: Imagine someone describing a new restaurant, and she says, “It took 30 minutes to get a table.” People are most likely to interpret this to mean she waited about 30 minutes. But if she says, “It took a million years to get a table,” people will probably interpret this to mean that the wait was shorter than a million years, but that the person thinks it was much too long.
"One of the most fascinating facts about communication is that people do not always mean what they say – a crucial part of the listener’s job is to understand an utterance even when its literal meaning is false," the researchers wrote.
Goodman said the computational model he and his colleagues use to understand nonliteral utterances integrates empirically measured background knowledge, communication principles and reasoning about communication goals.
What’s next for this research?
Goodman and the others said they believe that the same ideas and techniques can extend to metaphor, irony and many other uses of language. For example, they have a promising initial exploration of “is a” metaphors such as “your lawyer is a shark,” Goodman said.
"Beyond these cases of figurative speech, the overall mathematical framework is beginning to give a precise theory of natural language understanding that takes into account context, intention and many subtle shades of meaning," he said, adding, "There is a lot more work to do."
A University of Cincinnati experiment aimed at the diverse and growing population of people with visual impairments could spark development of advanced tools to help all the aging baby boomers, injured veterans, diabetics and white-cane-wielding pedestrians navigate the blurred edges of everyday life.
These tools could be based on a device called the Enactive Torch, which looks like a cross between a TV remote and Captain Kirk’s weapon of choice. But it can do much greater things than change channels or stun aliens.

Luis Favela, a graduate student in philosophy and psychology, has found that the torch enables the visually impaired to judge their ability to comfortably pass through narrow passages, like an open door or busy sidewalk, as well as if they were actually seeing the pathways themselves.
The handheld torch uses infrared sensors to “see” objects in front of it. When the torch detects an object, it emits a vibration – similar to a cellphone alert – through an attached wristband. The gentle buzz increases in intensity as the torch nears the object, letting the user make judgments about where to move based on virtual touch.
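As a rough illustration of that behaviour, the sketch below maps a detected distance to a vibration strength. The sensor range and the linear ramp are assumptions made for illustration; the article does not describe the torch’s actual firmware.

```python
# Sketch of the torch's distance-to-vibration mapping. The sensor range and
# the linear ramp are illustrative assumptions, not the device's actual firmware.
MAX_RANGE_CM = 150   # assumed farthest distance the infrared sensor reports
MIN_RANGE_CM = 10    # assumed closest reliable reading

def vibration_intensity(distance_cm: float) -> float:
    """Map a detected distance to a vibration strength between 0.0 and 1.0.

    Closer objects produce a stronger buzz, mirroring the article's
    description of the intensity rising as the torch nears an object.
    """
    if distance_cm >= MAX_RANGE_CM:
        return 0.0                                   # nothing in range: no buzz
    d = max(distance_cm, MIN_RANGE_CM)               # clamp very close readings
    return (MAX_RANGE_CM - d) / (MAX_RANGE_CM - MIN_RANGE_CM)

for d in (150, 100, 50, 10):
    print(f"{d:>3} cm -> intensity {vibration_intensity(d):.2f}")
```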
"Results of this experiment point in the direction of different kinds of tools or sensory augmentation devices that could help people who have visual impairment or other sorts of perceptual deficiencies. This could start a research program that could help people like that," Favela says.
Favela presented his research “Augmenting the Sensory Judgment Abilities of the Visually Impaired” at the American Psychological Association’s (APA) annual convention, held Aug. 7-10 in Washington, D.C. More than 11,000 psychology professionals, scholars and students from around the world annually attend APA’s convention.
A Growing Population in Need
Favela studies how people perceive their environment and how those perceptions inform their judgments. For this experiment, he was inspired by what he knew about the surging population of visually impaired Americans.

The Centers for Disease Control and Prevention (CDC) predicts that more than 6 million Americans age 40 and older will be affected by blindness or low vision by 2030 – double the number from 2004 – due to diabetes or other chronic diseases and the rapidly aging population. The CDC also notes that vision loss is among the top 10 causes of disability in the U.S., and vision impairment is one of the most prevalent disabilities in children.
"In my research I’ve found that there’s an emotional stigma that people who are visually impaired experience, particularly children," Favela says. "When you’re a kid in elementary school, you want to blend in and be part of the group. It’s hard to do that when you’re carrying this big, white cane."
Substituting Sight with Touch
In Favela’s experiment, 27 undergraduate students with normal or corrected-to-normal vision and no prior experience with mobility assistance devices were asked to make perceptual judgments about their ability to pass through an opening a few feet in front of them without needing to shift their normal posture. Favela tested participants’ judgments in three ways: using only their vision, using a cane while blindfolded and using the Enactive Torch while blindfolded. The idea was to compare judgments made with vision against those made by touch.

The results of the experiment were surprising. Favela figured vision-based judgments would be the most accurate because vision tends to be most people’s dominant perceptual modality. However, he found the three types of judgments were equally accurate.
"When you compare the participants’ judgments with vision, cane and Enactive Torch, there was not a significant difference, meaning that they made the same judgments," Favela says. "The three modalities are functionally equivalent. People can carry out actions just about to the same degree whether they’re using their vision or their sense of touch. I was really surprised."
Favela plans additional experiments requiring more complicated judgments, such as the ability to step over an obstacle or to climb stairs. With further study and improvements to the Enactive Torch, Favela says similar tools that augment touch-based perception could have a significant impact on the lives of the visually impaired.
"If the future version of the Enactive Torch is smaller and more compact, kids who use it wouldn’t stand out from the crowd, they might feel like they blend in more," he says, noting people can quickly adapt to using the torch. "That bodes well, say, for someone in the Marines who was injured by a roadside bomb. They could be devastated. But hope’s not lost. They will learn how to navigate the world pretty quickly."
In an extensive study of sleep monitoring and sleeping pill use in astronauts, researchers from Brigham and Women’s Hospital (BWH) Division of Sleep and Circadian Disorders, Harvard Medical School, and the University of Colorado found that astronauts suffer considerable sleep deficiency in the weeks leading up to and during space flight. The research also highlights widespread use of sleep medication among astronauts.
The study, published in The Lancet Neurology on August 8, 2014, recorded more than 4,000 nights of sleep on Earth, and more than 4,200 nights in space using data from 64 astronauts on 80 Shuttle missions and 21 astronauts aboard International Space Station (ISS) missions. The 10-year study is the largest study of sleep during space flight ever conducted. The study concludes that more effective countermeasures to promote sleep during space flight are needed in order to optimize human performance.
"Sleep deficiency is pervasive among crew members," stated Laura K. Barger, PhD, associate physiologist in the BWH Division of Sleep and Circadian Disorders, and lead study author. "It’s clear that more effective measures are needed to promote adequate sleep in crew members, both during training and space flight, as sleep deficiency has been associated with performance decrements in numerous laboratory and field-based studies."
Despite NASA scheduling 8.5 hours of sleep per night for crew members in space flight, the average (mean) duration of sleep during space flight was just under six (5.96) hours on shuttle missions, and just over six hours (6.09) on ISS missions. Twelve percent of sleep episodes on shuttle missions and 24 percent on ISS missions lasted seven hours or more, as compared to 42 percent and 50 percent, respectively, in a post-flight data collection interval when most astronauts slept at home.
Moreover, the results suggest that astronauts’ build-up of sleep deficiency began long before launch, as they averaged less than 6.5 hours sleep per night during the training interval occurring approximately three months prior to space flight.
The research also highlights widespread use of sleeping medications such as zolpidem and zaleplon during space flight. Three-quarters of ISS crew members reported taking sleep medication at some point during their time on the space station, and more than three-quarters (78 percent) of shuttle-mission crew members used medication on more than half (52 percent) of nights in space.
"The ability for a crew member to optimally perform if awakened from sleep by an emergency alarm may be jeopardized by the use of sleep-promoting pharmaceuticals," said Barger. "Routine use of such medications by crew members operating spacecraft are of particular concern, given the U. S. Federal Drug Administration (FDA) warning that patients using sleeping pills should be cautioned against engaging in hazardous occupations requiring complete mental alertness or motor coordination, including potential impairment of performance of such activities that may occur the day following ingestion of sedative/hypnotics. This consideration is especially important because all crew members on a given mission may be under the influence of a sleep promoting medication at the same time."
Charles Czeisler, PhD, MD, FRCP, chief, BWH Division of Sleep and Circadian Disorders, and senior study author, adds: “Future exploration spaceflight missions to the moon, Mars or beyond will require development of more effective countermeasures to promote sleep during spaceflight in order to optimize human performance. These measures may include scheduling modifications, strategically timed exposure to specific wavelengths of light, and behavioral strategies to ensure adequate sleep, which is essential for maintaining health, performance and safety.”
At least one part of the human brain may be able to process information the same way in older age as it does in the prime of life, according to new research conducted at the University of Adelaide.

A study compared the ability of 60 older and younger people to respond to visual and non-visual stimuli in order to measure their “spatial attention” skills.
Spatial attention is critical for many aspects of life, from driving, to walking, to picking up and using objects.
"Our studies have found that older and younger adults perform in a similar way on a range of visual and non-visual tasks that measure spatial attention," says Dr Joanna Brooks, who conducted the study as a Visiting Research Fellow with the University of Adelaide’s School of Psychology and the School of Medicine.
"Both younger (aged 18-38 years) and older (55-95 years) adults had the same responses for spatial attention tasks involving touch, sight or sound.
"In one task, participants were asked to feel wooden objects whilst blindfolded and decide where the middle of the object was - participants’ judgements were significantly biased towards the left-hand side of the true object centre. This bias is subtle but highly consistent," Dr Brooks says.
"When we think of ageing, we think not just of the physical aspects but also the cognitive side of it, especially when it comes to issues such as reaction time, which is typically slower among older adults. However, our research suggests that certain types of cognitive systems in the right cerebral hemisphere - like spatial attention - are ‘encapsulated’ and may be protected from ageing," she says.
Dr Brooks, who is now a Research Fellow in Healthy Ageing based at the Australian National University, recently presented her results at the 12th International Cognitive Neuroscience Conference in Brisbane. Her project is part of an international collaboration with scientists at the University of Edinburgh and Queen Margaret University in Scotland to better understand spatial attention in the human brain.
"Our results challenge current models of cognitive ageing because they show that the right side of the brain remains dominant for spatial processing throughout the entire adult lifespan," Dr Brooks says. "We now need to better understand how and why some areas of the brain seem to be more affected by ageing than others."
Dr Brooks’s research could also be helpful in better understanding how diseases such as Alzheimer’s affect the brain.
Nature is thrifty. The same signals that embryonic cells use to decide whether to become nerves, skin or bone come into play again when adult animals are learning whether to become afraid.
Researchers at Yerkes National Primate Research Center, Emory University, have learned that the molecule Notch, critical in many processes during embryonic development, is also involved in fear memory formation. Understanding fear memory formation is critical to developing more effective treatments and preventions for anxiety disorders such as post-traumatic stress disorder (PTSD). The results are scheduled for publication online this week by the journal Neuron.
"We are finding that developmental pathways that appear to be quiescent during adulthood are transiently reactivated to allow new memory formation to occur," says Kerry Ressler, MD, PhD, professor of psychiatry and behavioral sciences at Emory University School of Medicine and Yerkes National Primate Research Center, and senior author of the paper.
The first author of the paper is postdoctoral fellow Brian Dias, PhD, and co-authors include undergraduates Jared Goodman, Ranbir Ahluwalia and Audrey Easton, and post-doctoral researcher Raul Andero, PhD.
The Notch signaling pathway, present in insects, worms and vertebrates, is involved in embryonic patterning as well as nervous system and cardiovascular development. It’s a way for cells to communicate and coordinate which cells are going to become what types of tissues.
Dias and Ressler probed the Notch pathway because they were examining many genes that are activated in the brains of mice after they learn to become afraid of a sound paired with a mild foot-shock. They were looking for changes in the amygdala, a region of the brain known to regulate fear learning.
The researchers were particularly interested in microRNAs, which do not encode proteins but can inhibit other genes, often several at once in a coordinated way. Dias and Ressler found that levels of miRNA-34a are increased in the amygdala after fear learning occurs. A day after fear training, animals whose brains were injected with a virus engineered to carry a “sponge” against miRNA-34a froze less often than control animals.
The researchers found that miRNA-34a regulated several genes that encode components of the Notch pathway. They believe their study is the first to link miRNA-34a and Notch signaling to a role in memory consolidation.
Notch is under investigation as a target in the treatment of various cancers, and some drugs that target Notch have been well tolerated by humans.
"From a therapeutic perspective, our data suggest that relevant drugs that regulate Notch signaling could potentially be a starting point for preventing or treating PTSD," Dias says.
Older adults who are tested at their optimal time of day (the morning) not only perform better on demanding cognitive tasks but also activate the same brain networks responsible for paying attention and suppressing distraction as younger adults, according to Canadian researchers.

The study, published online July 7th in the journal Psychology and Aging (ahead of print publication), has yielded some of the strongest evidence yet that there are noticeable differences in brain function across the day for older adults.
“Time of day really does matter when testing older adults. This age group is more focused and better able to ignore distraction in the morning than in the afternoon,” said lead author John Anderson, a PhD candidate with the Rotman Research Institute at Baycrest Health Sciences and University of Toronto, Department of Psychology.
“Their improved cognitive performance in the morning correlated with greater activation of the brain’s attentional control regions – the rostral prefrontal and superior parietal cortex – similar to that of younger adults.”
Asked how his team’s findings may be useful to older adults in their daily activities, Anderson recommended that older adults try to schedule their most mentally challenging tasks for the morning. Those tasks could include doing taxes, taking a test (such as a driver’s license renewal), seeing a doctor about a new condition, or cooking an unfamiliar recipe.
In the study, 16 younger adults (aged 19-30) and 16 older adults (aged 60-82) participated in a series of memory tests during the afternoon, from 1-5 p.m. The tests involved studying and recalling a series of picture and word combinations flashed on a computer screen. Irrelevant words linked to certain pictures and irrelevant pictures linked to certain words also flashed on the screen as a distraction. During the testing, participants’ brains were scanned with fMRI, which allows researchers to detect with great precision which areas of the brain are activated. Older adults were 10 percent more likely to pay attention to the distracting information than younger adults, who were able to successfully focus on and block out this information. The fMRI data confirmed that older adults showed substantially less engagement of the attentional control areas of the brain compared to younger adults. Indeed, older adults tested in the afternoon were “idling” – showing activations in the default mode network (a set of regions that come online primarily when a person is resting or thinking about nothing in particular), indicating that perhaps they were having great difficulty focusing. When a person is fully engaged in focusing, resting-state activations are suppressed.
When 18 older adults were tested in the morning (8:30 a.m. – 10:30 a.m.), they performed noticeably better, according to two separate behavioural measures of inhibitory control. They attended to fewer distracting items than their peers tested at off-peak times of day, narrowing the performance gap with younger adults. Importantly, older adults tested in the morning activated the same brain areas young adults did to successfully ignore the distracting information. This suggests that when older adults are tested matters both for how they perform and for what brain activity one should expect to see.
“Our research is consistent with previous science reports showing that at a time of day that matches circadian arousal patterns, older adults are able to resist distraction,” said Dr. Lynn Hasher, senior author on the paper and a leading authority in attention and inhibitory functioning in younger and older adults.
The Baycrest findings offer a cautionary flag to those who study cognitive function in older adults. “Since older adults tend to be morning-type people, ignoring time of day when testing them on some tasks may create an inaccurate picture of age differences in brain function,” said Dr. Hasher, senior scientist at Baycrest’s Rotman Research Institute and Professor of Psychology at University of Toronto.
Researchers from The University of Western Australia have shown that electromagnetic stimulation can alter brain organisation, which may make your brain work better.

In results from a study published today in the prestigious Journal of Neuroscience, researchers from The University of Western Australia and the Université Pierre et Marie Curie in France demonstrated that weak sequential electromagnetic pulses (repetitive transcranial magnetic stimulation - or rTMS) on mice can shift abnormal neural connections to more normal locations.
The discovery has important implications for treatment of many nervous system disorders related to abnormal brain organisation such as depression, epilepsy and tinnitus.
To better understand what magnetic stimulation does to the brain, Research Associate Professor Jennifer Rodger from UWA’s School of Animal Biology and her colleagues tested a low-intensity version of the therapy - known as low-intensity repetitive transcranial magnetic stimulation (LI-rTMS) - on mice born with abnormal brain organisation.
Lead author, PhD candidate Kalina Makowiecki, said the research demonstrated that even at low intensities, pulsed magnetic stimulation could reduce abnormally located neural connections, shifting them towards their correct locations in the brain.
"This reorganisation is associated with changes in a specific brain chemical, and occurred in several brain regions, across a whole network. Importantly, this structural reorganisation was not seen in the healthy brain or the appropriate connections in the abnormal mice, suggesting that the therapy could have minimal side effects in humans.
"Our findings greatly increase our understanding of the specific cellular and molecular events that occur in the brain during this therapy and have implications for how best to use it in humans to treat disease and improve brain function," Ms Makowiecki said.
In a long-term, large-scale, population-based study of individuals aged 55 years or older in the general population, researchers found that those diagnosed with mild cognitive impairment (MCI) had a four-fold increased risk of developing dementia or Alzheimer’s disease (AD) compared to cognitively healthy individuals. Several risk factors, including older age, positive APOE-ɛ4 status, low total cholesterol levels, and stroke, as well as specific MRI findings, were associated with an increased risk of developing MCI. The results are published in a supplement to the Journal of Alzheimer’s Disease.
“Mild cognitive impairment has been identified as the transitional stage between normal aging and dementia,” comments M. Arfan Ikram, MD, PhD, a neuroepidemiologist at Erasmus MC University Medical Center (Rotterdam). “Identifying persons at a higher risk of dementia could help postpone or even prevent dementia through the timely targeting of modifiable risk factors.”
Unlike a clinical trial, the Rotterdam study is an observational cohort study focusing on the general population, instead of persons referred to a memory clinic. The Rotterdam study began in 1990, when almost 8,000 inhabitants of Rotterdam aged 55 years or older agreed to participate in the study. Ten years later, another 3,000 individuals were added. Participants undergo home interviews and examinations every four years.
“This important prospective study adds to the accumulating evidence that strokes, presumably related to so-called ‘vascular’ risk factors, also contribute to the appearance of dementia in Alzheimer’s disease. This leads to the conclusion that starting at midlife people should minimize those risk factors. The recent results of the Finnish FINGER study corroborate this idea. It should be remembered that delaying the onset of dementia by five years will reduce the prevalence of the disease by half. And of course, since there is no cure for AD, prevention is the best approach at present,” explains Professor Emeritus Amos D Korczyn, Tel Aviv University, Ramat Aviv, Israel, and Guest Editor of the Supplement.
To be diagnosed with MCI in the study, individuals were required to meet three criteria: a self-reported awareness of having problems with memory or everyday functioning; deficits detected on a battery of cognitive tests; and no evidence of dementia. They were categorized into those with memory problems (amnestic MCI) and those with normal memory (non-amnestic MCI).
Of 4,198 persons found to be eligible for the study, almost 10% were diagnosed with MCI. Of these, 163 had amnestic MCI and 254 had non-amnestic MCI.
The risk of dementia was especially high for people with amnestic MCI. Similar results were observed regarding the risk for Alzheimer’s disease. Those with MCI also faced a somewhat higher risk of death.
The research team investigated possible determinants of MCI, considering factors such as age, APOE-ɛ4 status, waist circumference, hypertension, diabetes mellitus, total and HDL-cholesterol levels, smoking, and stroke. Only older age, being an APOE-ɛ4 carrier, low total cholesterol levels, and stroke at baseline were associated with developing MCI. Having the APOE-ɛ4 genotype and smoking were related only to amnestic MCI.
When the investigators analysed MRI studies of the brain, they found that participants with MCI, particularly those with non-amnestic MCI, had larger white matter lesion volumes and worse microstructural integrity of normal-appearing white matter compared to controls. They were also three times more likely than controls to have lacunes (cerebrospinal fluid-filled cavities of 3 to 15 mm in the basal ganglia or white matter, frequently observed when imaging older people). MCI was not associated with total brain volume, hippocampal volume, or cerebral microbleeds.
“Our results suggest that accumulating vascular damage plays a role in both amnestic and non-amnestic MCI,” says Dr. Ikram. “We propose that timely targeting of modifiable vascular risk factors might contribute to the prevention of MCI and dementia.”
Reference:
Determinants, MRI Correlates, and Prognosis of Mild Cognitive Impairment: The Rotterdam Study. Renée F.A.G. de Bruijn, Saloua Akoudad, Lotte G.M. Cremers, Albert Hofman, Wiro J. Niessen, Aad van der Lugt, Peter J. Koudstaal, Meike W. Vernooij, M. Arfan Ikram. Journal of Alzheimer’s Disease, Volume 42/Supplement 3 (August 2014): 2013 International Congress on Vascular Dementia (Guest Editor: Amos D. Korczyn)
Vitamin D deficiency is associated with a substantially increased risk of dementia and Alzheimer’s disease in older people, according to the most robust study of its kind ever conducted.

An international team, led by Dr David Llewellyn at the University of Exeter Medical School, found that study participants who were severely Vitamin D deficient were more than twice as likely to develop dementia and Alzheimer’s disease.
The team studied elderly Americans who took part in the Cardiovascular Health Study. They discovered that adults in the study who were moderately deficient in vitamin D had a 53 per cent increased risk of developing dementia of any kind, and the risk increased to 125 per cent in those who were severely deficient.
Similar results were recorded for Alzheimer’s disease, with the moderately deficient group 69 per cent more likely to develop this type of dementia, jumping to a 122 per cent increased risk for those severely deficient.
The study was part-funded by the Alzheimer’s Association, and is published in the August 6, 2014, online issue of Neurology, the medical journal of the American Academy of Neurology. It looked at 1,658 adults aged 65 and over, who were able to walk unaided and were free from dementia, cardiovascular disease and stroke at the start of the study. The participants were then followed for six years to investigate who went on to develop Alzheimer’s disease and other forms of dementia.
Dr Llewellyn said: “We expected to find an association between low Vitamin D levels and the risk of dementia and Alzheimer’s disease, but the results were surprising – we actually found that the association was twice as strong as we anticipated.
“Clinical trials are now needed to establish whether eating foods such as oily fish or taking vitamin D supplements can delay or even prevent the onset of Alzheimer’s disease and dementia. We need to be cautious at this early stage and our latest results do not demonstrate that low vitamin D levels cause dementia. That said, our findings are very encouraging, and even if a small number of people could benefit, this would have enormous public health implications given the devastating and costly nature of dementia.”
Research collaborators included experts from Angers University Hospital, Florida International University, Columbia University, the University of Washington, the University of Pittsburgh and the University of Michigan. The study was supported by the Alzheimer’s Association, the Mary Kinross Charitable Trust, the James Tudor Foundation, the Halpin Trust, the Age Related Diseases and Health Trust, the Norman Family Charitable Trust, and the National Institute for Health Research Collaboration for Leadership in Applied Research and Care South West Peninsula (NIHR PenCLAHRC).
Dementia is one of the greatest challenges of our time, with 44 million cases worldwide – a number expected to triple by 2050 as a result of rapid population ageing. A billion people worldwide are thought to have low vitamin D levels and many older adults may experience poorer health as a result.
The research is the first large study to investigate the relationship between vitamin D and dementia risk where the diagnosis was made by an expert multidisciplinary team, using a wide range of information including neuroimaging. Previous research established that people with low vitamin D levels are more likely to go on to experience cognitive problems, but this study confirms that this translates into a substantial increase in the risk of Alzheimer’s disease and dementia.
Vitamin D comes from three main sources – exposure of skin to sunlight, foods such as oily fish, and supplements. Older people’s skin can be less efficient at converting sunlight into Vitamin D, making them more likely to be deficient and reliant on other sources. In many countries the amount of UVB radiation in winter is too low to allow vitamin D production.
The study also found evidence that there is a threshold level of Vitamin D circulating in the bloodstream below which the risk of developing dementia and Alzheimer’s disease increases. The team had previously hypothesized that this might lie in the region of 25-50 nmol/L, and their new findings confirm that vitamin D levels above 50 nmol/L are most strongly associated with good brain health.
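In code form, those thresholds amount to a simple classification. The sketch below uses the 25 and 50 nmol/L cut-points discussed above; the exact boundary handling and the category labels are assumptions for illustration.

```python
# Sketch of the vitamin D categories implied by the thresholds above
# (25 and 50 nmol/L); boundary handling at exactly 25 or 50 is an assumption.
def vitamin_d_category(level_nmol_l: float) -> str:
    if level_nmol_l < 25:
        return "severely deficient"    # more than twice the dementia risk, per the study
    if level_nmol_l < 50:
        return "moderately deficient"  # 53 percent increased dementia risk, per the study
    return "sufficient"                # >= 50 nmol/L: most strongly associated with good brain health

for level in (10, 30, 60):
    print(level, "nmol/L ->", vitamin_d_category(level))
```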
Commenting on the study, Dr Doug Brown, Director of Research and Development at Alzheimer’s Society said: “Shedding light on risk factors for dementia is one of the most important tasks facing today’s health researchers. While earlier studies have suggested that a lack of the sunshine vitamin is linked to an increased risk of Alzheimer’s disease, this study found that people with very low vitamin D levels were more than twice as likely to develop any kind of dementia.
“During this hottest of summers, hitting the beach for just 15 minutes of sunshine is enough to boost your vitamin D levels. However, we’re not quite ready to say that sunlight or vitamin D supplements will reduce your risk of dementia. Large-scale clinical trials are needed to determine whether increasing vitamin D levels in those with deficiencies can help prevent dementia from developing.”
A Japanese research group led by Prof Norihiro Sadato, a professor at the National Institute for Physiological Sciences (NIPS), National Institutes of Natural Sciences (NINS), has found that people with autism spectrum disorders (ASD) have decreased activity in an area of the brain critical for recognizing when one’s own movements are being imitated by others. These results will be published in Neuroscience Research.

The research group of Norihiro Sadato, a professor at NIPS, Hirotaka Kosaka, a specially appointed associate professor at the University of Fukui, and Toshio Munesue, a professor at Kanazawa University, used functional magnetic resonance imaging (fMRI) to measure brain activity while a subject’s movements were being imitated by others. The group examined brain activity as subjects saw their finger movements either imitated or not imitated by others. Control subjects showed increased activity in the extrastriate body area (EBA) when they were imitated compared with when they were not. The EBA is a region of the visual cortex that responds strongly during the perception of human body parts. In subjects with ASD, by contrast, this increase in EBA activity was not observed, indicating that the EBA does not function properly when these subjects are being imitated.
People with ASD are known to have difficulty with interpersonal communication and trouble noticing that their movements are being imitated. Ongoing behavioral intervention research indicates that training based on imitation is useful for alleviating ASD. The present results not only provide clues to the basis of ASD, but could also be used to evaluate behavioral interventions for the disorder.
Autism does not appear to be solely caused by a deficiency of oxytocin, but the hormone’s universal ability to boost social function may prove useful in treating a subset of children with the developmental disorder, according to new findings from the Stanford University School of Medicine and Lucile Packard Children’s Hospital Stanford.

Low levels of oxytocin, a hormone involved in social functioning, have for years been suspected of causing autism. Prior research seeking a link has produced mixed results. Now, in the largest-ever study to test the purported connection, the range of blood oxytocin levels has been shown to be the same in children with autism as that observed in two comparison groups: children with autistic siblings and children without autistic siblings. In other words, similar numbers of children with low, medium and high oxytocin levels were found in all three groups.
A paper describing the new findings was published online Aug. 4 in Proceedings of the National Academy of Sciences.
Although autism was not directly linked to oxytocin deficiency, the Stanford team found that higher oxytocin levels were linked to better social functioning in all groups. All children with autism have social deficits, but in the study these deficits were worst in those with the lowest blood oxytocin and mildest in those with the highest oxytocin. In the comparison groups, children’s social skills also fell across a range that correlated to their oxytocin levels.
Regulator of social functioning
“Oxytocin appears to be a universal regulator of social functioning in humans,” said Karen Parker, PhD, assistant professor of psychiatry and behavioral sciences and the lead author of the study. “That encompasses both typically developing children as well as those with the severe social deficits we see in children with autism.”
Autism is a developmental disorder that affects 1 of every 68 children in the United States. It is characterized by social and communication deficits, repetitive behaviors and sensory problems. The new study included 79 children with autism, 52 of their unaffected siblings and 62 unrelated children without autism. All of the children were between the ages of 3 and 12.
“It didn’t matter if you were a typically developing child, a sibling or an individual with autism: Your social ability was related to a certain extent to your oxytocin levels, which is very different from what people have speculated,” said Antonio Hardan, MD, professor of psychiatry and behavioral sciences and the study’s senior author. Hardan is a child and adolescent psychiatrist who treats children with autism at the hospital.
“The previous hypotheses saying that low oxytocin was linked to autism were maybe a little bit simplistic,” he said. “It’s much more complex: Oxytocin is a vulnerability factor that has to be accounted for, but it’s not the only thing leading to the development of autism.”
The researchers caution, however, that blood oxytocin measurements may differ from oxytocin levels in the cerebrospinal fluid bathing the brain, which they did not measure.
In addition to examining blood oxytocin levels, the researchers examined the importance of small variations in the gene coding for the oxytocin receptor. Certain receptor variants were correlated with higher scores on standard tests of social ability, the study found.
Inheriting social abilities
The team also discovered that blood levels of oxytocin are highly heritable: the levels are influenced by inheritance to about the same degree as adult height, which is often described as being strongly influenced by genetics.
"What our study hints at is that social function may be heritable in families," Parker said.
The study will help to guide future research to determine whether oxytocin is a useful autism treatment. The study’s findings suggest that some children with autism — such as the subset of kids with autism who have naturally low oxytocin levels, or those with oxytocin receptor gene variants associated with worse social functioning — might benefit most from oxytocin-like drugs.
“Autism is so heterogeneous,” Parker said. “If we can identify biomarkers that help us identify the patients most likely to benefit from a specific therapy, we expect that will be very useful.”
In some women, abnormally high levels of a common and pervasive chemical may lead to adverse effects in their offspring. The study, published recently in the Journal of Clinical Endocrinology & Metabolism, is the first of its kind to shed light on the possible harmful effects of perchlorate on mothers and their children.

Using data from the Controlled Antenatal Thyroid Study (CATS) cohort, researchers at Boston University School of Medicine (BUSM) and Cardiff University studied the effects of perchlorate, an environmental contaminant found in many foods and in some drinking water supplies, on children born to mothers with above-average levels of the substance in their systems. They examined 487 mother-child pairs from women with underactive thyroid glands; the offspring of the 50 women with the highest perchlorate levels had below-average IQs compared with the other children.
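In outline, that comparison amounts to ranking mothers by perchlorate level and contrasting offspring IQ in the highest-exposure group against the rest. The sketch below shows one way to do this with Welch’s t-test on simulated values; it is an illustration of the idea, not the CATS analysis itself.

# Illustration on simulated values: compare offspring IQ of the 50
# highest-perchlorate mothers against the remaining cohort.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 487
perchlorate = rng.lognormal(mean=1.0, sigma=0.8, size=n)  # hypothetical ug/L
iq = rng.normal(100.0, 15.0, size=n) - 0.5 * perchlorate  # IQ dips with exposure

top50 = np.argsort(perchlorate)[-50:]            # highest-exposure mothers
rest = np.setdiff1d(np.arange(n), top50)

t, p = stats.ttest_ind(iq[top50], iq[rest], equal_var=False)  # Welch's t-test
print(f"mean IQ, top 50: {iq[top50].mean():.1f}; rest: {iq[rest].mean():.1f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")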
"The reason people really care about perchlorate is because it is ubiquitous. It’s everywhere," said Elizabeth Pearce, MD, MSc, associate professor of medicine at BUSM. "Prior studies have already shown perchlorate, at low levels, can be found in each and every one of us."
Perchlorate is a compound known to affect the thyroid gland, an organ that helps regulate hormone levels in humans. According to Pearce, previous studies have attempted to implicate this anti-thyroid activity in pregnant mothers as a possible cause of hypothyroidism, an underactive thyroid gland. Hypothyroidism in newborns and children can lead to an array of unwelcome effects, including below-average intelligence.
Research at the University of Reading has provided a new understanding of how our brain processes information to change how we see the world.

Using a simple computer game, akin to a 3D version of the 80s game Pong, the researchers examined how the brain recalibrates its perception of slant in order to bounce a moving ball through a target hoop.
They found that the brain uses an internal simulation of the laws of physics to change its perception of slant in order to ‘score’ consistently.
The findings provide a unique insight into why humans are such an adaptable and skillful species. As engineers work to develop effective autonomous robots, they are starting to look at how human sensory systems effortlessly achieve what is currently impossible for robotic systems.
The study, funded by the Engineering and Physical Sciences Research Council and the Wellcome Trust, saw participants play a 3D game where they had to adjust the slant of a surface so that a moving ball bounced off it and through a target hoop.
Part way through the game, without telling the participants, researchers altered the bounce of the ball so that the surface behaved differently to the slant signalled by visual cues.
When faced with the altered bounce, participants changed their behaviour to continue scoring points. At the same time, their brain recalibrated their perception of slant - simulating the laws of physics to actually change how the slant looked. In a separate group, making the ball spin eliminated this recalibration.
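The “internal simulation” at issue can be pictured concretely: for a frictionless surface, a ball’s bounce is predicted by reflecting its velocity about the surface normal, so a different assumed slant yields a different predicted bounce. The sketch below illustrates this with arbitrary example values; it is a simplified picture, not the study’s model.

# Predicting a bounce by reflecting velocity about the surface normal of a
# frictionless slanted surface. Arbitrary example values.
import numpy as np

def bounce(velocity, slant_deg):
    """Reflect a 2D velocity off a surface tilted slant_deg from horizontal."""
    theta = np.radians(slant_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit surface normal
    return velocity - 2.0 * np.dot(velocity, normal) * normal

incoming = np.array([1.0, -1.0])  # ball moving rightward and downward
print("bounce off 10 deg slant:", bounce(incoming, 10.0))
print("bounce off 20 deg slant:", bounce(incoming, 20.0))

In this picture, recalibrating perceived slant is equivalent to changing the normal vector the brain uses when predicting the bounce.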
Dr. Peter Scarfe from the School of Psychology and Clinical Language Sciences, who conducted the study with colleague Prof. Andrew Glennerster, said: “We take for granted our amazing ‘adaptability’, which allows us to enjoy pastimes such as DIY or playing ball sports. However, little is known about the brain mechanisms that enable us to do these activities. Our research shows how our brains appear to have an intimate understanding of the laws of physics. In addition to aiding skillful action, this can change how we perceive the world around us.”
The researchers say understanding the basic mechanisms that allow the brain to calibrate sensory information will prove vital in the design of future autonomous robots.
Dr. Scarfe continued: “The human brain exhibits expert skill in making predictions about how the world behaves. For example, a child can bounce a ball off a wall and understand how spinning the ball alters its bounce. However, many of the fine motor skills of a young child are currently way beyond the capability of modern robots. Understanding how sensory systems adapt to feedback about the consequences of actions is likely to be key in solving this problem.”
“Humans Use Predictive Kinematic Models to Calibrate Visual Cues to Three-Dimensional Surface Slant” is published in the Journal of Neuroscience.
Researchers at Yale School of Medicine have discovered a new drug compound that reverses the brain deficits of Alzheimer’s disease in an animal model. Their findings are published in the Aug. 5 issue of the journal PLoS Biology.
The compound, TC-2153, inhibits the negative effects of a protein called STriatal-Enriched protein tyrosine Phosphatase (STEP), which is key to regulating learning and memory. These cognitive functions are impaired in Alzheimer’s.
"Decreasing STEP levels reversed the effects of Alzheimer’s disease in mice," said lead author Paul Lombroso, M.D., professor in the Yale Child Study Center and in the Departments of Neurobiology and Psychiatry at Yale School of Medicine.
Lombroso and co-authors studied thousands of small molecules, searching for those that would inhibit STEP activity. Once identified, those STEP-inhibiting compounds were tested in brain cells to examine how effectively they could halt the effects of STEP. They examined the most promising compound in a mouse model of Alzheimer’s disease, and found a reversal of deficits in several cognitive exercises that gauged the animals’ ability to remember previously seen objects.
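At its simplest, triaging such a screen means ranking compounds by how strongly they inhibit STEP activity in the assay and carrying the strongest hits forward. The toy sketch below shows that step with invented compound names and numbers; it is not the authors’ screening pipeline.

# Toy triage of a compound screen by percent inhibition of STEP activity.
# Compound names and numbers are invented.
screen = {
    "cmpd_001": 12.0,
    "cmpd_002": 87.5,
    "cmpd_003": 45.3,
    "cmpd_004": 91.2,
    "cmpd_005": 5.8,
}

# Keep compounds above a hit threshold, ranked strongest first
hits = sorted((name for name, inhib in screen.items() if inhib >= 50.0),
              key=screen.get, reverse=True)
print("hits for follow-up in cell-based assays:", hits)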
High levels of STEP protein keep synapses in the brain from strengthening, a process required to turn short-term memories into long-term ones. When STEP is elevated in the brain, it depletes receptors from synaptic sites and inactivates other proteins that are necessary for proper cognitive function. This disruption is implicated in Alzheimer’s disease and a number of neuropsychiatric and neurodegenerative disorders, all marked by cognitive deficits.
"The small molecule inhibitor is the result of a five-year collaborative effort to search for STEP inhibitors," said Lombroso. "A single dose of the drug results in improved cognitive function in mice. Animals treated with TC compound were indistinguishable from a control group in several cognitive tasks."
The team is currently testing the TC compound in other animals with cognitive defects, including rats and non-human primates. “These studies will determine whether the compound can improve cognitive deficits in other animal models,” said Lombroso. “Successful results will bring us a step closer to testing a drug that improves cognition in humans.”