Posts tagged learning

Messy children make better learners
Attention, parents: The messier your child gets while playing with food in the high chair, the more he or she is learning.
Researchers at the University of Iowa studied how 16-month-old children learn words for nonsolid objects, from oatmeal to glue. Previous research has shown that toddlers learn more readily about solid objects because they can easily identify them due to their unchanging size and shape. But oozy, gooey, runny stuff? Not so much.
New research shows that changes if you put toddlers in a setting they know well. In those instances, word learning increases, because children at that age are “used to seeing nonsolid things in this context, when they’re eating,” says Larissa Samuelson, associate professor in psychology at the UI who has worked for years on how children learn to associate words with objects. “And, if you expose them to these things when they’re in a high chair, they do better. They’re familiar with the setting and that helps them remember and use what they already know about nonsolids.”
In a paper published in the journal Developmental Science, Samuelson and her team at the UI tested their idea by exposing 16-month-olds to 14 nonsolid objects, mostly food and drinks such as applesauce, pudding, juice, and soup. They presented the items and gave them made-up words, such as “dax” or “kiv.” A minute later, they asked the children to identify the same food in different sizes or shapes. The task required the youngsters to go beyond relying simply on shape and size and to explore what the substances were made of to make the correct identification and word choice.
Not surprisingly, many children gleefully dove into this task by poking, prodding, touching, feeling, eating—and yes, throwing—the nonsolids in order to understand what they were and make the correct association with the hypothetical names. The toddlers who interacted the most with the foods—parents, interpret as you want—were more likely to correctly identify them by their texture and name them, the study determined. For example, imagine you were a 16-month-old gazing at a cup of milk and a cup of glue. How would you tell the difference by simply looking?
“It’s the material that makes many nonsolids what they are,” Samuelson notes, “and that’s how children name them.”
The setting matters, too, it seems. Children in a high chair were more apt to identify and name the food than those in other venues, such as seated at a table, the researchers found.
“It turns out that being in a high chair makes it more likely you’ll get messy, because kids know they can get messy there,” says Samuelson, the senior author on the paper.
The authors say the exercise shows how children’s behavior, environment (or setting), and exploration help them acquire an early vocabulary—learning that is linked to better later cognitive development and functioning.
“It may look like your child is playing in the high chair, throwing things on the ground, and they may be doing that, but they are getting information out of (those actions),” Samuelson contends. “And, it turns out, they can use that information later. That’s what the high chair did. Playing with these foods there actually helped these children in the lab, and they learned the names better.”
“It’s not about words you know, but words you’re going to learn,” Samuelson adds.
The pauses that refresh the memory
Certain symptoms of schizophrenia may arise from uncontrolled activation of neurons that help to build memories during periods of rest
People with schizophrenia experience a wide range of symptoms, including hallucinations and delusions as well as disorientation and problems with learning and memory. This diversity of neurological deficits has made schizophrenia extremely difficult for scientists to understand, thwarting the development of effective treatments. A research team led by Susumu Tonegawa from the RIKEN–MIT Center for Neural Circuit Genetics has now revealed disruptions in the activity of particular clusters of neurons that might account for certain core symptoms of this disorder.
Tonegawa’s laboratory previously found that mice lacking the protein calcineurin in certain regions of the brain exhibit many behavioral deficits that are characteristic of schizophrenia. In their most recent study, the researchers sought out physiological alterations at the single-cell or circuit level that could connect the absence of the calcineurin protein in the brain with these behavioral impairments.
Their study focused on the hippocampus, a region of the brain associated with memory and spatial learning. Within the hippocampus, specialized ‘place cells’ switch on and off as an animal explores its environment. During subsequent periods of wakeful rest, these place cells continue to fire in patterns that essentially ‘replay’ recent wanderings, allowing the brain to build memories based on these experiences. The researchers used precisely positioned electrodes to measure differences in brain activity in these cells for normal mice and the calcineurin-deficient mouse model of schizophrenia.
Remarkably, essentially identical place-cell activity patterns were observed for both sets of mice during active exploration. Once the animals were at rest, however, the calcineurin-deficient mice displayed a dramatic increase in place-cell activity. In the normal hippocampus, the resting replay process depended on sequential activity from place cells corresponding to specific, real-world spatial coordinates. In contrast, this correlation was all but lost in the calcineurin-deficient mice. Instead, these neurons often seemed to fire indiscriminately, creating high levels of ‘noise’ that overwhelmed actual location information and thwarted memory formation.
“Our study provides the first potential evidence of disorganized thinking processes in a schizophrenia model at the single-cell and circuit level,” says Junghyup Suh, a member of Tonegawa’s research team. These findings fit with an emerging model that suggests that schizophrenic symptoms may arise from excess activation of brain regions within a ‘default mode network’—which includes the hippocampus—during wakeful rest. “Neurobiological approaches that can calm down the default mode network may therefore open up new avenues to alleviating symptoms or curing this mental disorder,” says Suh.
To flexibly deal with our ever-changing world, we need to learn from both the negative and positive consequences of our behaviour. In other words, from punishment and reward. Hanneke den Ouden from the Donders Institute in Nijmegen demonstrated that serotonin and dopamine related genes influence how we base our choices on past punishments or rewards. This influence depends on which gene variant you inherited from your parents. These results were published in Neuron on 20 November.
The brain chemicals dopamine and serotonin partly determine our sensitivity to reward and punishment. At least, this was a common assumption. Hanneke den Ouden and Roshan Cools investigated this assumption together with colleagues from the Donders Institute and New York University. Den Ouden explains: ‘We used a simple computer game to test the genetic influence of the genes DAT1 and SERT, as these genes influence dopamine and serotonin. We discovered that the dopamine gene affects how we learn from the long-term consequences of our choices, while the serotonin gene affects our choices in the short term.’
Online game
‘In nearly 700 people we analysed which variant of the SERT and the DAT1 genes they had’, Den Ouden describes. ‘Using an online game, we investigated how well people are able to adjust their choice strategy after receiving a reward or a punishment.’ The players would repeatedly choose one of two symbols. Symbol A usually resulted in a reward whereas symbol B usually resulted in punishment. Halfway through the game, these rules were reversed. The game allowed the researchers to measure how flexible people are in adjusting their choices when the rules change. But it also showed whether people impulsively change their choice when the computer happened to give misleading feedback.
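The structure of such a two-choice game with a midpoint reversal can be sketched in a few lines. This is a toy illustration, not the researchers’ actual task or analysis code: the trial count, reward probability, and the naive “lose-shift” player (which switches impulsively after any misleading feedback, the short-term behavior the serotonin gene was linked to) are all invented for the example.

```python
import random

def run_reversal_game(n_trials=200, p_reward=0.8, seed=0):
    """Simulate a two-choice game: symbol 'A' is usually rewarded and
    'B' usually punished, with the contingencies swapping at the
    midpoint. The player uses a naive lose-shift rule, so it also
    switches impulsively after occasional misleading feedback."""
    rng = random.Random(seed)
    choice, history = 'A', []
    for t in range(n_trials):
        good = 'A' if t < n_trials // 2 else 'B'   # rules reverse halfway
        p = p_reward if choice == good else 1 - p_reward
        rewarded = rng.random() < p
        history.append((t, choice, rewarded))
        if not rewarded:                           # lose-shift
            choice = 'B' if choice == 'A' else 'A'
    return history

# After the reversal, a lose-shift player drifts toward symbol 'B'.
late = [c for t, c, _ in run_reversal_game() if t >= 150]
late_share_B = sum(c == 'B' for c in late) / len(late)
```

Measuring how quickly real players make that migration after the reversal, versus how often they switch after a one-off misleading outcome, separates the long-term flexibility and short-term impulsivity that the two genes were found to influence.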
Different genes, different strategies
Den Ouden: ‘Different players use different strategies, which depend on their genetic material. People’s tendency to change their choice immediately after receiving a punishment depends on which serotonin gene variant they inherited from their parents. The dopamine gene variant, on the other hand, exerts influence on whether people can stop themselves making the choice that was previously rewarded, but no longer is.’
This study shows that dopamine and serotonin are important for different forms of flexibility associated with receiving reward and punishment. Many neuropsychiatric disorders caused by abnormal dopamine and/or serotonin levels are associated with forms of inflexibility, for example addiction, anxiety, or Parkinson’s disease. So this study not only tells us more about the heritability of our choice behaviour; a better understanding of the relationship between brain chemicals and behaviour in healthy people will ultimately help to provide us with better insight into these neuropsychiatric disorders.
(Source: ru.nl)
Mindfulness Inhibits Implicit Learning — The Wellspring of Bad Habits
Being mindful appears to help prevent the formation of bad habits, but perhaps good ones too. Georgetown University researchers are trying to unravel the impact of implicit learning, and their findings might appear counterintuitive — at first.
Consider this: when testing who would do best on a task to find patterns among a bunch of dots, many might think mindful people would score higher than those who are distracted. But researchers found the opposite: participants low on the mindfulness scale did much better on this test of implicit learning, the kind of learning that occurs without awareness.
This outcome might be surprising until one considers that behavioral and neuroimaging studies suggest that mindfulness can undercut the automatic learning processes — the kind that lead to development of good and bad habits, says the study’s lead author, Chelsea Stillman, a psychology PhD student. Stillman works in the Cognitive Aging Laboratory, led by the study’s senior investigator, Darlene Howard, PhD, Davis Family Distinguished Professor in the department of psychology and member of the Georgetown Center for Brain Plasticity and Recovery.
This study was aimed at examining how individual differences in mindfulness are related to implicit learning. “Our theory is that one learns habits — good or bad — implicitly, without thinking about them,” Stillman says. “So we wanted to see if mindfulness impeded implicit learning.”
That is what they found. Two samples of adult participants first completed a test that gauged their mindfulness as a character trait, then completed one of two tasks that measure implicit learning: either the Triplet-Learning Task or the Alternating Serial Reaction Time Task. Both tasks used circles on a screen, and participants were asked to respond to the location of certain colored circles. The tasks tested participants’ ability to learn complex, probabilistic patterns, although test takers would not be aware of that.
The researchers found that people reporting low on the mindfulness scale tended to learn more — their reaction times were quicker in targeting events that occurred more often within a context of preceding events than those that occurred less often.
“The very fact of paying too much attention or being too aware of stimuli coming up in these tests might actually inhibit implicit learning,” Stillman says. “That suggests that mindfulness may help prevent formation of automatic habits — which is done through implicit learning — because a mindful person is aware of what they are doing.”
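The kind of probabilistic-pattern learning these tasks measure can be caricatured in code. The sketch below is not the actual Triplet-Learning Task or Alternating Serial Reaction Time Task; the cues, targets, and 80/20 contingency are invented. It just shows how a learner that does nothing but count which target tends to follow each cue comes to anticipate the frequent one, without ever being told a rule:

```python
import random
from collections import defaultdict

def simulate_frequency_learner(n_trials=2000, p_frequent=0.8, seed=1):
    """After each cue, one of two targets appears; one target follows
    each cue 80% of the time. The learner predicts whichever target it
    has seen most often after the current cue, and we return its
    accuracy over the final 500 trials."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0, 0])   # per-cue target tallies
    correct = []
    for _ in range(n_trials):
        cue = rng.choice(['x', 'y'])
        frequent = 0 if cue == 'x' else 1          # hidden regularity
        target = frequent if rng.random() < p_frequent else 1 - frequent
        prediction = 0 if counts[cue][0] >= counts[cue][1] else 1
        correct.append(prediction == target)
        counts[cue][target] += 1                   # implicit tallying
    return sum(correct[-500:]) / 500

late_accuracy = simulate_frequency_learner()
```

In the real tasks the analogue of this rising accuracy is a reaction-time advantage for high-probability events, which is where the low-mindfulness participants outperformed.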
Cognitive scientists identify new mechanism at heart of early childhood learning and social behavior
Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners — in this case, 1-year-olds and their parents — to coordinate the process of joint attention, a key component of parent-child communication and early language learning.
Previous research on joint visual attention between parents and toddlers has focused exclusively on each partner’s ability to follow the other’s gaze. In “Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination,” published in the online journal PLOS ONE, the researchers demonstrate that coordinating attention through eye-hand activity is far more common, and that parent and toddler interact as equals rather than one or the other taking the lead.
The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.
"Currently, interventions consist of training children to look at the other’s face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other’s hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another’s hands to follow the other’s lead, not just gaze."
The new explanation solves some of the problems and inadequacies of the gaze-following theory. Gaze-following can be imprecise in the natural, cluttered environment outside the laboratory. It can be hard to tell precisely what someone is looking at when there are several objects together. It is easier and more precise to follow someone’s hands. In other situations, it may be more useful to follow the other’s gaze.
"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."
Researchers used innovative head-mounted eye-tracking technology that records the view of the wearer, much like Google Glass, and that had never before been used with young children. Recording moment-to-moment, high-density data on what both parent and child visually attend to as they play together in the lab, the researchers also applied advanced data-mining techniques to discover fine-grained eye, head, and hand movement patterns in the rich dataset derived from the multimodal recordings. The results reported are based on 17 parent-infant pairs; however, over the course of a few years, Yu and Smith have studied more than 100 children, and those data confirm the results.
"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."
Monkeys “understand” rules underlying language musicality
Many of us have mixed feelings when remembering painful lessons in German or Latin grammar in school. Languages feature a large number of complex rules and patterns: using them correctly makes the difference between something which “sounds good”, and something which does not. However, cognitive biologists at the University of Vienna have shown that sensitivity to very simple structural and melodic patterns does not require much learning, or even being human: South American squirrel monkeys can do it, too.
Language and music are structured systems, featuring particular relationships between syllables, words and musical notes. For instance, implicit knowledge of the musical and grammatical patterns of our language makes us notice right away whether a speaker is native or not. Similarly, the perceived musicality of some languages results from dependency relations between vowels within a word. In Turkish, for example, the last syllable in words like “kaplanlar” or “güller” must “harmonize” with the previous vowels. (Try it yourself: “güllar” requires more movement and does not sound as good as “güller”.)
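The front/back vowel dependency behind the “kaplanlar” vs. “güller” example can be expressed as a simple check. This is a deliberate simplification of Turkish phonology (real Turkish harmony also involves rounding, and loanwords break the pattern), offered only to make the dependency concrete:

```python
# Simplified Turkish vowel classes: front vs. back.
# Rounding harmony and loanword exceptions are deliberately ignored.
FRONT = set('eiöü')
BACK = set('aıou')

def harmonizes(word):
    """Return True if every vowel in the word belongs to the same
    backness class, i.e. the word obeys (simplified) vowel harmony."""
    vowels = [ch for ch in word.lower() if ch in FRONT | BACK]
    return (all(v in FRONT for v in vowels)
            or all(v in BACK for v in vowels))

examples = {w: harmonizes(w) for w in ('kaplanlar', 'güller', 'güllar')}
```

Here “kaplanlar” (all back vowels) and “güller” (all front vowels) pass, while the ill-formed “güllar” mixes classes and fails, which is exactly the kind of violated dependency the monkeys were tested on.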
Similar “dependencies” between words, syllables or musical notes can be found in languages and musical cultures around the world. The biological question is whether the ability to process dependencies evolved in human cognition along with human language, or is rather a more general skill, also present in other animal species who lack language.
Andrea Ravignani, a PhD candidate at the Department of Cognitive Biology at the University of Vienna, and his colleagues looked for this “dependency detection” ability in squirrel monkeys, small arboreal primates living in Central and South America. Inspired by the monkeys’ natural calls and hearing predispositions, the researchers designed a sort of “musical system” for monkeys. These “musical patterns” had overall acoustic features similar to monkeys’ calls, while their structural features mimicked syntactic or phonological patterns like those found in Turkish and many human languages.
Monkeys were first presented with “phrases” containing structural dependencies, and later tested using stimuli either with or without dependencies. Their reactions were measured using the “violation of expectations” paradigm. “Show up at work in your pyjamas, people will turn around and stare at you, while at a slumber party nobody will notice”, explains Ravignani: In other words, one looks longer at something that breaks the “standard” pattern. “This is not about absolute perception, rather how something is categorized and contrasted within a broader system.” Using this paradigm, the scientists found that monkeys reacted more to the “ungrammatical” patterns, demonstrating perception of dependencies. “This kind of experiment is usually done by presenting monkeys with human speech: Designing species-specific, music-like stimuli may have helped the squirrel monkeys’ perception”, argues primatologist and co-author Ruth Sonnweber.
"Our ancestors may have already acquired this simple dependency-detection ability some 30 million years ago, and modern humans would thus share it with many other living primates. Mastering basic phonological patterns and syntactic rules is not an issue for squirrel monkeys: the bar for human uniqueness has to be raised", says Ravignani: "This is only a tiny step: we will keep working hard to unveil the evolutionary origins and potential connections between language and music".
Literacy depends on nurture, not nature
A University at Buffalo education professor has sided with the environment in the timeless “nurture vs. nature” debate after his research found that a child’s ability to read depends mostly on where that child is born, rather than on his or her individual qualities.
“Individual characteristics explain only 9 percent of the differences in children who can read versus those who cannot,” says Ming Ming Chiu, lead author of an international study that explains this connection and a professor in the Department of Learning and Instruction in UB’s Graduate School of Education.
“In contrast, country differences account for 61 percent and school differences account for 30 percent,” Chiu says.
Therefore, he concludes, the country in which a child is born largely determines whether he or she will have at least basic reading skills. It’s clearly a case where “nurture” — the environment and surroundings of the child — is more important than “nature” — the child’s inherited, individual qualities, according to Chiu.
More than 99 percent of fourth-graders in the Netherlands can read, but only 19 percent of fourth-graders in South Africa can read, Chiu notes.
“Although the richest countries typically have high literacy rates exceeding 97 percent,” he says, “some rich countries, such as Qatar and Kuwait, have low literacy rates — 33 percent and 28 percent, respectively.”
The study, “Ecological, Psychological and Cognitive Components of Reading Difficulties: Testing the Component Model of Reading in Fourth-graders Across 38 Countries,” analyzed reading test scores of 186,725 fourth-graders from 38 countries, including more than 4,000 children from the U.S. Chiu and co-authors Catherine McBride-Chang of the Chinese University of Hong Kong and Dan Lin of the Hong Kong Institute of Education published the study in the winter 2013 issue of the Journal of Learning Disabilities.
The educators used data from the Progress in International Reading Literacy Study (PIRLS), an international assessment of fourth-grade reading.
Besides showing that the country of origin was a better predictor of reading skills than individual traits, the study also showed that other attributes at the child, school and country levels were all related to reading.
First, girls were more likely than boys to have basic reading skills, Chiu says. Children with greater early-literacy skills, better attitudes about reading or greater self-confidence in their reading ability also were more likely to have strong basic reading skills.
“Children were more likely to have basic reading skills if they were from privileged families, as measured through socioeconomic status, number of books at home and parent attitudes about reading,” says Chiu. “Also, children attending schools with better school climate and more resources were more likely to have basic reading skills.
“Our U.S. culture values ‘can-do’ individualism, but we forget how much depends on being lucky enough to be born in the right place,” he says.
All animals have to make decisions every day. Where will they live and what will they eat? How will they protect themselves? They often have to make these decisions as a group, too, turning what may seem like a simple choice into a far more nuanced process. So, how do animals know what’s best for their survival?

For the first time, Arizona State University researchers have discovered that at least in ants, animals can change their decision-making strategies based on experience. They can also use that experience to weigh different options.
The findings are featured today in the early online edition of the scientific journal Biology Letters, as well as in its Dec. 23 edition.
Co-authors Taka Sasaki and Stephen Pratt, both with ASU’s School of Life Sciences, have studied insect collectives, such as ants, for years. Sasaki, a postdoctoral research associate, specializes in adapting psychological theories and experiments designed for humans so they can be applied to ants, hoping to understand how a collective decision-making process arises out of individually ignorant ants.
“The interesting thing is we can make decisions and ants can make decisions – but ants do it collectively,” said Sasaki. “So how different are we from ant colonies?”
To answer this question, Sasaki and Pratt gave a number of Temnothorax rugatulus ant colonies a series of choices between two nests with differing qualities. In one treatment, the entrances of the nests had varied sizes, and in the other, the exposure to light was manipulated. Since these ants prefer both a smaller entrance size and a lower level of light exposure, they had to prioritize.
“It’s kind of like humans buying a house,” said Pratt, an associate professor with the school. “There’s so many options to consider – the size, the number of rooms, the neighborhood, the price, if there’s a pool. The list goes on and on. And for the ants it’s similar, since they live in cavities that can be dark or light, big or small. With all of these things, just like with a human house, it’s very unlikely to find a home that has everything you want.”
Pratt went on to explain that because a perfect habitat is impossible to find, ants make tradeoffs among the qualities they care about, ranking them in order of importance. But when faced with a decision between two different homes, the ants displayed a previously unseen level of intelligence.
According to their data, the series of choices the ants faced caused them to reprioritize their preferences based on the type of decision they faced. Ants that had to choose a nest based on light level prioritized light level over entrance size in the final choice. On the other hand, ants that had to choose a nest based on entrance size ranked light level lower in the later experiment.
This means that, like people, ants take the past into account when weighing options while making a choice. The difference is that ants somehow manage to do this as a colony without any dissent. While this research builds on groundwork previously laid down by Sasaki and Pratt, the newest experiments have already raised more questions.
“You have hundreds of these ants, and somehow they have to reach a consensus,” Pratt said. “How do they do it without anyone in charge to tell them what to do?”
Pratt likened individual ants to individual neurons in the human brain. Both play a key role in the decision-making process, but no one understands how every neuron influences a decision.
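One well-studied ingredient of leaderless consensus in Temnothorax emigrations is quorum sensing: each scout assesses nests independently and noisily, and the colony commits once enough nestmates accumulate at one site. The sketch below is a toy illustration of that idea, not the authors’ model; the nest qualities, quorum threshold, and visit dynamics are all invented:

```python
import random

def colony_choice(quality_a=0.9, quality_b=0.1, n_needed=20, seed=2):
    """Toy quorum rule: each step, a scout visits a random nest and
    stays with probability equal to that nest's quality. The colony
    'decides' once one nest accumulates `n_needed` occupants."""
    rng = random.Random(seed)
    at = {'A': 0, 'B': 0}
    while max(at.values()) < n_needed:
        nest = rng.choice(['A', 'B'])
        quality = quality_a if nest == 'A' else quality_b
        if rng.random() < quality:
            at[nest] += 1
    return max(at, key=at.get)

winner = colony_choice()   # the higher-quality nest wins the race
```

No individual ant compares the two nests directly; the better option simply recruits occupants faster and reaches the threshold first, which is one way a consensus can emerge with no one in charge.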
Sasaki and Pratt hope to delve deeper into the realm of ant behavior so that one day, they can understand how individual ants influence the colony. Their greater goal is to apply what they discover to help society better understand how humanity can make collective decisions with the same ease ants display.
“This helps us learn how collective decision-making works and how it’s different from individual decision-making,” said Pratt. “And ants aren’t the only animals that make collective decisions – humans do, too. So maybe we can gain some general insight.”
(Source: asunews.asu.edu)
A study in The Journal of Cell Biology describes how neurons activate the protein PP1, providing key insights into the biology of learning and memory.
PP1 is known to be a key regulator of synaptic plasticity, the phenomenon in which neurons remodel their synaptic connections in order to store and relay information—the foundation of learning and memory. But how PP1 is controlled has been unclear. Now, a team led by researchers from the LSU Health Science Center describes several mechanisms for PP1 regulation that close some major gaps in our understanding of its role in neuronal signaling.
Among the novel findings, the researchers describe how NMDA, a compound that activates a class of glutamate receptors, leads to activation of PP1. They show that, when NMDA activates neuronal synapses, it switches off an enzyme, Cdk5, that would otherwise inhibit PP1. This allows PP1 to activate itself and promote synaptic remodeling. In addition, the researchers suggest that, despite its name, a regulatory protein called inhibitor-2 helps promote PP1 activity in neurons. Together, these findings significantly extend our understanding of how PP1 is regulated in the context of synaptic plasticity.
(Source: eurekalert.org)
It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.
While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
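The timing dependence described here can be caricatured in a few lines. This is a toy plasticity rule, not the device physics reported in the paper: the exponential form, learning rate, decay constant, and conductance bounds are all invented for illustration. It captures two qualitative features of the device, that shorter delays between paired signals strengthen the connection more, and that the state persists between updates (non-volatile analog memory):

```python
import math

def update_conductance(g, delay_ms, eta=0.1, tau_ms=20.0,
                       g_min=0.0, g_max=1.0):
    """Toy timing-dependent rule: the shorter the delay between the
    'axon' and 'dendrite' signals, the larger the conductance boost.
    The returned value stays within physical bounds, and the caller
    carries it forward, mimicking a non-volatile analog state."""
    g = g + eta * math.exp(-abs(delay_ms) / tau_ms)
    return min(g_max, max(g_min, g))

g = 0.5
g_short = update_conductance(g, delay_ms=5)    # near-coincident: strong boost
g_long = update_conductance(g, delay_ms=100)   # long delay: barely changes
```

Because the state variable is continuous rather than binary, a rule like this has the “practically unlimited number of possible states” that the article contrasts with conventional on/off transistors.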
Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a voltage of corresponding magnitude, which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.
“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”
The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.
“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”
The nickelate system is also well positioned for seamless integration into existing silicon-based systems.
“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.
“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”
In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.
“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.
For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.
“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”
“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”
(Source: seas.harvard.edu)