Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

97 notes

Detecting Unidentified Changes
Does becoming aware of a change to a purely visual stimulus necessarily mean the observer can identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. This most likely occurs through observers monitoring the global properties of the scene.
Full Article
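The mechanism the abstract proposes, noticing *that* something changed by tracking global scene statistics without knowing *what* or *where*, is easy to sketch in code. Everything below is an invented toy (the images, thresholds, and choice of statistics), not the paper's actual analysis:

```python
import numpy as np

def global_signatures(img):
    """Summarise a greyscale image by two global properties: mean
    luminance and total spectral power (a crude stand-in for the
    distribution of spatial frequencies)."""
    power = np.abs(np.fft.fft2(img)) ** 2
    return img.mean(), power.sum()

def change_detected(img_a, img_b, lum_tol=0.01, power_tol=0.05):
    """Flag a change whenever either global property shifts by more
    than a tolerance; note this says nothing about where the change is."""
    lum_a, pow_a = global_signatures(img_a)
    lum_b, pow_b = global_signatures(img_b)
    return (abs(lum_a - lum_b) > lum_tol * max(abs(lum_a), 1e-9)
            or abs(pow_a - pow_b) > power_tol * max(pow_a, 1e-9))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
altered = scene.copy()
altered[10:20, 10:20] += 0.5   # brighten one "object"

print(change_detected(scene, altered))  # True: the global stats shifted
print(change_detected(scene, scene))    # False: nothing changed
```

A large enough local change drags the global statistics with it, so the change is detected even though nothing in the code localises it.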

Filed under attention blindness visual awareness eye movements visual perception psychology neuroscience science

657 notes

Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane
Ever notice how Harry Potter’s T-shirt changes from a crewneck to a henley shirt in the “Order of the Phoenix,” or how in “Pretty Woman,” Julia Roberts’ croissant inexplicably morphs into a pancake? Don’t worry if you missed those continuity bloopers. Vision scientists at UC Berkeley and MIT have discovered an upside to the brain mechanism that can blind us to subtle visual changes in the movies and in the real world.
They’ve discovered a “continuity field” in which we visually merge together similar objects seen within a 15-second time frame, hence the previously mentioned jump from crewneck to henley goes largely unnoticed. Unlike in the movies, objects in the real world don’t spontaneously change from, say, a croissant to a pancake in a matter of seconds, so the continuity field is stabilizing what we see over time.
“The continuity field smoothes what would otherwise be a jittery perception of object features over time,” said David Whitney, associate professor of psychology at UC Berkeley and senior author of the study published today (March 30) in the journal Nature Neuroscience.
“Essentially, it pulls together physically but not radically different objects to appear more similar to each other,” Whitney added. “This is surprising because it means the visual system sacrifices accuracy for the sake of the continuous, stable perception of objects.”  
Conversely, without a continuity field, we may be hypersensitive to every visual fluctuation triggered by shadows, movement and myriad other factors. For example, faces and objects would appear to morph from moment to moment in an effect similar to being on hallucinogenic drugs, researchers said.
“The brain has learned that the real world usually doesn’t change suddenly, and it applies that knowledge to make our visual experience more consistent from one moment to the next,” said Jason Fischer, a postdoctoral fellow at MIT and lead author of the study, which he conducted while he was a Ph.D. student in Whitney’s lab at UC Berkeley.
To establish the existence of a continuity field, the researchers had study participants view a series of bars, or gratings, on a computer screen. The gratings appeared at random angles once every five seconds.
Participants were instructed to adjust the angle of a white bar so that it matched the angle of each grating they just viewed. They repeated this task with hundreds of gratings positioned at different angles. The researchers found that instead of precisely matching the orientation of the grating, participants averaged out the angle of the three most recently viewed gratings.
“Even though the sequence of images was random, participants’ perception of any given image was biased strongly toward the past several images that came before it,” said Fischer, who calls this phenomenon “perceptual serial dependence.”
In another experiment, researchers set the gratings far apart on the computer screen, and found that the participants did not merge together the angles when the objects were far apart. This suggests that the objects must be close together for the continuity effect to work.
For a comedic example of how we might see things if there were no continuity field, watch the commercial for MIO squirt juice.
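The averaging behaviour described above can be simulated in a few lines. The mixing weights below are invented for illustration (the study reports a bias toward roughly the last three gratings, not these exact numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented mixing weights: the report is dominated by the current
# grating but pulled toward the two shown before it.
weights = np.array([0.6, 0.25, 0.15])   # current, previous, two back

true_angles = rng.uniform(-20.0, 20.0, size=200)  # degrees (small range, so plain averaging is fine)
reported = true_angles.copy()
for i in range(2, len(true_angles)):
    recent = np.array([true_angles[i], true_angles[i - 1], true_angles[i - 2]])
    reported[i] = weights @ recent

errors = reported[2:] - true_angles[2:]
# The signature of serial dependence: errors correlate with the angle
# of the *previous* grating, even though the sequence is random.
r = np.corrcoef(errors, true_angles[1:-1])[0, 1]
print(f"mean absolute error: {np.abs(errors).mean():.1f} deg, "
      f"correlation with previous angle: r = {r:.2f}")
```

Even with random inputs, each simulated report is pulled toward the recent past, which is exactly the bias the participants showed.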

Filed under visual perception continuity field visual system perceptual serial dependence neuroscience science

280 notes

The circadian clock is like an orchestra with many conductors
You’ve switched to the night shift and your weight skyrockets, or you wake at 7 a.m. on weekdays but sleep until noon on weekends—a social jet lag that can fog your Saturday and Sunday.
Life runs on rhythms driven by circadian clocks, and disruption of these cycles is associated with serious physical and emotional problems, says Orie Shafer, a University of Michigan assistant professor of molecular, cellular and developmental biology.
Now, new findings from Shafer and U-M doctoral student Zepeng Yao challenge the prevailing wisdom about how our body clocks are organized, and suggest that interactions among neurons that govern circadian rhythms are more complex than originally thought.
Yao and Shafer looked at the circadian clock neuron network in fruit flies, which is functionally similar to that of mammals but, at only 150 clock neurons, much simpler. Previously, scientists thought that a master group of eight clock neurons acted as a pacemaker for the remaining 142 clock neurons—think of a conductor leading an orchestra—thus imposing the rhythm of the fruit fly circadian clock. The same principle was thought to apply to mammals.
Interactions among clock neurons determine the strength and speed of circadian rhythms, Yao says. So, when researchers genetically changed the clock speeds of only the group of eight master pacemakers they could examine how well the conductor alone governed the orchestra. They found that without the environmental cues, the orchestra didn’t follow the conductor as closely as previously thought.
Some of the fruit flies completely lost their sense of time, and others simultaneously demonstrated two different sleep cycles, one following the group of eight neurons and the other following some other set of neurons.
"The finding shows that instead of the entire orchestra following a single conductor, part of the orchestra is following a different conductor or not listening at all," Shafer said.
The findings suggest that instead of a group of master pacemaker neurons, the clock network consists of many independent clocks, each of which drives rhythms in activity. Shafer and Yao suspect that a similar organization will be found in mammals, as well.
"A better understanding of the circadian clock mechanisms will be critical for attempts to alleviate the adverse effects associated with circadian disorders," Yao said.
Disrupting the circadian clock through shift work is associated with diabetes, obesity, stress, heart disease, mood disorders and cancer, among other disorders, Yao says. The International Agency for Research on Cancer classified shift work that disrupts circadian rhythms as a human carcinogen equal to cancer-causing ultraviolet radiation.
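The conductor metaphor lends itself to a toy simulation. The model below is a standard coupled-phase-oscillator sketch, not the study's actual biology, and the frequencies and coupling strengths are invented. It shows the qualitative finding: a sped-up pacemaker only entrains the rest of the network when the coupling between groups is strong enough.

```python
import numpy as np

def follower_frequency(coupling, steps=20000, dt=0.01):
    """One 'conductor' group genetically sped up to 1.3, three follower
    groups at their natural 1.0; each group's phase is pulled toward
    the conductor's with the given coupling strength."""
    omega = np.array([1.3, 1.0, 1.0, 1.0])
    theta = np.zeros(4)
    for _ in range(steps):
        pull = coupling * np.sin(theta[0] - theta)  # attraction toward conductor's phase
        theta = theta + (omega + pull) * dt
    return theta[-1] / (steps * dt)  # mean frequency of one follower group

print(f"strong coupling: {follower_frequency(1.0):.2f}")   # follower locks to the fast conductor (~1.3)
print(f"weak coupling:   {follower_frequency(0.05):.2f}")  # follower keeps its own rhythm (~1.0)
```

With weak coupling, part of the "orchestra" simply stops following the conductor, much like the flies that ran on a rhythm of their own.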

Filed under circadian rhythms fruit flies clock neurons sleep cycle psychology neuroscience science

133 notes

Researchers Close In On The Most Important Question In Neuroscience With Fly Study
By scrutinizing the twists, turns, wiggles and squirms of 37,780 fruit fly larvae, neuroscientists have created an unprecedented view of how brain cells create behavior. The results, published March 27 in Science, draw direct connections between neurons and specific movements.
"Understanding how neural activity gives rise to behavior is the most important question in neuroscience," says neuroscientist Kay Tye of MIT, who was not involved in the research. The new study provides a way for scientists to start answering that question, she says. "I think this is a really important approach that’s going to be very influential."
Scientists led by Marta Zlatic of the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Va., took advantage of an existing set of specially mutated flies. In each animal, small groups of neurons, usually between 2 and 15 cells, were engineered to respond to blue light. By activating handfuls of neurons with light and analyzing videos of the resulting behaviors, the researchers systematically explored most of the 10,000 neurons in the Drosophila melanogaster larva’s brain.
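The logic of such a screen is simple to sketch. The line names, larva counts, and bend rates below are all invented; this just illustrates scoring a behavior against a control baseline for each genetic line:

```python
import random

random.seed(42)

# Hypothetical screen: activate a small neuron group in many larvae,
# score a behavior ("bend") from video, and flag lines whose rate
# clearly exceeds the unstimulated control baseline.
def bend_count(n_larvae, bend_rate):
    return sum(random.random() < bend_rate for _ in range(n_larvae))

control_rate = 0.10                       # spontaneous bends, no activation
lines = {"line_A": 0.10, "line_B": 0.60}  # pretend line_B's neurons drive bending

for name, rate in lines.items():
    rate_observed = bend_count(200, rate) / 200
    verdict = "hit" if rate_observed > 2 * control_rate else "no effect"
    print(name, verdict)
```

Repeated across thousands of lines and many behaviors, this kind of tally is what links small groups of neurons to specific movements.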
Read more

Filed under fruit flies neural activity neurons optogenetics neuroscience science

219 notes

Silicon-based probe microstructure could underpin safer neural implants

Neural probe arrays are expected to significantly benefit the lives of amputees and people affected by spinal cord injuries or severe neuromotor diseases. By providing a direct route of communication between the brain and artificial limbs, these arrays record and stimulate neurons in the cerebral cortex.

(Image caption: The compact neural probe array consists of a three-dimensional probe array, a custom 100-channel neural recording chip and a flexible polyimide polymer cable. Credit: A*STAR Institute of Microelectronics)

The need for neural probe arrays that are compact, reliable and deliver high performance has prompted researchers to use microfabrication techniques to manufacture probe arrays. Now, a team led by Ming-Yuan Cheng from the A*STAR Institute of Microelectronics, Singapore, has developed a three-dimensional probe array for chronic and long-term implantation in the brain. This array is compact enough to freely float along with the brain when implanted on the cortex.

The neural probe array needs to be implanted in the subarachnoid space of the brain, a narrow region of 1–2.5 millimeters in depth that lies between the pia mater and dura mater brain meninges. “A high-profile array may touch the skull and damage the tissue when relative micromotions occur between the brain and the probes,” explains Cheng. To avoid this problem, the array should be as thin as possible.

Read more

Filed under neural probe arrays neural implants prosthetics cerebral cortex neuroscience science

223 notes

Artificial intelligence lie detector
Wrongly accused and imprisoned for a crime you didn’t commit. It sounds like the plot to a generic crime thriller. However, this scenario does happen from time to time in the UK. From the Birmingham Six, falsely imprisoned for sixteen years, to the more recent case of Barri White, who was wrongly jailed for the murder of his girlfriend Rachel Manning, these situations can seem to the public like a tragic miscarriage of the criminal justice system.
However, what if you could stop these miscarriages of justice from happening? Imperial alumnus Dr James O’Shea, who graduated with a Bachelor of Science in Chemistry in 1976, has built a lie detector device called the ‘Silent Talker’ that he believes could help to improve criminal investigations.
While lie detector tests of any sort are not currently admissible evidence in British courts, Dr O’Shea believes Silent Talker could be an invaluable tool in helping law enforcement to focus their investigations.
Dr O’Shea says: “An original member of my team who helped to develop the Silent Talker was very close to the area where one of the attacks by the Yorkshire Ripper took place. She took an interest in the case and found that the Ripper had been interviewed and passed over several times by the police. If the police had Silent Talker back then, it may have helped them to determine that they needed to spend a little more time on this guy, and investigate his background more closely.”
Artificially intelligent
The Silent Talker consists of a digital video camera that is hooked up to a computer. It runs a series of programs called artificial neural networks. These are computational models that take their design from animals’ central nervous systems, acting like an autonomous ‘brain’ for the device.
The computer programming in the artificial brain is a type of artificial intelligence called machine learning. It enables Silent Talker to learn and recognise patterns in data so that it can constantly adapt and reprogram itself during an interview. This enables Silent Talker to build up an overall profile of the subject to identify when someone is lying or telling the truth.
But how does it know when someone is lying? The inventors of the device claim it’s written all over your face. The camera records the subject in an interview and the artificial brain identifies non-verbal ‘micro-gestures’ on people’s faces. These are unconscious responses that Silent Talker picks up on to determine if the interviewee is lying.
Examples of micro-gestures include signs of stress, mental strain and what psychologists call ‘duping delight’. This refers to the unconscious flash of a smile at the pleasure and thrill of getting away with telling a lie. Dr O’Shea says these ‘tells’ are extremely fine-grained and exceedingly difficult for the interviewee to have any control over.
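As an illustration of the kind of model involved, here is a toy two-layer neural network trained on synthetic "micro-gesture" features. This is not Silent Talker's architecture or data; the features, labels, and layer sizes are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 200 synthetic "interviews", 3 invented features per interview
# (say, blink rate, gaze shifts, smile flashes), labelled 1 = deceptive.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -1.0, 2.0]) > 0).astype(float)  # synthetic ground truth

# one hidden layer of 5 units, sigmoid output
W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(3000):                     # full-batch gradient descent on cross-entropy
    H = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()      # P(deceptive)
    grad_out = (p - y)[:, None] / len(y)  # dLoss/d(output logit)
    grad_h = (grad_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ grad_out); b2 -= lr * grad_out.sum(0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
acc = float(((p > 0.5) == (y > 0.5)).mean())
print(f"training accuracy: {acc:.2f}")
```

A real system would need far richer features and validation on held-out interviews, which is where figures like the 87 per cent quoted further on would come from.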
Coming to an interview near you
Dr O’Shea says the uses for such a device are numerous.
“One can imagine a near-future scenario in which your prospective employers are wearing Google Glasses, where every micro-gesture that ‘leaks’ from your face is a response that flashes by their eyes as ‘true’ or ‘false’ in real-time.”
While it does use the latest in computational techniques, Dr O’Shea says Silent Talker is not infallible. In tests to classify the micro-gestures as deceptive or non-deceptive, the Silent Talker has achieved an accuracy rate of 87 per cent.
However, this has not stopped prospective clients from clamouring for the device. Dr O’Shea and his colleagues have already been approached by security services about whether Silent Talker could be used to determine if people approaching a military checkpoint could be suicide bombers so that they can be eliminated before blowing up their target. The team’s answer has been a loud and emphatic ‘no’.
“In an ethical sense, such decisions should not be taken by a machine,” says Dr O’Shea.

Filed under AI lie detector machine learning silent talker ANNs pattern recognition technology neuroscience psychology science

347 notes

Facebook’s facial recognition software is now as accurate as the human brain, but what now?
Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos and, irrespective of lighting or angle, say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible if we didn’t mention the true power of facial recognition, which Facebook is surely investigating: tracking your face across the entirety of the web, and in real life as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.
The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning; basically, it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrow, etc). In this case, DeepFace consists of a bunch of neurons nine layers deep, and then a learning process that sees the creation of 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces.
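The verification task DeepFace is scored on (same face or not) reduces to comparing two embedding vectors. Here is a minimal sketch of that final step, with synthetic vectors standing in for the network's output; the dimensionality and threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    return cosine(emb_a, emb_b) > threshold

identity = rng.normal(size=128)                        # one person's "true" embedding
photo_1 = identity + rng.normal(scale=0.1, size=128)   # same face, new lighting/angle
photo_2 = identity + rng.normal(scale=0.1, size=128)
stranger = rng.normal(size=128)                        # an unrelated face

print(same_person(photo_1, photo_2))   # True: small perturbations keep similarity high
print(same_person(photo_1, stranger))  # False: independent vectors are nearly orthogonal
```

The hard part, of course, is the network that maps raw pixels to embeddings robust to lighting and angle; once that exists, the accuracy figures above come from thresholding a similarity score like this one.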
Read more

Filed under DeepFace facial recognition AI neural networks deep learning facebook technology neuroscience science

94 notes

(Figure 1: Fluorescent labeling reveals mossy fibers (red) projecting from the dentate gyrus (green) into the CA2 subregion (orange). Credit: Keigo Kohara, RIKEN–MIT Center for Neural Circuit Genetics)  
Novel combination of techniques reveals new details about the neuronal networks for memory
Learning and memory are believed to occur as a result of the strengthening of synaptic connections among neurons in a brain structure called the hippocampus. The hippocampus consists of five subregions, and a circuit formed between four of these is thought to be particularly important for memory formation. Keigo Kohara and colleagues from the RIKEN–MIT Center for Neural Circuit Genetics and RIKEN BioResource Center have now identified a previously unknown circuit involving the fifth subregion.
For a hundred years, memory research has typically focused on the main circuit, which projects from layer II of the entorhinal cortex via the dentate gyrus to subregion CA3 and then CA1. Subregion CA2 lies between CA3 and CA1 but its cells are less elaborate than those of its neighbors and were thought not to receive inputs from the dentate gyrus.
Kohara and his colleagues combined anatomical, genetic and physiological techniques to analyze the connections formed by neurons in the CA2 subregion of the hippocampus in unprecedented detail. First, they identified the CA2 subregion by examining the expression of three genes that encode proteins called RGS14, PCP4 and STEP, using a fluorescent marker to label nerve fibers—a technique called fluorescent immunohistochemistry. They were surprised to discover that, contrary to expectations, CA2 neurons receive extensive inputs from cells in the dentate gyrus (Fig. 1).
Read more

Filed under hippocampus dentate gyrus memory formation optogenetics fluorescent immunohistochemistry neuroscience science

141 notes

Researchers demonstrate information processing using a light-based chip inspired by our brain
In a recent paper in Nature Communications, researchers from Ghent University report on a novel paradigm to do optical information processing on a chip, using techniques inspired by the way our brain works.
Neural networks have been employed in the past to solve pattern recognition problems like speech recognition or image recognition, but so far, these bio-inspired techniques have been implemented mostly in software on a traditional computer. What the UGent researchers have done is implement a small (16-node) neural network directly in hardware, using a silicon photonics chip. Such a chip is fabricated using the same technology as traditional computer chips, but uses light rather than electricity as the information carrier. This approach has many benefits, including the potential for extremely high speeds and low power consumption.
The UGent researchers have experimentally shown that the same chip can be used for a large variety of tasks, like arbitrary calculations with memory on a bit stream or header recognition (an operation relevant in telecom networks: the header is an address indicating where the data needs to be sent). Additionally, simulations have shown that the same chip can perform a limited form of speech recognition, by recognising individual spoken digits (“one”, “two”, …).
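The approach behind the chip is known as reservoir computing: a fixed network of nodes mixes the input signal non-linearly over time, and only a simple linear readout is trained for each task. A minimal software sketch of that idea, on a toy "remember the previous bit" task, might look as follows; the node count matches the chip's 16, but the weights, the task, and all other parameters are illustrative assumptions, not the actual device.

```python
import numpy as np

# Minimal software sketch of reservoir computing: a fixed random network
# mixes the input stream; only a linear readout is trained per task.
rng = np.random.default_rng(1)
n_nodes = 16

# Fixed, random reservoir weights, rescaled so the dynamics stay stable.
W_res = rng.standard_normal((n_nodes, n_nodes))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))
W_in = rng.standard_normal(n_nodes)

def run_reservoir(bits):
    """Drive the fixed reservoir with a bit stream and collect its states."""
    x = np.zeros(n_nodes)
    states = []
    for b in bits:
        x = np.tanh(W_res @ x + W_in * b)  # untrained non-linear dynamics
        states.append(x.copy())
    return np.array(states)

bits = rng.integers(0, 2, 2000)
target = bits[:-1]               # toy task with memory: recall the previous bit
S = run_reservoir(bits)[1:]

# Only this linear readout is trained (ridge regression);
# the reservoir itself is never modified.
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_nodes),
                        S.T @ (2.0 * target - 1.0))
accuracy = np.mean((S @ W_out > 0).astype(int) == target)
print(f"recall accuracy: {accuracy:.2f}")
```

Because different tasks need only a different readout over the same fixed dynamics, one and the same reservoir, whether simulated or photonic, can be reused for bit-stream calculations, header recognition, or spoken-digit recognition.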

Filed under neural networks pattern recognition speech recognition neuroscience science

1,105 notes

A good trip: Researchers are giving psychedelics to cancer patients to help alleviate their despair — and it’s working
On a bone-chilling morning in February last year, Nick Fernandez bundled up and took the subway from his Manhattan apartment to the Bluestone Center for Clinical Research, which is located in an art deco-style building on the Upper East Side. A 27-year-old graduate student in psychology with dark, wavy hair and delicate, bird-like features, Fernandez was excited and nervous. He had eaten a light breakfast consisting of a bagel and industrial-strength coffee in preparation for another journey he was about to take. Fernandez had signed up to be a subject in a New York University study into the use of psilocybin, the psychoactive ingredient in hallucinogenic mushrooms, to relieve mental anguish in people with terminal or recurrent cancer.
Fernandez hoped that the drug would lift the shroud of melancholy and free-floating anxiety that had enveloped him ever since he was diagnosed with leukemia in 2004 during his senior year in high school. Two and a half years of almost continuous chemotherapy vanquished the disease, but left him drained and traumatised. The former soccer star dropped more than 50 lbs from an already lean frame. ‘It was pretty brutal and forces you to grow up fast,’ said Fernandez, who became intensely interested in spiritual philosophy during this period, and went on to dabble in psychedelics in college. For years afterward, every sneeze and sniffle, every day that he felt tired or out of sorts, filled him with an unshakeable dread that the cancer had returned. When he heard the study mentioned on a radio show, he immediately signed up.
Jeffrey Guss and Erin Zerbo, the two NYU psychiatrists who would quietly monitor Fernandez’s progress throughout the day, greeted him when he arrived. After they took his vital signs, Fernandez changed into sweat pants and a shirt, and settled into a converted dental exam room that had been transformed into a hippie-style sanctum: tricked out with fresh flowers and fruits, a comfy sofa littered with plush pillows, Buddhist and shamanistic totems, and a high-tech sound system. Stephen Ross, an associate professor of psychiatry at NYU and the lead investigator for the study, made a brief appearance in the trip room. He was holding a glass vial that had been retrieved earlier that morning from a massive safe located inside a high-security storage room. It contained a single white capsule, and no one could be sure if it was a placebo – a dummy pill – or a 30 milligram dose of synthesised psilocybin.
Read more


Filed under psilocybin psychoactive drugs psychedelics cancer psychology neuroscience science
