Neuroscience

Articles and news from the latest research reports.

Posts tagged visual system

231 notes

Despite what you may think, your brain is a mathematical genius

The irony of getting away to a remote place is you usually have to fight traffic to get there. After hours of dodging dangerous drivers, you finally arrive at that quiet mountain retreat, stare at the gentle waters of a pristine lake, and congratulate your tired self on having “turned off your brain.”

"Actually, you’ve just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at of the Salk Institute and an expert on how the visual system works. "You may think you’re resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment-what objects are in it, are they moving, and if so, how fast are they moving?

The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk’s Vision Center Laboratory. “It’s a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?”

Albright, Gepshtein and Luis A. Lesmes, a specialist in measuring human performance who was formerly a Salk Institute postdoctoral researcher and is now at the Schepens Eye Research Institute, proposed an answer to the question in a recent issue of the Proceedings of the National Academy of Sciences. It may reconcile the puzzling contradictions in many previous studies.

Previously, scientists expected that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on that lake. Yet those who tried to confirm that idea were surprised when their experiments produced contradictory results. “Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus,” says Albright, holder of Salk’s Conrad T. Prebys Chair in Vision Research.

The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from a system’s perspective?

It turns out something’s got to give.

"It’s as if the brain’s on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you’re attuned to high speeds, you’ll be able to see faster moving things that you couldn’t see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."

Summing up, Albright says, “Simply put, it’s a tradeoff: The price of getting better at one thing is getting worse at another.”

Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician’s point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to a method of signal processing known as the Gabor transform, which is used to extract features in both the spatial and temporal domains.

Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn’t his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.

"Gabor proved that measurements of two fundamental properties of a signal-its location and frequency content-are not independent of one another," says Gepshtein.

The location of a signal is simply that: where the signal is at a given point in time. The content – the “what” of a signal – is “written” in the language of frequencies and is a measure of the amount of variation, such as the different shades of gray in a photograph.

The challenge comes when you’re trying to measure both location and frequency, because location is more accurately determined in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).
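
To make the tradeoff concrete, here is a small numpy sketch (the signal and window lengths are invented for the demo, not taken from the study): it estimates the frequency of a 40 Hz tone from a short and a long analysis window. The longer window pins down the frequency far more precisely – the song-guessing effect described above – while the shorter window says more precisely when the signal occurred.

    import numpy as np

    fs = 1000.0                             # sampling rate, Hz
    t = np.arange(0.0, 1.0, 1.0 / fs)       # one second of signal
    signal = np.sin(2 * np.pi * 40.0 * t)   # a 40 Hz "stimulus"

    for window_len in (0.05, 0.5):          # seconds: short vs. long window
        n = int(window_len * fs)
        windowed = signal[:n] * np.hanning(n)
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        peak = freqs[np.argmax(spectrum)]
        # Frequency resolution scales as ~1/window_len: the short window
        # localizes the signal in time but smears its frequency, and vice versa.
        print(f"window {window_len:.2f} s -> peak {peak:.1f} Hz, "
              f"resolution ~{1.0 / window_len:.1f} Hz")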

The obvious answer is that you’re stuck with a compromise: you can get a precise measurement of one or the other, but not both. But how can you be sure you’ve come up with the best possible compromise? Gabor’s answer was what has become known as a “Gabor filter,” which helps obtain the most precise joint measurement possible of both quantities. Our brains employ a similar strategy, says Gepshtein.

"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."

In essence, the first stages of the visual process act like a filter. “It describes which stimuli get in, and which do not,” Gepshtein says. “When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in.”

"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces - the gains and losses - add up to a coherent pattern."

From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically – all of this processing occurs whether or not you consciously pay attention to the change in scene.

Yet, while the adaptation happens automatically, it does not appear to happen instantaneously. Their current experiments take approximately thirty minutes to conduct, but the scientists believe the adaptation may take less time in nature.

(Image: Gary Meader)

Filed under brain visual system visual adaptation signal processing neuroscience science

167 notes

Researchers identify new vision of how we explore our world

Brain researchers at Barrow Neurological Institute have discovered that we explore the world with our eyes in a different way than previously thought. Their results advance our understanding of how healthy observers and neurological patients interact and glean critical information from the world around them.

The research team was led by Dr. Susana Martinez-Conde, Director of the Laboratory of Visual Neuroscience at Barrow, in collaboration with fellow Barrow Neurological Institute researchers Jorge Otero-Millan, Rachel Langston, and Dr. Stephen Macknik, Director of the Laboratory of Behavioral Neurophysiology. The study, titled “An oculomotor continuum from exploration to fixation”, was published in the Proceedings of the National Academy of Sciences.

Previously, scientists thought that we sample visual information from the world in two main modes: exploration and fixation. “We used to think that we make large eye movements to search for objects of interest, and then fix our gaze to see them with high detail,” says Martinez-Conde. “But now we know that’s not quite right.”

The discovery shows that even during visual fixation, we are actually scanning visual details with small eye movements — just like we explore visual scenes with big eye movements, but on a smaller scale. This means that exploration and fixation are two ends of the same continuum of oculomotor scanning.

Subjects viewed natural images while the team measured their eye movements with high-speed eye tracking. The images ranged in size from the massive, presented on a room-sized video monitor in the Barrow Neurological Institute’s Eller Telepresence Room (normally used by Barrow’s surgeons to collaborate on brain surgeries with colleagues around the world), to images just half the width of your thumbnail.

In all cases, the researchers found that subjects’ eyes scanned the scenes with the same general strategy, along a smooth continuum of dynamical changes. “There was no abrupt change in the characteristics of the eye movements, whether the visual scenes were huge or tiny, or even when the subjects were fixing their gaze. That means that the brain controls eye movements in the same way when we explore and when we fixate,” said Dr. Martinez-Conde.
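
One common way to find saccades in such recordings, whether large or tiny, is a velocity threshold. The sketch below applies the same rule to a small simulated gaze trace containing both a microsaccade and a large saccade; the data and the 30 deg/s threshold are invented for illustration and are not the team’s actual analysis code.

    import numpy as np

    fs = 500.0                                       # tracker sampling rate, Hz
    rng = np.random.default_rng(0)
    gaze = np.cumsum(rng.normal(0.0, 0.002, 2000))   # fixational drift, degrees
    gaze[800:] += 0.3                                # a microsaccade-sized jump
    gaze[1500:] += 5.0                               # a large exploratory saccade

    velocity = np.abs(np.gradient(gaze)) * fs        # deg/s
    is_fast = velocity > 30.0                        # same rule at every scale

    # Pair up threshold crossings into (start, stop) events.
    edges = np.flatnonzero(np.diff(is_fast.astype(int))).reshape(-1, 2)
    for start, stop in edges:
        amplitude = abs(gaze[stop] - gaze[start])
        print(f"saccade near sample {start}: amplitude {amplitude:.2f} deg")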

Scientists have studied how the brain controls eye movements for over 100 years, and the idea, challenged here, that fixation and exploration are fundamentally different behaviors has been central to the field. This new perspective will affect future research and bring focus to the study of neurological diseases that impact oculomotor behavior.

(Image: Getty Images)

Filed under vision visual system visual fixation visual exploration eye movements neuroscience science

184 notes

Hallucinations of musical notation: new paper for neurology journal Brain by Oliver Sacks

Professor of neurology, physician, and author Oliver Sacks M.D. has outlined case studies of hallucinations of musical notation, and commented on the neural basis of such hallucinations, in a new paper for the neurology journal Brain.

In this paper, Dr Sacks builds on work done by Dominic ffytche et al. in 2000, which delineated more than a dozen types of hallucination, particularly in relation to people with Charles Bonnet syndrome (a condition that causes patients with visual loss to have complex visual hallucinations). While ffytche believes that hallucinations of musical notation are rarer than some other types of visual hallucination, Sacks says that his own experience is different.

“Perhaps because I have investigated various musical syndromes,” writes Dr Sacks, “and people often write to me about these… I have seen or corresponded with a dozen or more people whose hallucinations include – and sometimes consist exclusively of – musical notation.”

Sacks goes on to detail eight fascinating case studies of people who have reported experiencing hallucinations of musical notation, including:

  • A 77-year-old woman with glaucoma who wrote of her “musical eyes”. She saw “music, lines, spaces, notes, clefs – in fact written music on everything [she] looked at.”
  • A surgeon and pianist suffering from macular degeneration, who saw unreadable and unplayable music on a white background.
  • A Sanskrit scholar who developed Parkinson’s disease in his 60s and later reported hallucinating ornately written music accompanied by Sanskrit script. “Despite the exotic nature of the script the result is still western music,” he said.
  • A woman who reported seeing musical notation on her ceiling upon waking in the morning.
  • A woman who said she wasn’t a musician, but would hallucinate when she had high fevers as a child. She said that the notes were “angry, and [she] felt unease. The lines and notes were out of control and at times in a ball.”

It is striking that, of Dr Sacks’ eight case studies, seven were gifted musicians. Sacks comments, “This is perhaps a coincidence, but it makes one wonder whether there is something about musical scores that is radically different from verbal texts.” Musical scores are far more visually complex than standard (English) text, with not just a variety of notes, but also many symbols that indicate how the notes should be played.

Dr Sacks also says that he has a mild form of Charles Bonnet syndrome himself, in which he sees a variety of simple forms whenever he gazes at a blank surface. “When I recently returned to playing the piano and to studying scores minutely, I began to ‘see’ showers of flat signs along with the letters and runes on blank surfaces.”

Another striking feature of these hallucinations is that – like text hallucinations – they are generally unreadable. They can seem playable at first, but on closer inspection it transpires that the music is often nonsensical or impossible to play, as in one reported case: a melody line three or more octaves above middle C, requiring half a dozen or more ledger lines above the treble staff.

Usually, the early visual system analyses forms and sends the information it has extracted to higher areas, where it gains coherence and meaning. Normally, in the act of perception, the entire visual system is engaged. Paradoxically, according to Sacks, “one may have to study disorders of the visual system to see how complex perceptual and cognitive processes are analysed and delegated to different levels… and hallucinations of musical notation can provide a very rich field of study here.”

Filed under hallucinations music musical notation Charles Bonnet syndrome Oliver Sacks visual system neurology neuroscience science

131 notes

Neanderthal brains focussed on vision and movement

Neanderthal brains were adapted to allow them to see better and maintain larger bodies, according to new research by the University of Oxford and the Natural History Museum, London.

Although Neanderthals’ brains were similar in size to their contemporary modern human counterparts, fresh analysis of fossil data suggests that their brain structure was rather different. Results imply that larger areas of the Neanderthal brain, compared to the modern human brain, were given over to vision and movement and this left less room for the higher level thinking required to form large social groups.

The analysis was conducted by Eiluned Pearce and Professor Robin Dunbar at the University of Oxford and Professor Chris Stringer at the Natural History Museum, London, and is published in the online version of the journal, Proceedings of the Royal Society B.

Looking at data from 27,000–75,000-year-old fossils, mostly from Europe and the Near East, they compared the skulls of 32 anatomically modern humans and 13 Neanderthals to examine brain size and organisation. In a subset of these fossils, they found that Neanderthals had significantly larger eye sockets, and therefore eyes, than modern humans.

The researchers standardised the size of the fossil brains for body mass and visual processing requirements. Once the differences in body and visual system size were taken into account, they were able to compare how much of the brain was left over for other cognitive functions.
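
In spirit, that standardising step works like the least-squares sketch below: regress brain volume on body mass and orbit size, then treat the residuals as the brain capacity “left over” for other functions. All numbers here are invented placeholders, not the study’s measurements.

    import numpy as np

    # Rows: individual fossils; columns: body mass (kg), orbit volume (cc).
    predictors = np.array([[65.0, 28.0], [70.0, 30.0], [75.0, 31.0],   # "Neanderthal"
                           [55.0, 24.0], [60.0, 25.0], [58.0, 23.0]])  # "modern human"
    brain_cc = np.array([1500.0, 1520.0, 1550.0, 1450.0, 1480.0, 1460.0])
    group = np.array(["N", "N", "N", "H", "H", "H"])

    # Ordinary least squares with an intercept: brain ~ body mass + orbit size.
    X = np.column_stack([np.ones(len(brain_cc)), predictors])
    coef, *_ = np.linalg.lstsq(X, brain_cc, rcond=None)
    residuals = brain_cc - X @ coef   # brain volume not explained by body/eyes

    for g in ("N", "H"):
        print(g, "mean residual brain volume:", round(residuals[group == g].mean(), 1))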

Previous research by the Oxford scientists shows that modern humans living at higher latitudes evolved bigger vision areas in the brain to cope with the low light levels. This latest study builds on that research, suggesting that Neanderthals probably had larger eyes than contemporary humans because they evolved in Europe, whereas contemporary humans had only recently emerged from lower latitude Africa.

'Since Neanderthals evolved at higher latitudes and also have bigger bodies than modern humans, more of the Neanderthal brain would have been dedicated to vision and body control, leaving less brain to deal with other functions like social networking,' explains lead author Eiluned Pearce from the Institute of Cognitive and Evolutionary Anthropology at the University of Oxford.

‘Smaller social groups might have made Neanderthals less able to cope with the difficulties of their harsh Eurasian environments because they would have had fewer friends to help them out in times of need. Overall, differences in brain organisation and social cognition may go a long way towards explaining why Neanderthals went extinct whereas modern humans survived.’

'The large brains of Neanderthals have been a source of debate from the time of the first fossil discoveries of this group, but getting any real idea of the “quality” of their brains has been very problematic,' says Professor Chris Stringer, Research Leader in Human Origins at the Natural History Museum and co-author on the paper. 'Hence discussion has centred on their material culture and supposed way of life as indirect signs of the level of complexity of their brains in comparison with ours.

'Our study provides a more direct approach by estimating how much of their brain was allocated to cognitive functions, including the regulation of social group size; a smaller size for the latter would have had implications for their level of social complexity and their ability to create, conserve and build on innovations.'

Professor Robin Dunbar observes: ‘Having less brain available to manage the social world has profound implications for the Neanderthals’ ability to maintain extended trading networks, and are likely also to have resulted in less well developed material culture – which, between them, may have left them more exposed than modern humans when facing the ecological challenges of the Ice Ages.’

The relationship between absolute brain size and higher cognitive abilities has long been controversial, and this new study could explain why Neanderthal culture appears less developed than that of early modern humans, for example in relation to symbolism, ornamentation and art.

Filed under brain Neanderthals brain structure cognitive functions visual system neuroscience psychology evolution science

562 notes

Steve Mann: My “Augmediated” Life - What I’ve learned from 35 years of wearing computerized eyewear

Back in 2004, I was awakened early one morning by a loud clatter. I ran outside, only to discover that a car had smashed into the corner of my house. As I went to speak with the driver, he threw the car into reverse and sped off, striking me and running over my right foot as I fell to the ground. When his car hit me, I was wearing a computerized-vision system I had invented to give me a better view of the world. The impact and fall injured my leg and also broke my wearable computing system, which normally overwrites its memory buffers and doesn’t permanently record images. But as a result of the damage, it retained pictures of the car’s license plate and driver, who was later identified and arrested thanks to this record of the incident.

Was it blind luck (pardon the expression) that I was wearing this vision-enhancing system at the time of the accident? Not at all: I have been designing, building, and wearing some form of this gear for more than 35 years. I have found these systems to be enormously empowering. For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me.
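
Mann helped pioneer this kind of multi-exposure compositing. The toy fusion below – a weighted average that favours well-exposed pixels – is only a simplified stand-in for his actual high-dynamic-range processing, with made-up pixel values.

    import numpy as np

    def fuse(exposures):
        """Blend same-scene images shot at different exposures (values in 0..1)."""
        stack = np.stack(exposures)
        # Weight each pixel by how far it is from crushed black (0) or blown-out
        # white (1), so well-exposed pixels dominate the blend.
        weights = np.clip(1.0 - np.abs(stack - 0.5) * 2.0, 1e-3, None)
        return (weights * stack).sum(axis=0) / weights.sum(axis=0)

    # Toy example: a row of pixels from a dark street with a blinding headlight.
    scene = np.array([0.02, 0.05, 0.90, 0.95])
    under = np.clip(scene * 0.5, 0.0, 1.0)   # short exposure keeps the headlight
    over = np.clip(scene * 4.0, 0.0, 1.0)    # long exposure reveals the shadows
    print(fuse([under, over]))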

I’ve built dozens of these systems, which improve my vision in multiple ways. Some versions can even take in other spectral bands. If the equipment includes a camera that is sensitive to long-wavelength infrared, for example, I can detect subtle heat signatures, allowing me to see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off. Other versions enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in languages I don’t know.

Believe me, after you’ve used such eyewear for a while, you don’t want to give up all it offers. Wearing it, however, comes with a price. For one, it marks me as a nerd. For another, the early prototypes were hard to take on and off. These versions had an aluminum frame that wrapped tightly around the wearer’s head, requiring special tools to remove.

Filed under vision visual system computerized eyewear augmented reality technology science

98 notes

Ectopic Eyes Function Without Connection to Brain

For the first time, scientists have shown that transplanted eyes located far outside the head in a vertebrate animal model can confer vision without a direct neural connection to the brain.

Biologists at Tufts University School of Arts and Sciences used a frog model to shed new light – literally – on one of the major questions in regenerative medicine, bioengineering, and sensory augmentation research.

"One of the big challenges is to understand how the brain and body adapt to large changes in organization," says Douglas J. Blackiston, Ph.D., first author of the paper "Ectopic Eyes Outside the Head in Xenopus Tadpoles Provide Sensory Data For Light-Mediated Learning," in the February 27 issue of the Journal of Experimental Biology. “Here, our research reveals the brain’s remarkable ability, or plasticity, to process visual data coming from misplaced eyes, even when they are located far from the head.”

Blackiston is a post-doctoral associate in the laboratory of co-author Michael Levin, Ph.D., professor of biology and director of the Center for Regenerative and Developmental Biology at Tufts University.

Levin notes, “A primary goal in medicine is to one day be able to restore the function of damaged or missing sensory structures through the use of biological or artificial replacement components. There are many implications of this study, but the primary one from a medical standpoint is that we may not need to make specific connections to the brain when treating sensory disorders such as blindness.”

In this experiment, the team surgically removed donor embryo eye primordia, marked with fluorescent proteins, and grafted them into the posterior region of recipient embryos. This induced the growth of ectopic eyes. The recipients’ natural eyes were removed, leaving only the ectopic eyes.

Fluorescence microscopy revealed various innervation patterns but none of the animals developed nerves that connected the ectopic eyes to the brain or cranial region.

To determine if the ectopic eyes conveyed visual information, the team developed a computer-controlled visual training system in which quadrants of water were illuminated by either red or blue LED lights. The system could administer a mild electric shock to tadpoles swimming in a particular quadrant. A motion tracking system outfitted with a camera and a computer program allowed the scientists to monitor and record the tadpoles’ motion and speed.
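
As a rough sketch of that contingency, the loop below illuminates quadrants, checks where the tadpole is, and shocks only under the punished colour. Every name and parameter here is invented; the real rig used cameras, LEDs and tracking software.

    import random
    import time

    def assign_colors():
        """Light two quadrants red and two blue (an invented layout)."""
        colors = ["red", "red", "blue", "blue"]
        random.shuffle(colors)
        return colors

    def training_trial(get_quadrant, deliver_shock, duration_s=60.0, step_s=1.0):
        """Punish presence in red-lit quadrants for one training trial."""
        colors = assign_colors()
        elapsed = 0.0
        while elapsed < duration_s:
            if colors[get_quadrant()] == "red":
                deliver_shock()
            time.sleep(step_s)
            elapsed += step_s

    # Dry run with stand-ins for the camera and the shock hardware:
    training_trial(get_quadrant=lambda: random.randrange(4),
                   deliver_shock=lambda: print("mild shock delivered"),
                   duration_s=3.0, step_s=0.5)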

Eyes See Without Wiring to Brain

The team made exciting discoveries: Just over 19 percent of the animals with optic nerves that connected to the spine demonstrated learned responses to the lights. They swam away from the red light while the blue light stimulated natural movement.

Their response to the lights elicited during the experiments was no different from that of a control group of tadpoles with natural eyes intact. Furthermore, this response was not demonstrated by eyeless tadpoles or tadpoles that did not receive any electrical shock.

"This has never been shown before," says Levin. "No one would have guessed that eyes on the flank of a tadpole could see, especially when wired only to the spinal cord and not the brain."
The findings suggest a remarkable plasticity in the brain’s ability to incorporate signals from various body regions into behavioral programs that had evolved with a specific and different body plan.

"Ectopic eyes performed visual function," says Blackiston. "The brain recognized visual data from eyes that impinged on the spinal cord. We still need to determine if this plasticity in vertebrate brains extends to different ectopic organs or organs appropriate in different species."

One of the most fascinating areas for future investigation, according to Blackiston and Levin, is the question of exactly how the brain recognizes that the electrical signals coming from tissue near the gut are to be interpreted as visual data.

In computer engineering, notes Levin, who majored in computer science and biology as a Tufts undergraduate, this problem is usually solved by a “header”—a piece of metadata attached to a packet of information that indicates its source and type. Whether electric signals from eyes impinging on the spinal cord carry such an identifier of their origin remains a hypothesis to be tested.
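
In code, Levin’s analogy looks something like the illustrative class below. Whether neural signals carry any such “source and type” metadata is exactly the open hypothesis the article describes; the class and its field names are purely hypothetical.

    from dataclasses import dataclass

    @dataclass
    class SensoryPacket:
        source: str     # e.g. "ectopic_eye_left_flank" (invented identifier)
        modality: str   # the 'type' field of the header, e.g. "visual"
        payload: bytes  # the signal itself

    def route(packet: SensoryPacket) -> str:
        """Dispatch on the header, not on where the wire happens to arrive."""
        destinations = {"visual": "visual processing",
                        "tactile": "somatosensory processing"}
        return destinations.get(packet.modality, "unknown")

    print(route(SensoryPacket("ectopic_eye_left_flank", "visual", b"\x01\x02")))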

Filed under animal model visual system brain plasticity ectopic eyes regenerative medicine neuroscience science

70 notes

Professional athletes have extraordinary skills for rapidly learning complex and neutral dynamic visual scenes

Evidence suggests that an athlete’s sports-related perceptual-cognitive expertise is a crucial element of top-level competitive sports. When directly assessing whether such experience-related abilities correspond to fundamental and non-specific cognitive laboratory measures such as processing speed and attention, studies have shown moderate effects leading to the conclusion that their special abilities are context-specific. We trained 308 observers on a complex dynamic visual scene task void of context and motor control requirements and demonstrate that professionals as a group dramatically differ from high-level amateur athletes, who dramatically differ from non-athlete university students in their capacity to learn such stimuli. This demonstrates that a distinguishing factor explaining the capacities of professional athletes is their ability to learn how to process complex dynamic visual scenes. This gives us an insight as to what is so special about the elite athletes’ mental abilities, which allows them to express great prowess in action.

Full article

(Image: Getty)

Filed under professional athletes visual system motion perception perception performance psychology neuroscience

254 notes

Researchers Find Causality in the Eye of the Beholder

We rely on our visual system more heavily than previously thought in determining the causality of events. A team of researchers has shown that, in making judgments about causality, we don’t always need to use cognitive reasoning. In some cases, our visual brain—the brain areas that process what the eyes sense—can make these judgments rapidly and automatically.

The study appears in the latest issue of the journal Current Biology.

“Our study reveals that causality can be computed at an early level in the visual system,” said Martin Rolfs, who conducted much of the research as a post-doctoral fellow in NYU’s Department of Psychology. “This finding ends a long-standing debate over how some visual events are processed: we show that our eyes can quickly make assessments about cause-and-effect—without the help of our cognitive systems.”

Rolfs is currently a research group leader at the Bernstein Center for Computational Neuroscience and the Department of Psychology of Berlin’s Humboldt University. The study’s other co-authors were Michael Dambacher, post-doctoral researcher at the universities of Potsdam and Konstanz, and Patrick Cavanagh, professor at Université Paris Descartes.

We frequently make rapid judgments of causality (“The ball knocked the glass off the table”), animacy (“Look out, that thing is alive!”), or intention (“He meant to help her”). These judgments are complex enough that many believe that substantial cognitive reasoning is required—we need our brains to tell us what our eyes have seen. However, some judgments are so rapid and effortless that they “feel” perceptual – we can make them using only our visual systems, with no thinking required.

It is not yet clear which judgments require significant cognitive processing and which may be mediated solely by our visual system. In the Current Biology study, the researchers investigated one of these—causality judgments—in an effort to better understand the division of labor between visual and cognitive processes.

Filed under visual system cognitive reasoning causality cognitive systems neuroscience science

92 notes

Filed under bipolar cells retina spikes visual system neuron ganglion cells neuroscience science

53 notes


Sharks see world as 50 shades of grey
Sharks are colour blind, a new molecular study by Australian scientists has confirmed, filling a gap in our knowledge about the evolution of colour vision. Dr Susan Theiss of the University of Queensland and colleagues report their findings in the journal Biology Letters.
The evolution of colour vision has been studied in most vertebrate groups, but until recently elasmobranchs (sharks, skates and rays) had been overlooked. Previous physiological research had shown that some rays have colour vision, but suggested that sharks are colour blind.
These previous studies looked at opsins, which are light-sensitive proteins found in the photoreceptor cells of the retina. Rod opsins are used in low light and produce a black-and-white image, while cone opsins are used in bright light, often to see colours. Two or more different types of cone opsin are needed for colour vision, for the reason illustrated in the sketch after this post.
While some ray species have multiple cone opsins as well as rods, studies in various shark species suggested they had only a single cone visual pigment.
To check whether this really was the case, Theiss and colleagues isolated the visual opsin genes from two wobbegong shark species: the spotted wobbegong Orectolobus maculatus and the ornate wobbegong O. ornatus.
Their findings confirm that wobbegongs possess only one cone opsin, meaning they see the world in shades of grey. The findings help fill in the picture of how colour vision evolved in different species.
"We know the earliest vertebrates had colour vision, but it has been lost by some groups over the course of evolution," says co-author Associate Professor Nathan Hart, a neuroecologist at the University of Western Australia.

Filed under vision visual system color vision color blind sharks evolution neuroscience science
