Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

74 notes

Futurist Ray Kurzweil believes that the cloud will help expand the capacity of the human brain beyond its current limitations.

Futurist and author Ray Kurzweil predicts the cloud will eventually do more than store our emails or feed us streaming movies on demand: it’s going to help expand our brain capacity beyond its current limits.

In a question-and-answer session following a speech at the DEMO technology conference in Santa Clara, California, last week, Kurzweil described the human brain as impressive but limited in its capacity to hold information. “By the time we’re even 20, we’ve filled it up,” he said, adding that the only way to add information after that point is to “repurpose our neocortex to learn something new.” (Computerworld has posted the full video of the talk.)

The solution to overcoming the brain’s limitations, he added, involves “basically expanding our brains into the cloud.”

Kurzweil is one of the more prominent advocates of the technological Singularity, or the idea that computers will become super-intelligent and self-replicating, essentially reducing human progress to a sideshow. He is an optimist in this scenario, arguing in talks and books that the Singularity will effectively make humanity immortal by allowing us to transfer our consciousness into non-organic systems.

Filed under brain brain limitations technology singularity Ray Kurzweil computer science neuroscience science

5 notes


Worldwide patent for a Spanish stroke rehabilitation robot

Robotherapist 3D, a robot which aids stroke patients’ recovery, is to be brought to market by its worldwide patent holder, a spin-off company from the Miguel Hernández University of Elche (Alicante, Spain). It is the first robot to enable patients to start doing exercises while supine, allowing them to begin shortly after the stroke and expediting recovery.

The company, a leader in this field in Spain, already has two robots: Robotherapist 2D and Robotherapist 3D. For the latter, it has a worldwide patent. Both are actuated by pneumatic technology and have been designed to improve arm movement in stroke patients.

According to the researcher, Robotherapist 2D is a planar robot which allows movement in two dimensions and includes sensors to determine the patient’s condition and a sound feedback system. “With this robot, certain tasks are carried out. The patient’s arm is moved parallel to the table: to the right, to the left and in a straight line. They are exercises to improve coordination,” he says.

Filed under neuroscience robotherapist robotics robots stroke stroke rehabilitation technology science

141 notes


A wireless low-power, high-quality EEG headset

Imec, Holst Centre and Panasonic have developed a new prototype of a wireless EEG (electroencephalogram, or brain waves) headset designed to be a reliable, high-quality and wearable EEG monitoring system.

The system combines ease of use with ultra-low-power electronics. Continuous impedance monitoring and the use of active electrodes increase the quality of EEG signal recording compared to former versions of the system.

How it works

The EEG data is transmitted to a receiver located up to 10 meters away. The headset integrates active electrodes (which reduce the system’s susceptibility to power-line interference and cable motion artifacts, improving signal quality), an EEG amplifier, a microcontroller, and a low-power wireless transmitter.

The receiver can continuously record 8-channel EEG signals while concurrently recording electrode-tissue contact impedance (ETI), a measure of contact quality.

The system has a high (>92 dB) common-mode rejection ratio (to reduce interference from power lines and other sources) and low noise (<6 µVpp over 0.5–100 Hz), with a configurable cut-off frequency (to filter out high or low frequencies).
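For a sense of scale, a CMRR figure in decibels converts to a linear suppression factor via the standard 20·log10 relation. The short sketch below only illustrates that conversion; it is not imec code:

```python
def cmrr_rejection_factor(cmrr_db):
    """Convert a common-mode rejection ratio in dB to the linear factor
    by which common-mode pickup (e.g. 50/60 Hz mains hum) is suppressed
    relative to the wanted differential EEG signal."""
    return 10 ** (cmrr_db / 20)

factor = cmrr_rejection_factor(92)
print(f"a 92 dB CMRR suppresses common-mode pickup ~{factor:,.0f}x")
```

So the quoted >92 dB figure corresponds to mains pickup being attenuated roughly forty-thousand-fold relative to the EEG signal itself.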

The heart of the system is the low-power (750 µW) 8-channel EEG monitoring chipset. Each EEG channel consists of two active electrodes and a low-power analog signal processor. The EEG channels are designed to extract high-quality EEG signals under a large amount of common-mode interference. The active electrode chips have buffer functionality with high input impedance (1.4 GΩ at 10 Hz), enabling recordings from dry electrodes, and low output impedance, reducing power-line interference without the need for shielded wires.
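Why high input impedance matters for dry electrodes can be seen with a toy voltage-divider model: the electrode–skin contact impedance and the amplifier input impedance split the scalp signal between them. In the sketch below, only the 1.4 GΩ value comes from the article; the 1 MΩ dry-contact figure is an illustrative assumption:

```python
import math

def divider_gain(contact_impedance_ohm, input_impedance_ohm):
    """Fraction of the scalp signal reaching the amplifier when the
    electrode contact and the amplifier input form a voltage divider."""
    return input_impedance_ohm / (contact_impedance_ohm + input_impedance_ohm)

dry_contact = 1e6      # illustrative 1 MOhm dry-electrode contact impedance
buffer_input = 1.4e9   # 1.4 GOhm active-electrode input impedance (at 10 Hz)

gain = divider_gain(dry_contact, buffer_input)
loss_db = 20 * math.log10(gain)
print(f"signal retained: {gain:.4%} ({loss_db:.4f} dB loss)")
```

Even a poor dry contact costs almost nothing when the buffer's input impedance is three orders of magnitude larger.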

The system is integrated into imec’s EEG headset with dry electrodes, which enables EEG recordings with minimal set-up time. The small size of the electronics system, measuring only 35mm x 30mm x 5mm (excluding battery), allows easy integration in any other product.

Filed under brain EEG wireless EEG signal recording neuroscience psychology technology science

16 notes

New scanning technology aims to achieve quicker diagnosis of disease

Groundbreaking research taking place at the University of York could lead to Alzheimer’s disease being diagnosed in minutes using a simple brain scan.

Scientists are working on new technology that could revolutionise the way in which Magnetic Resonance Imaging (MRI) scans are used to view the molecular events behind diseases like Alzheimer’s, without invasive procedures, by increasing the sensitivity of an average hospital scanner by 200,000 times.
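As a back-of-envelope check on why such a gain could turn days into minutes: MRI signal averaging improves signal-to-noise only with the square root of scan time, so a g-fold sensitivity boost stands in for g² repeats' worth of averaging. The sketch below illustrates that standard scaling argument; it is not taken from the SABRE project itself:

```python
def equivalent_averages(sensitivity_gain):
    """Signal averaging improves SNR only as sqrt(N), so matching a g-fold
    sensitivity gain by averaging alone would take g**2 repeats."""
    return sensitivity_gain ** 2

g = 200_000
print(f"a {g:,}x sensitivity gain is worth {equivalent_averages(g):.1e} averages")
```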

The technology underpinning this project, SABRE (Signal Amplification by Reversible Exchange), has received a £3.6m Strategic Award from the Wellcome Trust to fund a team of seven post-doctoral researchers from this month.

The new grant brings the total support for SABRE from the Wellcome Trust, the Wolfson Foundation, Bruker Biospin, the University of York and the Engineering and Physical Sciences Research Council (EPSRC) to over £12.5m in the last three years.

A new Centre for Hyperpolarisation in Magnetic Resonance (CHyM) is being purpose-built at York to house the project. The building, which is nearing completion at York Science Park, includes a chemical laboratory, four high field nuclear magnetic resonance systems and space for 30 research scientists.

The SABRE project is led by Professor Simon Duckett, from the Department of Chemistry at York, Professor Gary Green, from the York Neuroimaging Centre (YNiC) and Professor Hugh Perry, from the Centre for Biological Sciences, University of Southampton.

Professor Duckett said: “While MRI has completely changed modern healthcare, its value is greatly limited by its low sensitivity. As well as tailoring treatments more accurately to the needs of individual patients, our hope is that in the future doctors will be able to accurately make diagnoses that currently take days, weeks and sometimes months, in just minutes.”

Professor Green added: “SABRE has the potential to revolutionise clinical MRI and related MR methods by providing a huge improvement in the sensitivity of scanners. This will ultimately produce a step change in the use and type of information available to scientists and clinicians through MRI, allowing the diagnosis, treatment and clinical monitoring of diverse neurodegenerative diseases.”

(Source: alphagalileo.org)

Filed under alzheimer alzheimer's disease brain brain scan neuroscience SABRE technology science

61 notes

Google simulates brain networks to recognize speech and images

This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”).

That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports.

Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something.
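The weight-update idea can be seen in miniature with a single artificial neuron. This toy perceptron (far simpler than Google's deep networks, and purely illustrative) adjusts its connection strengths whenever it misclassifies an example, which is exactly the sense in which "the relationships between different neurons change":

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """A one-neuron 'network': each labeled example nudges the weights,
    so repeated exposure to data reshapes the connections."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # zero when the guess is right
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn a toy AND-like rule from four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)
```

After training, the learned weights reproduce the rule on all four inputs; deep networks differ in scale and architecture, not in this basic exposure-driven mechanism.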

Read more

Filed under virtual brain google image recognition speech recognition AI learning neural networks neuroscience technology science

913 notes

The £90,000 ‘robolegs’ that got me out of my wheelchair: How one woman stood on her own feet nine years after she was paralysed

It is an extraordinary sight. From the waist up, 27-year-old Sophie Morgan is every inch the pretty blonde girl-next-door. But from the waist down, with her legs encased in £90,000 of motorised carbon-fibre, she is RoboCop.

Sophie’s thumb manipulates a joystick built into the armrests of her suit, causing the legs to hiss and whirr into life, before she takes three slow but sure steps. Her face breaks into a broad grin.

Five minutes earlier, Sophie was in her wheelchair. She was left paralysed from the chest down in a car crash nine years ago that shattered her spine. Over the years, Sophie, an aspiring television presenter who appeared in Channel 4’s Paralympics coverage, had come to accept that she would never walk again.

Filed under bionic legs bionics exoskeleton Rex Bionics robots robotics neuroscience technology science

552 notes


Artificial cornea gives the gift of vision

Blindness is often caused by corneal diseases. The established treatment is a corneal transplant, but in many cases this is not possible and donor corneas are often hard to come by. In the future, an artificial cornea could make up for this deficiency and save the vision of those affected.

“We are in the process of developing two different types of artificial corneas. One of them can be used as an alternative to a donor cornea in cases where the patient would not tolerate a donor cornea, let alone the issue of donor material shortage,” says IAP project manager Dr. Joachim Storsberg.

The scientist has considerable expertise in the development and testing of next-generation biomaterials. Between 2005 and 2009 he collaborated with interdisciplinary teams and private companies to successfully develop an artificial cornea specifically for patients whose cornea had become clouded – a condition that is extremely difficult to treat. Such patients are unable to accept a donor cornea either due to their illness or because they have already been through several unsuccessful transplantation attempts. Dr. Storsberg was awarded the Josef-von-Fraunhofer Prize 2010 for this achievement. “A great many patients suffering from a range of conditions will be able to benefit from our new implant, which we’ve named ArtCornea®. We have already registered ArtCornea® as a trademark,” reports Storsberg.

Filed under artificial cornea blindness corneal diseases implants neuroscience science technology transplants vision ArtCornea

50 notes

Training computers to understand the human brain

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can ‘think’ and ‘see’ in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, has completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently ‘label’ each pictured object with certain properties, whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool).

After ‘training’ the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it using orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
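A minimal sketch of this style of decoding, using synthetic data and a simple nearest-centroid rule in place of the study's actual multi-voxel pattern analysis (the clustered 'voxel patterns' below are invented purely for illustration):

```python
import random

def centroid(vectors):
    """Mean activation pattern of a set of 'voxel' vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_predict(x, centroids):
    """Assign x to the semantic category whose mean pattern it is closest to."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

random.seed(0)
# Synthetic 20-'voxel' patterns: animal trials cluster near +1, tool trials near -1.
make = lambda mu: [[random.gauss(mu, 0.5) for _ in range(20)] for _ in range(30)]
animals, tools = make(1.0), make(-1.0)

# Train on the first 20 trials per category, test on the held-out 10.
cents = {"animal": centroid(animals[:20]), "tool": centroid(tools[:20])}
held_out = [(v, "animal") for v in animals[20:]] + [(v, "tool") for v in tools[20:]]
acc = sum(nearest_centroid_predict(v, cents) == y for v, y in held_out) / len(held_out)
print(f"held-out accuracy: {acc:.0%}")
```

Real fMRI patterns are far noisier and higher-dimensional, which is why the study's cross-modal accuracy drops to 65-75% rather than staying near ceiling.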

Filed under brain fMRI semantics technology multi-voxel pattern analysis neuroscience psychology science

111 notes


Philosophy will be the key that unlocks artificial intelligence

To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial. The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of “artificial general intelligence” or AGI – has made no progress whatever during the entire six decades of its existence.

Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.
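The universality claim can be made concrete with a toy universal interpreter: one fixed program that emulates any machine handed to it as data. The sketch below runs an arbitrary Turing-machine transition table; it is, of course, only a cartoon of the physical-emulation argument:

```python
def run_turing_machine(program, tape, state="start", max_steps=1000):
    """A fixed, general-purpose interpreter: hand it ANY machine's
    transition table as data and it emulates that machine step by step,
    a toy instance of the universality of computation."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")          # "_" is the blank symbol
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One particular machine, encoded as data: flip every bit, halt at the blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "10110"))
```

The interpreter itself never changes; only the table does, which is the sense in which one general-purpose computer can, in principle, emulate any other rule-governed system.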

Read more

Filed under brain AI artificial general intelligence self-awareness neuroscience technology science
