Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

57 notes

The BCMI-MIdAS (Brain-Computer Music Interface for Monitoring and Inducing Affective States) project

The central purpose of the project is to develop technology for building innovative intelligent systems that can monitor our affective state and induce specific affective states through music, automatically and adaptively. This is a highly interdisciplinary project, which will address several technical challenges at the interface between science, technology and performing arts/music (incorporating computer-generated music and machine learning).

Research questions

  • How can music change affective states, and what are the specific musical traits (i.e., the parameters of a piece of music) that elicit such states?
  • How can we control such traits in a piece of music in order to induce specific affective states in a participant?
  • How can we effectively detect information about affective states induced by music in the EEG signal, going beyond EEG asymmetry and characterising information contained in synchronisation patterns?
  • How can we use the EEG to monitor the affective state induced by music on-line (i.e., in “real-time”)?
  • How can we produce a generative music system capable of generating music embodying musical traits aimed at inducing specific affective states, observable in the EEG of the participant?
  • How can we build an intelligent adaptive system for monitoring and inducing affective states through music on-line?
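
One of the questions above aims to go beyond EEG asymmetry, so it is worth seeing what the baseline looks like. Below is a minimal sketch (with synthetic data, not the project’s actual pipeline) of the classic frontal alpha asymmetry index, ln(right alpha power) minus ln(left alpha power), often used as a crude correlate of affective valence:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Mean power of `signal` within the alpha band, estimated via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """ln(right alpha power) - ln(left alpha power); positive values are
    conventionally associated with approach/positive affect."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Synthetic demo: a stronger 10 Hz alpha rhythm on the left electrode
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(0)
left = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
print(frontal_alpha_asymmetry(left, right, fs))  # negative here
```

In an on-line setting this index would be recomputed over a sliding window of the most recent EEG samples; the synchronisation-pattern features the project targets would go well beyond this two-channel measure.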

(Source: cmr.soc.plymouth.ac.uk)

Filed under BCMI EEG brain brain activity mood music technology neuroscience science

42 notes

At this year’s Tokyo Games Show, Japanese purveyor of electronically-augmented fashion Neurowear unveiled the successor to its Necomimi brain-activated cat ears. It’s called Shippo, a brain-controlled motorized tail that responds to the user’s current emotional state with corresponding wagging.

Shippo requires a NeuroSky electroencephalograph (EEG) headset, alongside a clip-on heart monitor, to observe brain activity and pick up on the user’s emotional state. This information is then translated into wagging, which will be soft and slow or hard and fast, depending on whether the user is relaxed or excited/anxious. The EEG headset communicates with the fluffy appendage via a Bluetooth connection.
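
NeuroSky headsets expose 0–100 “attention” and “meditation” scores, so the relaxed-vs-excited mapping described above could be as simple as the following sketch. The mapping itself is hypothetical (Neurowear has not published Shippo’s internals), but it illustrates the soft-and-slow / hard-and-fast idea:

```python
def wag_parameters(meditation, attention):
    """Map NeuroSky-style 0-100 eSense scores to a hypothetical wag
    amplitude (degrees of sweep) and frequency (Hz): relaxed users get
    a soft, slow wag; excited users a hard, fast one."""
    meditation = max(0, min(100, meditation))
    attention = max(0, min(100, attention))
    arousal = attention / 100.0           # crude proxy for excitement
    calm = meditation / 100.0
    amplitude = 10 + 35 * arousal         # 10-45 degrees of sweep
    frequency = 0.5 + 2.5 * (1 - calm)    # 0.5-3.0 Hz
    return amplitude, frequency

print(wag_parameters(meditation=90, attention=10))  # soft and slow
print(wag_parameters(meditation=10, attention=90))  # hard and fast
```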

Filed under shippo EEG brain brain activity emotion technology neuroscience science

16 notes

Imaging the network traffic in our brains

MRI brain scans no longer just show the various regions of brain activity; nowadays the networks in the brain can be imaged with ever greater precision. This will make functional MRI (fMRI) increasingly powerful in the coming years, leading to tools that can be used in cognitive neuroscience. This is the claim made by Prof. David Norris in his inaugural lecture as Professor of Neuroimaging at the University of Twente on 13 September.

During the twenty years since the invention of fMRI (functional Magnetic Resonance Imaging), developments have come thick and fast, from initially identifying active brain regions to more complex analysis of the connections and hubs in the brain. In his inaugural lecture Norris describes how this has been achieved thanks not only to a growing understanding of the underlying biophysics but also to rapid technological developments: scanners with stronger magnetic fields, better image-processing techniques and better algorithms. His aim is to go beyond merely localizing which parts of the brain are active. The challenge is to answer two questions: How are the various regions interconnected, structurally and functionally? What do the networks in our brains look like?

Faster and more powerful

Back in the 19th century, scientists observed increased blood flow in brain regions that are functionally active. fMRI enables the change in oxygen content to be seen. Haemoglobin, the substance that transports oxygen in the blood, can take the form of oxyhaemoglobin (when it is still combined with oxygen) or deoxyhaemoglobin (when the oxygen has been released), each of which has different magnetic properties. One of the complicating factors in interpreting the scans is that various physiological mechanisms are at work simultaneously, causing the deoxyhaemoglobin level to rise and fall. One remedy to increase accuracy, Norris explains, has been to increase the magnetic field strength: there are now MRI scanners operating at 7 Tesla. At the same time, the speed at which laminae can be imaged has gone up by leaps and bounds: the entire brain can be scanned in three seconds with a precision of 1 millimetre.

Hubs

The functional connections between parts of the brain can be registered by means of blood flow, but MRI also enables the structural and anatomical connections to be seen. This involves measuring the movement of water molecules along the ‘white matter’ of nerve fibres, a technology known as diffusion-weighted imaging (DWI). Combining these technologies provides a wealth of fresh information on the networks in the brain and the places where many connections come together, the ‘hubs’. This has not only confirmed ‘known networks’ but has also revealed networks that neuroscience considered plausible but had never been able to measure.
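
A ‘hub’ in this sense is simply a region where many connections converge. One common way to operationalize that (a simplified sketch, not Norris’s actual analysis) is to take a region-by-region connectivity matrix and rank regions by node strength, i.e. weighted degree:

```python
import numpy as np

def find_hubs(connectivity, top_k=2):
    """Rank nodes of a symmetric connectivity matrix by weighted degree
    (node strength) and return the indices of the top_k hub candidates."""
    strength = connectivity.sum(axis=0)   # total connection weight per node
    return list(np.argsort(strength)[::-1][:top_k])

# Toy 5-region network: region 2 connects strongly to everyone else
C = np.array([
    [0.0, 0.1, 0.8, 0.0, 0.1],
    [0.1, 0.0, 0.7, 0.2, 0.0],
    [0.8, 0.7, 0.0, 0.9, 0.6],
    [0.0, 0.2, 0.9, 0.0, 0.1],
    [0.1, 0.0, 0.6, 0.1, 0.0],
])
print(find_hubs(C))  # region 2 ranks first
```

Real connectomics work uses richer graph measures (betweenness centrality, participation coefficient), but node strength already captures the intuition of a hub.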


Image showing the distribution of connector hubs on the surface of a flattened brain. The top two figures show the medial views of each hemisphere, the bottom two show the external views.

CMI

The new Centre for Medical Imaging that is to come to the University of Twente campus will soon provide extensive facilities for collaborating in the field of fMRI, says Norris, who is also on the staff of the Donders Institute in Nijmegen.

(Source: utwente.nl)

Filed under MRI brain fMRI neuroimaging neuroscience psychology technology science

32 notes

How artificial intelligence is changing our lives

The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.

“We’re a long way from [humanlike AI], and we’re not really on a track toward that, because we don’t understand enough about what makes people intelligent and how people solve problems,” says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding: Natural and Artificial Intelligence.”

“The brain is such a great mystery,” adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”

Filed under AI robotics robots neuroscience computer science machine learning technology science

56 notes

Giving a voice to the voiceless has been a cause that many have championed throughout history, but it’s safe to say that none of those efforts involved packing a bunch of sensors into a glove. A team of Ukrainian students has done just that in order to translate sign language into vocalized speech via a smartphone.

The inspiration for the gloves came from watching deaf fellow students struggle to communicate with other students, which resulted in their being excluded from activities. Initially, the team looked at commercially available gloves that could be modified to interpret a range of signs, but in the end they opted to develop their own.

In their glove, a total of 15 flex sensors in the fingers measure the degree of bending, while a compass, accelerometer, and gyroscope determine the motion of the glove through space. The sensor data are processed by a microcontroller on the glove, then sent via Bluetooth to a mobile device, which translates the positions of the hand and fingers into text when a pattern is recognized. Using Microsoft APIs for Speech and Bing, the text is spoken by the phone, which runs Windows Phone 7. The glove can also plug into a PC for data syncing and battery charging.
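
The pattern-recognition step could be as simple as nearest-template matching over the 15 flex-sensor values. The templates and threshold below are hypothetical (the team has not published their recognizer), and a real system would also fuse the compass/accelerometer/gyroscope data, but the sketch shows the idea:

```python
import math

# Hypothetical calibration templates: 15 flex-sensor readings per sign,
# normalized so 0 = straight finger and 1 = fully bent.
TEMPLATES = {
    "hello": [0.1] * 15,
    "yes":   [0.9] * 15,
    "no":    [0.9] * 5 + [0.1] * 10,
}

def classify(reading, threshold=1.0):
    """Return the sign whose template is closest (Euclidean distance)
    to the sensor reading, or None if nothing is close enough to count
    as a recognized pattern."""
    best_sign, best_dist = None, float("inf")
    for sign, template in TEMPLATES.items():
        dist = math.dist(reading, template)
        if dist < best_dist:
            best_sign, best_dist = sign, dist
    return best_sign if best_dist <= threshold else None

print(classify([0.85] * 15))  # "yes"
```

Once a sign is recognized, the resulting text is what gets handed to the phone’s text-to-speech API.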

Filed under hearing loss sign language technology speech vocalization neuroscience psychology science

39 notes

Nanoengineers at the University of California, San Diego have developed a novel technology that can fabricate, in mere seconds, microscale three-dimensional (3D) structures out of soft, biocompatible hydrogels. Near term, the technology could lead to better systems for growing and studying cells, including stem cells, in the laboratory. Long term, the goal is to be able to print biological tissues for regenerative medicine. For example, in the future, doctors may repair the damage caused by a heart attack by replacing it with tissue that rolled off a printer.

The biofabrication technique uses a computer projection system and precisely controlled micromirrors to shine light on a selected area of a solution containing photo-sensitive biopolymers and cells. This photo-induced solidification process forms one layer of solid structure at a time, but in a continuous fashion.
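
Conceptually, the layer-by-layer projection reduces a 3D model to a stack of 2D on/off micromirror masks. The sketch below (a simplified illustration, not the UCSD group’s actual software) slices a boolean voxel volume into the per-layer masks that would be projected bottom-up:

```python
import numpy as np

def projection_masks(volume):
    """Slice a boolean 3D voxel volume (z, y, x) into the sequence of
    2D on/off micromirror masks projected to photo-solidify the
    structure one layer at a time, from the bottom up."""
    return [volume[z] for z in range(volume.shape[0])]

# Toy example: a 4-layer stepped pyramid of hydrogel
side = 8
volume = np.zeros((4, side, side), dtype=bool)
for z in range(4):
    volume[z, z:side - z, z:side - z] = True  # shrinking square per layer

masks = projection_masks(volume)
print([int(m.sum()) for m in masks])  # voxels cured per layer
```

The “continuous fashion” described above means the real system advances the build plane smoothly while the projected mask changes, rather than pausing between discrete layers.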

Filed under biofabrication technique brain cells neuroscience stem cells technology tissue science

39 notes

As humans, we create life. And we’re all familiar with the idea of artificial intelligence. But what about artificial life? What is it, and why should we care?

Artificial Life is a recently labelled but truly ancient field in which technology is used to imitate biological life. From the earliest stone and clay figurines, to puppets, through hydraulic and pneumatic creations, on to clockwork, through electrical robots and even to flesh, artificial life has a long history that now also extends into the abstract computational realm.

My own interest is as much in the current examples of this phenomenon as in its earliest examples, a prevailing fascination with not only “life-as-we-know-it”, but “life-as-we-have-interpreted-it”.

Since the very earliest days of humankind, we have represented life using whatever technology was available. This has allowed us to observe the traits of life, even our own, in devices over which we have control.

In this way we have embodied our theories of life’s vital principles in artefacts, and tinkered like any Creator from poetry and fiction.

In short, artificial life is central to our attempts to understand who we are.
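
The “abstract computational realm” mentioned above has its own canonical creature: Conway’s Game of Life, a cellular automaton whose lifelike patterns emerge from two lines of rules. A compact sketch:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life. `live` is the set of
    (x, y) coordinates of living cells; a cell is born with exactly 3
    live neighbours and survives with 2 or 3."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```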

Filed under A-Life artificial life mechanical devices philosophy technology science

16 notes

The MIT and University of Pennsylvania team decided that mimicking animal behaviour in robotics was not enough: by harnessing the genetic materials that enable those behaviours, they could make a giant leap towards feasible biorobots. It is the first time skeletal muscle has ever been engineered to react to light, with past studies focusing only on cardiac muscle cells.

"With bio-inspired designs, biology is a metaphor, and robotics is the tool to make it happen," said MIT engineering professor Harry Asada, who has co-authored a paper on the study, due to appear in the journal Lab on a Chip. “With bio-integrated designs, biology provides the materials, not just the metaphor. This is a new direction we’re pushing in biorobotics.”

Filed under biorobotics engineering neuroscience robotics science technology muscle cells
