Neuroscience

Articles and news from the latest research reports.

Posts tagged supercomputer

148 notes

NSF-funded Superhero Supercomputer Helps Battle Autism

'Gordon,' a supercomputer with unique flash memory, helps identify gene-related paths to treating mental disorders

When it officially came online at the San Diego Supercomputer Center (SDSC) in early January 2012, Gordon was instantly impressive. In one demonstration, it sustained more than 35 million input/output operations per second—then, a world record.

Input/output operations per second are a key measure for data-intensive computing, indicating how quickly a storage system can move data between an information-processing system, such as a computer, and the outside world. The metric reflects how fast a system can retrieve the randomly organized data common in large datasets and feed it into data-mining applications.
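As a rough illustration of what the metric captures, a few lines of Python can estimate random-read IOPS by timing repeated small reads at random offsets in a file. (This is a toy benchmark: reads served from a warm page cache will look far faster than Gordon's record, which measured real storage hardware.)

```python
import os
import random
import tempfile
import time

def estimate_iops(path, block=4096, n_ops=2000):
    """Rough random-read IOPS estimate: time n_ops reads of `block` bytes
    at random offsets, then divide operations by elapsed seconds."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(n_ops):
            f.seek(random.randrange(0, size - block))
            f.read(block)
        elapsed = time.perf_counter() - start
    return n_ops / elapsed

# Create a small scratch file and measure it (cached, so optimistic).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MB of random data
print(f"~{estimate_iops(tmp.name):,.0f} IOPS (page-cache warm)")
os.remove(tmp.name)
```

Real benchmarks such as the one behind Gordon's record bypass the cache and issue many requests in parallel; the ratio of operations to elapsed time is the same idea at a vastly larger scale.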

The supercomputer’s record-breaking feat wasn’t a surprise; after all, Gordon is named after a comic strip superhero, Flash Gordon.

Gordon’s new and unique architecture employs massive amounts of the type of flash memory common in cell phones and laptops—hence its name. The system serves scientists whose research requires mining, searching, or building large databases for immediate or later use, from mapping genomes for personalized medicine to examining the computer automation of stock trading by Wall Street investment firms.

Commissioned by the National Science Foundation (NSF) in 2009 for $20 million, Gordon is part of NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) program, a nationwide partnership comprising 16 high-performance computers plus high-end visualization and data-analysis resources.

“Gordon is a unique machine in NSF’s Advanced Cyberinfrastructure/XSEDE portfolio,” said Barry Schneider, NSF program director for advanced cyberinfrastructure. “It was designed to handle scientific problems involving the manipulation of very large data. It is differentiated from most other resources we support in having a large solid-state memory, 4 GB per core, and the capability of simulating a very large shared memory system with software.”

Last month, a team of researchers from SDSC and other U.S. institutions, together with the Institut Pasteur in France, reported in the journal Genes, Brain and Behavior that they used Gordon to devise a novel way to describe a time-dependent gene-expression process in the brain, one that can guide the development of treatments for mental disorders such as autism-spectrum disorders and schizophrenia.

The researchers identified the hierarchical tree of coherent gene groups and transcription-factor networks that determine the patterns of genes expressed during brain development. They found that some “master transcription factors” at the top level of the hierarchy regulated the expression of a significant number of gene groups.
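The hierarchy described above can be pictured as a tree whose internal nodes are transcription factors and whose leaves are coherent gene groups; a master transcription factor at the top reaches many groups at once. A minimal sketch (the names here are placeholders, not the actual factors from the study):

```python
# Illustrative only: the real hierarchy in the Genes, Brain and Behavior
# study is far larger, and these names are hypothetical placeholders.
hierarchy = {
    "masterTF_A": ["TF_1", "TF_2"],        # top-level "master" regulator
    "TF_1": ["gene_group_i", "gene_group_ii"],
    "TF_2": ["gene_group_iii"],
}

def regulated_groups(tf, tree):
    """Collect every gene group reachable below a transcription factor."""
    groups = []
    for child in tree.get(tf, []):
        if child in tree:                  # child is itself a regulator
            groups.extend(regulated_groups(child, tree))
        else:                              # leaf: a coherent gene group
            groups.append(child)
    return groups

print(regulated_groups("masterTF_A", hierarchy))
```

In this picture, targeting a master factor near the root perturbs every group beneath it, which is why the top of the hierarchy is of therapeutic interest.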

The scientists’ findings can be used to select transcription factors to target when treating specific mental disorders.

“We live in the unique time when huge amounts of data related to genes, DNA, RNA, proteins, and other biological objects have been extracted and stored,” said lead author Igor Tsigelny, a research scientist with SDSC as well as with UC San Diego’s Moores Cancer Center and its Department of Neurosciences.

“I can compare this time to a situation when the iron ore would be extracted from the soil and stored as piles on the ground. All we need is to transform the data to knowledge, as ore to steel. Only the supercomputers and people who know what to do with them will make such a transformation possible,” he said.

Filed under mental disorders ASD autism supercomputer Gordon technology neuroscience science

19 notes

Watson turns medic: Supercomputer to diagnose disease

22 August 2012 by Jim Giles

More than a year after it won the quiz show Jeopardy!, IBM’s supercomputer is learning how to help doctors diagnose patients

It is more than a year since Watson, IBM’s famous supercomputer, opened a new frontier for artificial intelligence by beating human champions of the quiz show Jeopardy!. Now Watson is learning to use its language skills to help doctors diagnose patients.

Progress is most advanced in cancer care, where IBM is working with several US hospitals to build a virtual physicians’ assistant. “It’s a machine that can read everything and forget nothing,” says Larry Norton, a doctor at the Memorial Sloan-Kettering Cancer Center in New York, who is collaborating with IBM.

When playing Jeopardy!, Watson analysed each question in a bid to guess what it was about. Then it looked for possible answers in its database, made up of sources such as encyclopaedias, scoring each according to the evidence associated with it and answering with the highest rated answer. The system takes a similar approach when dealing with medical questions, although in this case it draws on information from medical journals and clinical guidelines.
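The pipeline described above can be sketched in miniature: generate candidate answers from a corpus, score each by its supporting evidence, and return the top-scoring one with a confidence value. (This is an illustrative toy, not IBM’s actual DeepQA code, and the mini-corpus below is invented.)

```python
def answer(question_terms, corpus):
    """corpus: {candidate_answer: evidence_text}.
    Score each candidate by how many question terms its evidence contains,
    then return the best candidate plus a crude confidence ratio."""
    scores = {}
    for candidate, evidence in corpus.items():
        evidence_words = set(evidence.lower().split())
        scores[candidate] = sum(t.lower() in evidence_words for t in question_terms)
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1   # avoid dividing by zero
    return best, scores[best] / total

# Hypothetical medical mini-corpus standing in for journals and guidelines.
corpus = {
    "aspirin": "analgesic used for pain fever and inflammation",
    "insulin": "hormone regulating blood glucose in diabetes",
}
print(answer(["blood", "glucose", "hormone"], corpus))
```

Watson’s real evidence scoring combines hundreds of weighted features rather than simple term overlap, but the shape of the computation (candidates, evidence scores, ranked answers with confidence) is the same.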

To test the system, Watson was first tasked with answering questions taken from Doctor’s Dilemma, a competition for trainee doctors that takes place at the annual meeting of the American College of Physicians. Watson was given 188 questions that it had not seen before and achieved around 50 per cent accuracy - not bad for an early test, but hardly ideal (Artificial Intelligence, doi.org/h6m).

To improve, Watson is now absorbing records - tens of thousands at Sloan-Kettering alone - of treatments and outcomes associated with individual patients. Given data on a new patient, Watson looks for information on those with similar symptoms, as well as the treatments that have been the most successful. The idea is it will give doctors a range of possible diagnoses and treatment options, each with an associated level of confidence. The result will be a system that its creators say can suggest nuanced treatment plans that take into account factors like drug interactions and a patient’s medical history.
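The similar-patient idea can be sketched as a nearest-neighbour lookup: find past patients whose symptoms overlap the new patient’s, then rank treatments by how often they succeeded, reporting that success rate as a confidence. (A minimal sketch with invented records, not Watson’s actual method.)

```python
from collections import defaultdict

# Hypothetical records: (symptoms, treatment given, whether it succeeded).
records = [
    ({"fever", "cough", "fatigue"}, "drug_A", True),
    ({"fever", "cough"},            "drug_A", True),
    ({"fever", "cough", "rash"},    "drug_B", False),
]

def suggest(symptoms, records, min_overlap=2):
    """Rank treatments seen in similar patients by their success rate."""
    outcomes = defaultdict(list)
    for past_symptoms, treatment, success in records:
        if len(symptoms & past_symptoms) >= min_overlap:  # "similar" patient
            outcomes[treatment].append(success)
    # Confidence = fraction of successful outcomes among similar patients.
    ranked = {t: sum(o) / len(o) for t, o in outcomes.items()}
    return sorted(ranked.items(), key=lambda kv: -kv[1])

print(suggest({"fever", "cough"}, records))
```

The real system additionally weighs drug interactions and medical history; here those would simply be extra features in the similarity test.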

William Audeh, a doctor at Cedars-Sinai Medical Center in Los Angeles, who is working with IBM, says the last few months have involved “filling Watson’s brain” with medical data. Watson is answering basic questions based on the treatment guidelines that are published by medical societies and is showing “very positive” results, he adds.

The technology is particularly useful in oncology because doctors struggle to keep up with the explosion of genomic and molecular data generated about each cancer type. This means it can take years for findings to translate into medical practice. By contrast, Watson can absorb new results and relay them to doctors quickly, together with an estimate of their potential usefulness. “Watson really has great potential,” says Audeh. “Cancer needs it most because it’s becoming so complicated so quickly.”

The IBM system could also approve treatment requests more quickly. At WellPoint, one of the largest insurers in the US, nurses use guidelines and patient history to determine if a request is in line with company policy. Nurses are now training Watson by feeding it test requests and observing the answers. Progress is good and the system could be deployed next year, says WellPoint’s Cindy Wakefield. “Now it can take up to a couple of days,” she says. “We hope Watson can return the accurate recommendation in a matter of minutes.”

Source: NewScientist

Filed under Watson diagnosis disease neuroscience science supercomputer technology AI

29 notes

Low-Power Chips to Model a Billion Neurons

It’s a little sobering, actually. The average human brain packs a hundred billion or so neurons—connected by a quadrillion (10^15) constantly changing synapses—into a space the size of a cantaloupe. It consumes a paltry 20 watts, much less than a typical incandescent lightbulb. But simulating this mess of wetware with traditional digital circuits would require a supercomputer that’s a good 1000 times as powerful as the best ones we have available today. And we’d need the output of an entire nuclear power plant to run it.

Fortunately, we don’t have to rely on traditional, power-hungry computers to get us there. Scattered around the world are at least half a dozen projects dedicated to building brain models using specialized analog circuits. Unlike the digital circuits in traditional computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can model brain activity as fast as or even faster than it really occurs, and they consume a fraction of the power. But analog chips do have one serious drawback—they aren’t very programmable. The equations used to model the brain in an analog circuit are physically hardwired in a way that affects every detail of the design, right down to the placement of every analog adder and multiplier. This makes it hard to overhaul the model, something we’d have to do again and again because we still don’t know what level of biological detail we’ll need in order to mimic the way brains behave.

To help things along, my colleagues and I are building something a bit different: the first low-power, large-scale digital model of the brain. Dubbed SpiNNaker, for Spiking Neural Network Architecture, our machine looks a lot like a conventional parallel computer, but it boasts some significant changes to the way chips communicate. We expect it will let us model brain activity with speeds matching those of biological systems but with all the flexibility of a supercomputer.

Another team, led by Dharmendra Modha at IBM Almaden Research Center, in San Jose, Calif., works on supercomputer models of the cortex, the outer, information-processing layer of the brain, using simpler neuron models. In 2009, team members at IBM and Lawrence Livermore National Laboratory showed they could simulate the activity of 900 million neurons connected by 9 trillion synapses, more than are in a cat’s cortex. But as has been the case for all such models, its simulations were quite slow. The computer needed many minutes to model a second’s worth of brain activity.

One way to speed things up is by using custom-made analog circuits that directly mimic the operation of the brain. Traditional analog circuits—like the chips being developed by the BrainScaleS project at the Kirchhoff Institute for Physics, in Heidelberg, Germany—can run 10 000 times as fast as the corresponding parts of the brain. They’re also fabulously energy efficient. A digital logic circuit may need thousands of transistors to perform a multiplication, but analog circuits need only a few. When you break it down to the level of modeling the transmission of a single neural signal, these circuits consume about 0.001 percent as much energy as a supercomputer would need to perform the same task. Considering you’d need to perform that operation 10 quadrillion times a second, that translates into some significant energy savings. While a whole brain model built using today’s digital technology could easily consume more than US $10 billion a year in electricity, the power bill for a similar-scale analog system would likely come to less than $1 million.
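Those cost figures are easy to sanity-check. Assuming an electricity price of about $0.10 per kilowatt-hour (the price is my assumption; the $10 billion figure and the 0.001 percent energy ratio come from the text above), the arithmetic works out as follows:

```python
# Back-of-envelope check of the article's cost figures.
PRICE_PER_KWH = 0.10        # USD per kWh -- an assumed price, not from the text
HOURS_PER_YEAR = 24 * 365

digital_cost = 10e9         # "more than US $10 billion a year" (from the text)
# Power draw implied by that annual bill:
digital_kw = digital_cost / (PRICE_PER_KWH * HOURS_PER_YEAR)  # ~11 million kW

# Analog circuits use about 0.001 percent (1e-5) of the energy per operation.
analog_kw = digital_kw * 1e-5
analog_cost = analog_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"implied digital draw: {digital_kw / 1e6:.1f} GW")
print(f"analog estimate: ${analog_cost:,.0f} per year")
```

The analog estimate lands around a hundred thousand dollars a year, comfortably consistent with the article’s "less than $1 million" figure.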


Filed under SpiNNaker brain modelling neural networks supercomputer neuron neuroscience science simulation tech
