Posts tagged technology

Robot uses steerable needles to treat brain clots
Surgery to relieve the damaging pressure caused by hemorrhaging in the brain is a perfect job for a robot.
That is the basic premise of a new image-guided surgical system under development at Vanderbilt University. It employs steerable needles about the size of those used for biopsies to penetrate the brain with minimal damage and suction away the blood clot that has formed.
The system is described in an article accepted for publication in the journal IEEE Transactions on Biomedical Engineering. It is the product of an ongoing collaboration between a team of engineers and physicians headed by Assistant Professor Robert J. Webster III and Assistant Professor of Neurological Surgery Kyle Weaver.
Brain clots are leading cause of death, disability
The odds of a person getting an intracerebral hemorrhage are one in 50 over his or her lifetime. When it does occur, 40 percent of the individuals die within a month. Many of the survivors have serious brain damage.
“When I was in college, my dad had a brain hemorrhage,” said Webster. “Fortunately, he was one of the lucky few who survived and recovered fully. I’m glad I didn’t know how high his odds of death or severe brain damage were at the time, or else I would have been even more scared than I already was.”
Steerable needle could prevent “collateral damage” during surgery
Operations to “debulk” intracerebral hemorrhages are not popular among neurosurgeons: they know their efforts are not likely to make a difference, except when the clots are small and lie on the brain’s surface, where they are easy to reach. Surgeons generally agree that there is a clinical benefit from removing 25-50 percent of a clot, but that benefit can be offset by the damage done to the surrounding tissue when the clot is removed. Therefore, when a serious clot is detected in the brain, doctors take a “watchful waiting” approach – administering drugs that decrease the swelling around the clot in hopes that this will be enough to make the patient improve without surgery.
For the last four years, Webster’s team has been developing a steerable needle system for “transnasal” surgery: operations to remove tumors in the pituitary gland and at the skull base that traditionally involve cutting large openings in a patient’s skull and/or face. Studies have shown that using an endoscope to go through the nasal cavity is less traumatic, but the procedure is so difficult that only a handful of surgeons have mastered it.
Last summer, Webster attended a conference in Italy where one of the speakers, Marc Simard, a neurosurgeon at the University of Maryland School of Medicine, ran through his wish list of useful imaginary neurosurgical devices, hoping that some engineer in the audience might one day be able to build one of them. When he described his wish to have a needle-sized robot arm to reach deep into the brain to remove clots, Webster couldn’t help smiling because the steerable needle system he had been developing was perfect for the job.
Webster’s design, which he calls an active cannula, consists of a series of thin, nested tubes. Each tube has a different intrinsic curvature. By precisely rotating, extending and retracting these tubes, an operator can steer the tip in different directions, allowing it to follow a curving path through the body. The single needle system required for removing brain clots was actually much simpler than the multi-needle transnasal system.
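The tube-based steering Webster describes can be sketched with simple constant-curvature kinematics. The toy model below is an illustration of the principle only, not the team’s actual control software: it assumes a straight outer tube extended a distance d along the insertion axis and a curved inner tube of fixed curvature kappa extended an arc length s beyond it, rotated by an angle theta about that axis.

```python
import math

def cannula_tip(d, s, kappa, theta):
    """Tip position of a simplified two-tube active cannula.

    d     -- extension of the straight outer tube (insertion axis = z)
    s     -- arc length of the curved inner tube beyond the outer tube
    kappa -- intrinsic curvature of the inner tube (1 / bend radius)
    theta -- axial rotation of the inner tube, in radians
    """
    if kappa == 0:  # degenerate case: inner tube is straight
        x_local, z_local = 0.0, s
    else:
        # Constant-curvature arc traced in the tube's own bending plane.
        x_local = (1 - math.cos(kappa * s)) / kappa
        z_local = math.sin(kappa * s) / kappa
    # Rotating the inner tube about the insertion axis rotates the bending plane.
    return (x_local * math.cos(theta),
            x_local * math.sin(theta),
            d + z_local)
```

Sweeping theta while extending and retracting s traces curved paths through a volume, which is how an operator can steer the tip along a curving route through the body.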
When Webster returned, he told Weaver about the potential new application. The neurosurgeon was quite supportive: “I think this can save a lot of lives. There are a tremendous number of intracerebral hemorrhages and the number is certain to increase as the population ages.”
Graduate student Philip Swaney, who is working on the system, likes the fact that it is the closest to commercialization of all the projects in Webster’s Medical and Electromechanical Design Laboratory. “I like the idea of working on something that will begin saving lives in the very near future,” he said.
Active cannula removed 92 percent of clots in simulations
The brain-clot system only needs two tubes: a straight outer tube and a curved inner tube. Both are less than one twentieth of an inch in diameter. When a CT scan has determined the location of the blood clot, the surgeon determines the best point on the skull and the proper insertion angle for the probe. The angle is dialed into a fixture, called a trajectory stem, which is attached to the skull immediately above a small hole that has been drilled to enable the needle to pass into the patient’s brain.
The surgeon positions the robot so it can insert the straight outer tube through the trajectory stem and into the brain. He also selects the small inner tube with the curvature that best matches the size and shape of the clot, attaches a suction pump to its external end and places it in the outer tube.
Guided by the CT scan, the robot inserts the outer tube into the brain until it reaches the outer surface of the clot. Then it extends the curved, inner tube into the clot’s interior. The pump is turned on and the tube begins acting like a tiny vacuum cleaner, sucking out the material. The robot moves the tip around the interior of the clot, controlling its motion by rotating, extending and retracting the tubes. According to the feasibility studies the researchers have performed, the robot can remove up to 92 percent of simulated blood clots.
“The trickiest part of the operation comes after you have removed a substantial amount of the clot. External pressure can cause the edges of the clot to partially collapse making it difficult to keep track of the clot’s boundaries,” said Webster.
The goal of a future project is to add ultrasound imaging combined with a computer model of how brain tissue deforms to ensure that all of the desired clot material can be removed safely and effectively.
Researchers Develop Traffic Light-Inspired Caffeine Detector
While caffeine has become essential for a large portion of the workforce, researchers have developed a new instrument that will be of interest to anyone concerned they might be consuming too much of the popular stimulant on a daily basis.
The instrument in question is known as Caffeine Orange, and according to its creators, it is a fluorescent caffeine sensor that is used in combination with a detection kit. When the stimulant is present in various drinks and/or solutions, the detection kit lights up in much the same way that a traffic light does, they added.
Caffeine Orange was developed by a team of researchers led by Professor Young-Tae Chang from the National University of Singapore and Professor Yoon-Kyoung Cho from Ulsan National Institute of Science and Technology (UNIST) in Korea. A paper detailing their research appears in the July 23 edition of the journal Scientific Reports.
“Caffeine has attracted abundant attention due to its extensive existence in beverages and medicines. However, to detect it sensitively and conveniently remains a challenge, especially in resource-limited regions,” the authors wrote in their study. They explain that their device is a “novel aqueous phase fluorescent caffeine sensor” that exhibits a 250-fold fluorescence enhancement upon caffeine activation and high selectivity.
The caffeine sensor and its companion detection kit are non-toxic and can be read with just the naked eye, the researchers said. The sensor responds to a range of caffeine concentrations: a sample is combined with the detection kit and illuminated with a green laser pointer, and the resulting color change reports the caffeine level.
If a drink or solution has a high concentration of caffeine, it turns red. Beverages with moderate caffeine concentrations turn yellow, and those with low amounts of the stimulant turn green, they said.
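That traffic-light readout reduces to a simple threshold rule. In the sketch below the concentration cut-offs are made-up illustration values; the article does not state the actual thresholds the kit responds to.

```python
def caffeine_color(concentration_mm, low=0.5, high=5.0):
    """Map a caffeine concentration to a traffic-light color.

    The low/high cut-offs are hypothetical illustration values,
    not the thresholds used by the Caffeine Orange sensor itself.
    """
    if concentration_mm >= high:
        return "red"     # high caffeine concentration
    if concentration_mm >= low:
        return "yellow"  # moderate caffeine concentration
    return "green"       # low caffeine concentration
```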
While there are health benefits linked to caffeine, overdosing on the substance could lead to caffeine intoxication, the authors said. Symptoms include anxiety, irregular heartbeat and insomnia; in severe cases, hallucinations, depression, or even death can result.
“Prior to this caffeine ‘traffic-light’ designator, no practically applicable and customer-friendly caffeine detection methods have been reported,” the research team wrote. They added their detection kit had several advantages over other such devices in that it is easy to construct, easy to use, safe, fast and consumer friendly.
“The whole kit requires just one syringe equipped with reverse-phase materials and several washing solutions. Its incorporation into automated system has enhanced the handling even greater,” the authors said. No organic solvent is used in the extraction process, the procedure takes less than one minute, and it can be used to extract caffeine from different beverages that are both chemically and physically complicated, they added.
Artificial Intelligence Is the Most Important Technology of the Future
Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques conducting pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today, and talk about potential future applications.
Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.
The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but which involves decisions simple enough that an AI can get somewhere with it. Big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece this data into a full wiring diagram, even though that nervous system contains just 7,000 connections between roughly 300 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it will be hopeless.
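A quick back-of-the-envelope calculation with the figures quoted above shows why the human connectome is out of reach for manual methods:

```python
# Figures quoted in the text above.
worm_connections = 7_000                  # C. elegans synaptic connections
worm_years = 10                           # roughly a decade of manual work
human_connections = 100_000_000_000_000   # 100 trillion human synapses

# Naively scaling the manual effort by connection count:
scale = human_connections / worm_connections
print(f"{scale:.1e} times more connections")              # ~1.4e+10
print(f"~{scale * worm_years:.1e} person-years, naively")  # ~1.4e+11
```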
There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, the part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.
Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Largest neuronal network simulation achieved using K computer
By exploiting the full computational power of the Japanese supercomputer, K computer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date.
The simulation was made possible by the development of advanced novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software freely available to every scientist in the world.
Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now with the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer, and it took 40 minutes to complete the simulation of 1 second of neuronal network activity in real, biological time.
Although the simulated network is huge, it represents only 1% of the neuronal network in the brain. The nerve cells were randomly connected and the simulation itself was not intended to provide new insight into the brain - the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software.
This achievement gives neuroscientists a glimpse of what will be possible in the future, with the next generation of computers, so called exa-scale computers.
“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.
Memory of 250,000 PCs
Simulating a large neuronal network and a process like learning requires large amounts of computing memory. Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction and simulators need to allow for these modifications.
More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network.
In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs.
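The figures quoted in this article are mutually consistent, as a quick check shows (the 4 GB-per-PC figure below is our assumption, not stated in the text):

```python
synapses = 10_400_000_000_000   # 10.4 trillion simulated synapses
bytes_per_synapse = 24          # memory supplied per synapse

# Synapse state alone accounts for roughly a quarter of the memory used.
synapse_tb = synapses * bytes_per_synapse / 1e12
print(f"synapse state: {synapse_tb:.1f} TB")    # 249.6 TB

# ~1 PB total corresponds to 250,000 PCs at an assumed 4 GB each.
total_pb = 250_000 * 4 / 1e6
print(f"aggregate PC memory: {total_pb} PB")    # 1.0 PB

# 40 minutes of wall-clock time per second of biological time:
slowdown = 40 * 60 // 1
print(f"slowdown factor: {slowdown}x")          # 2400x
```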
NEST
NEST is a widely used, general-purpose neuronal network simulation software available to the community as open source. The team ensured that their optimizations were of general character, independent of a particular hardware or neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems using normal laptops, computer clusters or, for the largest systems, supercomputers, and easily exchange their model descriptions.
A large, international project
Work on optimizing NEST for the K computer started in 2009 while the supercomputer was still under construction. Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and to cross-check results.”
Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out that: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi now at OIST, Gen Masumoto now at the RIKEN Advanced Center for Computing and Communication, Susanne Kunkel and Moritz Helias now at Forschungszentrum Jülich was key to the success of the endeavor.”
Paving the way for future projects
Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.”
The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich.
The researchers in Japan and Germany are planning on continuing their successful collaboration in the upcoming era of exa-scale systems.
Novel technology seen as new, more accurate way to diagnose and treat autism
Researchers at Indiana University School of Medicine and Rutgers University have developed a new quantitative screening method for diagnosing and longitudinally tracking autism in children after age 3. The studies are published as part of a special collection of papers in the open-access journal Frontiers in Neuroscience titled “Autism: The Movement Perspective.”
The technique involves tracking a person’s random movements in real time with a sophisticated computer program that produces 240 images a second and detects systematic signatures unique to each person. The traditional assessment for diagnosing autism involves primarily subjective opinions of a person’s social interaction, deficits in communication, and repetitive and restricted behaviors and interests.
The new screening tool is a collaboration between Jorge V. José, Ph.D., vice president of research at Indiana University and the James H. Rudy Distinguished Professor of Physics in the IU Bloomington College of Arts and Sciences; Elizabeth Torres, Ph.D., the principal investigator for the study and an assistant professor in the Department of Psychology in the School of Arts and Sciences at Rutgers University; and Dimitri Metaxas, Ph.D., a Distinguished Professor of computer science at Rutgers. The research was funded by a $670,000 grant from the National Science Foundation.
"This research may open doors for the autistic community by offering the option of a dynamic diagnosis at a much earlier age and possibly enabling the start of therapy sooner in the child’s development," said Dr. José, who also is a professor of cellular and integrative physiology at the Indiana University School of Medicine.
The new technique provides an earlier, more objective and more accurate diagnosis of autism. It factors the importance of changes in movements and movement sensing, thus enabling the identification of inherent capabilities in each child, rather than just highlighting impairments of the child’s movement systems. It measures tiny fluctuations in movement as the individual moves through space and can determine the exact degree to which these patterns of motion differ from more typically developing individuals, and to what degree they can turn into predictive, reliable and anticipatory movements.
Even in nonverbal children and adults with autism, the method can diagnose autism subtypes, identify gender differences and track individual progress in development and treatment. The method may also be applied to infants.
Dr. José said that once the new technology captures a person’s movements, the statistical properties of those movements, including their speed and their random fluctuations, yield a quantitative measurement that can be applied to that individual.
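One way to turn raw motion capture into this kind of statistic is to characterize the distribution of moment-to-moment speed fluctuations. The sketch below is a generic illustration of the idea, not the team’s published algorithm: it fits a Gamma distribution to the peaks of a sampled speed profile by moment matching, yielding a (shape, scale) signature per individual.

```python
def speed_signature(speeds):
    """Moment-matched Gamma (shape, scale) of local speed maxima.

    speeds -- a sampled speed profile (e.g. limb speed at 240 frames/s).
    Returns (shape, scale); a higher shape parameter corresponds to less
    noisy, more predictive movement. Generic sketch, not the published method.
    """
    # Local maxima of the speed profile ("movement peaks").
    peaks = [speeds[i] for i in range(1, len(speeds) - 1)
             if speeds[i - 1] < speeds[i] >= speeds[i + 1]]
    n = len(peaks)
    mean = sum(peaks) / n
    var = sum((p - mean) ** 2 for p in peaks) / n
    # Gamma moment matching: mean = shape * scale, var = shape * scale^2.
    shape = mean ** 2 / var
    scale = var / mean
    return shape, scale
```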
“We can estimate the cognitive abilities of people just from the variability of how they move,” Dr. José said. “This may lead to a complementary way to develop therapies for autistic children at an early age.”
In a second paper in the collection, the researchers show how the new method can be applied to interventions. They say it could change the way autistic children learn and communicate by helping them develop self-motivation, rather than relying exclusively on external cues and commands, which are the basis of behavioral therapy for children with autism.
Torres and her team created a digital set-up that works much like a Wii. Children with autism were exposed to onscreen media — such as videos of themselves, cartoons, a music video or a favorite TV show — and learned to communicate what they like with a simple motion.
"Every time the children cross a certain region in space, the media they like best goes on," Dr. Torres said. "They start out randomly exploring their surroundings. They seek where in space that interesting spot is which causes the media to play, and then they do so more systematically. Once they see a cause and effect connection, they move deliberately. The action becomes an intentional behavior."
Researchers found that all 25 children in the study, most of whom were nonverbal, spontaneously learned how to choose their favorite media. They also retained this knowledge over time even without practice.
The children independently learned that they could control their bodies to convey and procure what they want. “Children had to search for the magic spot themselves,” Dr. Torres said. “We didn’t instruct them.”
Torres believes that traditional forms of therapy, which place more emphasis on socially acceptable behavior, can actually hinder children with autism by discouraging mechanisms they have developed to cope with their sensory and motor differences, which vary greatly from individual to individual.
It is too early to tell whether the research will translate into publicly available methods for therapy and diagnosis, Dr. Torres said. But she is confident that parents of children with autism would find it easy to adopt her computer-aided technique to help their children.
'Out-of-body' virtual experience could help social anxiety
New virtual imaging technology could be used as part of therapy to help people get over social anxiety, according to new research from the University of East Anglia (UEA).
Research published today investigated for the first time whether people with social anxiety could benefit from seeing themselves interacting in social situations via video capture.
The experiment gave participants the chance to experience social interaction in the safety of a virtual environment by seeing their own life-size image projected into specially scripted real-time video scenes.
UEA researchers, led by Dr Lina Gega from UEA’s Norwich Medical School and MHCO’s Northumberland Talking Therapies, worked with Xenodu Virtual Environments to create more than 100 different social scenarios – such as using public transport, buying a drink at a bar, socialising at a party, shopping, and talking to a stranger in an art gallery.
The researchers tested whether this sort of experience could become a valuable part of Cognitive Behavioural Therapy (CBT) by including an hour-long session midway through a 12-week CBT course.
Dr Gega said: “People with social anxiety are afraid that they will draw attention to themselves and be negatively judged by others in social situations. Many will either avoid public places and social gatherings altogether, or use safety behaviours to cope – such as not making eye contact and being guarded or hyper-vigilant towards others.
“Paradoxically, this sort of behaviour draws attention to people with social anxiety and feeds into their beliefs that they don’t fit in.
“We wanted to see whether practising social situations in a virtual environment could help.”
Paul Strickland from Xenodu, the company behind the virtual environment system, said: “Our system uses video capture to project a user’s life-size image on screen so that they can watch themselves interacting with custom-scripted and digitally edited video clips.
“It isn’t a head-mounted display – which anxious people may find uncomfortable,” he added. “Instead, the user observes from an out-of-body perspective. They can then simultaneously view themselves and interact with the characters of the film.”
Dr Gega’s project focused on six young men recovering from psychosis who also had debilitating social anxiety. The participants engaged with a range of scenarios, some of which were designed to feature rude and hostile people. The virtual environments encouraged participants to practice small-talk, maintain eye contact, test beliefs that they wouldn’t know what to say, and resist safety behaviour such as looking at the floor or being hyper-vigilant.
The main benefit of using these virtual environments in therapy was that they helped participants notice and change anxious behaviours in a safe, controlled environment that could be rehearsed over and over again. Participants were found to drop safety behaviours and take greater social risks. And while realistic to an extent, the ‘fake’ feeling of the staged scenarios in itself proved to be a virtue.
“It helped the participants question their interpretation of social cues,” said Dr Gega. “For example, if they thought that one of the characters was looking at them ‘funny’ they could immediately see that there must be an alternative explanation because the scenarios were artificial.
“Another useful aspect of the system is that it can be tailored to address specific fears in social situations - for example a fear of performance, intimacy, or crowds,” she added.
“Two of the patients said that the system felt ‘weird and surreal’, so the element of having an out-of-body experience is something to study further in future – particularly because psychosis itself is defined by a distorted perception of reality.
“This research explored the feasibility and potential added value of using virtual environments as part of CBT. The next stage would be to carry out a randomised, controlled comparison of CBT with and without the virtual environment system to test whether using the system as a therapy tool leads to greater or quicker symptom improvement.”
Mr Strickland added: “I hope our technology can help make a difference to the lives of people experiencing social anxiety and other specific anxiety conditions for which controlled exposure to feared situations is part of therapy. It is particularly versatile because it doesn’t need technical expertise to set up and use. And the library of scenarios can be built on to capture different types of exposure environments needed in day-to-day clinical practice.”
‘Virtual Environments Using Video Capture for Social Phobia with Psychosis’ is published by the journal Cyberpsychology, Behaviour and Social Networking.
MACH system from MIT can coach those with social anxiety
Plenty of people out there have a serious phobia of public speaking and there are tons of other disorders, such as Asperger’s, that severely limit a person’s ability to handle even simple social interactions. M. Ehsan Hoque, a student at the MIT Media Lab, has made these subjects the focus of his latest project: MACH (My Automated Conversation coacH). At the heart of MACH is a complex system of facial and speech recognition algorithms that can detect subtle nuances in intonation while tracking smiles, head nods and eye movement. The latter is especially important since the front end of MACH is a computer generated avatar that can tell when you break eye contact and shift your attention elsewhere.
The software then provides feedback about your performance, helping to prep you for that big presentation or just guide you out of your shell. Experimental data suggests that coaching from MACH could even help you perform better in a job interview. What’s particularly exciting is that the program requires no special hardware; it’s designed to be used with a standard webcam and microphone on a laptop. So it might not be too long before we start seeing apps designed to help users through social awkwardness.
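The kind of feedback such a coach gives can be illustrated with a toy aggregation of per-frame tracker output. Everything below (the field names, the thresholds, the tips) is hypothetical; MACH’s real feature set and feedback logic are far richer.

```python
def session_feedback(frames):
    """Summarize per-frame detections into simple coaching feedback.

    frames -- list of dicts with boolean 'smiling' and 'eye_contact' keys,
              as a webcam-based tracker might emit them (hypothetical schema).
    """
    n = len(frames)
    eye_contact = sum(f["eye_contact"] for f in frames) / n
    smiling = sum(f["smiling"] for f in frames) / n
    tips = []
    if eye_contact < 0.6:   # illustrative threshold
        tips.append("Try to hold eye contact a little longer.")
    if smiling < 0.1:       # illustrative threshold
        tips.append("Smiling occasionally makes you seem more at ease.")
    return {"eye_contact": eye_contact, "smiling": smiling, "tips": tips}
```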
Bionic eye prototype unveiled by Victorian scientists and designers
A team of Australian industrial designers and scientists have unveiled their prototype for the world’s first bionic eye.
It is hoped the device, which involves a microchip implanted in the skull and a digital camera attached to a pair of glasses, will allow recipients to see the outlines of their surroundings.
If successful, the bionic eye has the potential to help over 85 per cent of those people classified as legally blind. With trials beginning next year, Monash University’s Professor Mark Armstrong says the bionic eye should give recipients a degree of extra mobility.
"There’s a camera at the front and the camera is actually very similar to an iPhone camera, so it takes live action for colour," he told PM. "And then that imagery is then distilled via a very sophisticated processor down to, let’s say, a distilled signal.
"That signal is then transmitted wirelessly from what’s called a coil, which is mounted at the back of the head and inside the brain there is an implant which consists of a series of little ceramic tiles and in each tile are microscopic electrodes which actually are embedded in the visual cortex of the brain."
Professor Armstrong says it is hoped the technology will help those who are completely blind, enabling them to navigate their way around.
"What we believe the recipient will see is a sort of a low resolution dot image, but enough… [to] see, for example, the edge of a table or the silhouette of a loved one or a step into the gutter or something like that," he said.
"So the wonderful thing, if our interpretation of this is correct - because we don’t know until the first human trial - [is] it’ll of course enable people that are blind to be reconnected with their world in a way.
"There’s a number of different settings … so you could set it to floor mapping for example and it creates a silhouette around objects on the floor so that you can see where you’re going."
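The pipeline Professor Armstrong describes (camera frame, distilled down to a low-resolution signal, driving a grid of electrodes) can be sketched in miniature. This is purely illustrative: the grid size, brightness levels and block-averaging step below are my assumptions, not details of the Monash design:

```python
import numpy as np

def to_phosphene_grid(frame, grid=(32, 32), levels=4):
    """Distil a camera frame into a coarse 'dot image' by block-averaging
    and quantising brightness -- a crude stand-in for the implant's
    camera-to-electrode signal chain."""
    h, w = frame.shape
    gh, gw = grid
    # Trim so the frame divides evenly, then average each block of
    # pixels down to one "electrode" value.
    blocks = frame[: h - h % gh, : w - w % gw]
    blocks = blocks.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantise to a handful of stimulation levels.
    return np.floor(blocks / 256 * levels).astype(int)

# Synthetic 480x640 frame: dark background with one bright "table edge".
frame = np.zeros((480, 640))
frame[200:210, :] = 255
dots = to_phosphene_grid(frame)
print(dots.shape)  # (32, 32)
```

Even this toy version shows why a silhouette or a table edge survives the distillation while fine detail does not: each dot summarises hundreds of pixels.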
A challenge for the designers has been ensuring the product is lightweight and adjustable, and that it lets users feel good about themselves.
"We want to make it comfortable and light weight and adjustable so that different sized heads and shapes will still manage it well and have those sort of nice aspects," Professor Armstrong said.
"We don’t want a Heath Robinson wire springs affair on somebody’s head.
"It needs to look sophisticated and appropriate, probably less like a prosthetic and more like a cool Bluetooth device."
The first implant is scheduled for next year, and is expected to be followed by clinical trials, further research and user feedback to the team.
The development of a bionic eye was one of the key aspirations to come out of the 2020 Summit held in 2008.
Professor Armstrong says it is “amazing” that a prototype for the technology has already been achieved.
"To be honest when I heard about that 2020 conference and all of the people there, I thought it was a little bit of a hot air fest if you know what I mean," he said.
"But I’ve been proven completely wrong.
"Some of the initiatives from that, this is a major one for sure, have been brought to fruition and it’s wonderful for Australia and equally wonderful for Monash University."
4 Hurdles to Making a Digital Human Brain
Futurists warn of a technological singularity on the not-too-distant horizon when artificial intelligence will equal and eventually surpass human intelligence. But before engineers can make a machine that truly mimics a human mind, scientists still have a long way to go in modeling the brain’s 100 billion neurons and their 100 trillion connections.
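A quick back-of-envelope calculation (the storage figures are my assumptions, not numbers from the panel) shows why those counts are daunting even before simulating any dynamics:

```python
# Rough scale estimate: how much memory would merely *listing* the
# brain's connections take? Assumes two 4-byte neuron IDs per connection,
# ignoring synaptic weights, geometry and dynamics entirely.
neurons = 100e9          # ~100 billion neurons
connections = 100e12     # ~100 trillion synaptic connections
bytes_per_conn = 8       # two 4-byte endpoint IDs (an assumption)

total_bytes = connections * bytes_per_conn
print(f"{total_bytes / 1e12:.0f} TB just to list the connections")
print(f"~{connections / neurons:.0f} connections per neuron on average")
```

Under those assumptions the bare wiring diagram alone runs to hundreds of terabytes, which is why the panelists treat a full digital brain as a long-term goal rather than an engineering exercise.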
Already in Europe, neuroscientist Henry Markram and his team established the controversial but ambitious Human Brain Project that’s seeking to build a virtual brain from scratch. Earlier this year, U.S. President Barack Obama announced that millions of federal dollars will be put toward efforts to map the brain’s activity through the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative.
Friday night (May 31), a panel of experts at the World Science Festival here in New York parsed through challenges such undertakings pose for science and technology. The following are four of the hurdles to making a digital brain discussed during the session “Architects of the Mind: A Blueprint for the Human Brain.”
1. The brain isn’t a computer
Perhaps scientists could build computers that are like brains, but brains don’t run like computers. Humans have a tendency to compare the brain to the most advanced machinery of the day, said developmental neurobiologist Douglas Fields, of the National Institute of Child Health and Human Development. Though our best analogy is a computer right now, “it’s humbling to realize the brain may not work like that at all,” Fields added.
The brain, in part, communicates through electrical impulses, but it’s a biological organ made of billions of cells, and cells are essentially just “bags of seawater,” Fields said. The brain has no wires, no digital code and no programs. Even if scientists could aptly use the analogy of computer code, they wouldn’t know what language the brain was written in.
2. Scientists need better technology
Kristen Harris, a neuroscientist at the University of Texas at Austin, slipped into a computer analogy herself, saying that researchers tend to think a single brain cell has the equivalent power of a laptop. That’s just one way of illustrating the daunting complexity of the processes at work in each individual cell.
Scientists have been able to look at the connections between individual neurons in amazing detail, but only by way of a painstaking process. They finely slice neural tissue, scan hundreds of those slices under an electron microscope, and then put those slices back together again in a computer reconstruction, explained Murray Shanahan, a professor of cognitive robotics at Imperial College London.
To repeat that process for an entire brain would take lifetimes using current technology. And to get an idea of the average brain, scientists would have to compare these trillions of connections across many different brains.
"The big challenge is giving me — the scientist — the tools to do that analysis at a faster level," Harris said. She added that physicists and engineers might be able to help scientists scale up, and she is hopeful the BRAIN initiative will spur such collaboration.
3. It’s not all about neurons
Even if newer machines could efficiently map all of the trillions of neuron connections in the brain, scientists would still have to decipher what all of those links mean for human consciousness and behavior.
What’s more, neurons only make up 15 percent of the cells in the brain, Fields said. The other cells are called glia, which is the Greek word for “glue.” It was long thought that these cells provided structural and nutritional support for the neurons, but Fields said glia might be involved in vital background communication in the brain that’s neither electric nor synaptic.
Scientists have detected changes in glial cells in patients with amyotrophic lateral sclerosis (ALS), epilepsy and Parkinson’s disease, Fields said. A 2011 study found abnormalities in glial cells known as astrocytes in the brains of depressed people who had committed suicide. Fields also pointed out the neurons in Einstein’s brain were not remarkable, but his glial cells were bigger and more complicated than those found in an average brain.
4. The brain is part of a bigger body
The brain is constantly responding to input from the rest of the body. Studying the brain in an isolated way inherently ignores the signals coming in through those pathways, warned Gregory Wheeler, a logician, philosopher and computer scientist at Carnegie Mellon University.
"Brains evolved in order to make the body move around in the world," Wheeler said. Instead of modeling the brain in a disembodied way, scientists should put it in a body — a robot body, that is.
There are already some examples of the kind of machine Wheeler has in mind. He showed the audience a video of Shrewbot, a robot created by researchers at the Bristol Robotics Lab in the United Kingdom and modeled after the Etruscan pygmy shrew. The signals coming in from the robot’s sensitive “whiskers” influence its next moves.
This beer-pouring robot is programmed to anticipate human actions
A robot in Cornell’s Personal Robotics Lab has learned to foresee human action in order to step in and offer a helping hand, or more precisely, roll in and offer a helping claw.
Understanding when and where to pour a beer or knowing when to offer assistance opening a refrigerator door can be difficult for a robot because of the many variables it encounters while assessing the situation. Well, a team from Cornell has created a solution.
Gazing intently with a Microsoft Kinect 3-D camera and using a database of 3-D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future – such as eating, drinking, cleaning, putting away – and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
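The anticipation loop described above – recognise the activity so far, score possible continuations, pick the most probable, then re-score as new observations arrive – can be caricatured with symbolic labels. The activity names and the frequency-counting scorer here are invented for illustration; the Cornell system works over 3-D video features and a learned model, not a lookup table:

```python
# Toy anticipation step: rank continuations of an observed prefix of
# sub-activities by how often each followed that prefix in training data.
from collections import Counter

# Hypothetical "vocabulary" of sub-activity sequences seen in training.
training = [
    ["reach_cup", "pour", "drink"],
    ["reach_cup", "pour", "drink"],
    ["reach_cup", "move_cup", "clean"],
    ["open_fridge", "reach_cup", "pour"],
]

def predict_next(observed):
    """Return possible next sub-activities, most frequent first."""
    continuations = Counter()
    for seq in training:
        for i in range(len(seq) - len(observed)):
            if seq[i:i + len(observed)] == observed:
                continuations[seq[i + len(observed)]] += 1
    return continuations.most_common()

print(predict_next(["reach_cup", "pour"]))  # "drink" ranks first
```

Re-running the predictor each time a new sub-activity is observed mirrors the constant updating the article describes, and it also hints at why accuracy drops as the robot looks further into the future: longer horizons compound the uncertainty at every step.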
"We extract the general principles of how people behave," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it." The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.
Saxena will join Cornell graduate student Hema S. Koppula as they present their research at the International Conference on Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference June 24-28 in Berlin, Germany.
In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent correct for three seconds and 57 percent correct for 10 seconds.
"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."