Neuroscience

Articles and news from the latest research reports.

Posts tagged technology


The Brain Activity Map

Researchers explain the goals and structure of a new brain-mapping project

A proposed effort to map brain activity on a large scale, expected to be announced by the White House later this month, could help neuroscientists understand the origins of cognition, perception, and other phenomena. These brain activities haven’t been well understood to date, in part because they arise from the interaction of large sets of neurons whose coördinated efforts scientists cannot currently track.

“There are all kinds of remarkable tools to study the microscopic world of individual cells,” says John Donoghue, a neuroscientist at Brown and a participant in the project. “And on the macroscopic end, we have tools like MRI and EEG that tell us about the function of the brain and its structure, but at a low resolution. There is a gap in the middle. We need to record many, many neurons exactly as they operate with temporal precision and in large areas,” he says.

An article published Thursday in Science online expands the project’s already ambitious goals beyond just recording the activity of all individual neurons in a brain circuit simultaneously. Researchers should also find ways to manipulate the neurons within those circuits and understand circuit function through new methods of data analysis and modeling, the authors write.

Understanding how neurons communicate with one another across large regions of the brain will be critical to understanding how the brain works, according to participants in the project. Other efforts to map out the physical connections in the brain are already under way (see “TR10: Connectomics” and “Mapping the Brain on a Massive Scale”), but those projects examine static brains or capture only a rough view of how brain regions communicate. The new project will probably start by applying its novel, as-yet-undeveloped technologies to simpler brains, such as those of flies, and will probably take decades to achieve its goals.

Numerous leaders from the fields of neuroscience, nanotechnology, and synthetic biology are expected to collaborate on the effort. “We need something large scale to try to build tools for the future,” says Rafael Yuste, a neurobiologist at Columbia University and a member of the project. “We view ourselves as tool builders. I think we could provide to the scientific community the methods that could be used for the next stage in neuroscience.”

In addition to deepening fundamental understanding of the brain, the project may also lead to new treatments for psychiatric and neurological disorders. “If we truly understand how things like thoughts, cognition, and other features of the brain emerge, then we should have a better understanding of mood disorders, Parkinson’s, epilepsy and other conditions that are thought to arise from brain-wide circuitry problems,” says Donoghue.

Details about which technology ideas will be given the green light and how much money will support their development are expected to be revealed in the White House announcement that is still to come. The project is likely to be supported by the National Institutes of Health, the National Science Foundation, the Defense Advanced Research Projects Agency, the Office of Science and Technology Policy, and private foundations, participants say. It’s not yet clear how much money will be needed or which technologies will be given priority.

Whichever particular technologies emerge, nanotechnology is likely to be involved, in part because of the need for smaller and faster sensors to record neuronal activity across the brain. Existing sensors can record the electrical activity of neurons, but these chips can typically monitor fewer than 100 neurons at a time and can’t record activity from neighboring neurons, which would be necessary to understand how neurons interact with one another. Paul Weiss, director of the California NanoSystems Institute at the University of California, Los Angeles, a participant in the project, says that nanofabrication techniques could address this problem, with smaller chips bearing smaller electrical and even chemical probes. “We’ve had over a decade a fairly substantial investment in science and technology to develop the capability … to control how what we make interacts with the chemical, physical, and biological worlds,” he says.

Novel optical techniques could also aid the mapping project. Currently, many research groups use calcium-sensitive fluorescent dyes to study neuron firing, but Yuste wants to develop an optical technique that uses voltage-sensitive fluorescent dyes for a faster readout. “Neurons communicate using voltage,” he says. “We would like to develop voltage imaging so we will be able to measure neuronal activity directly.”

While many things about the project are uncertain, one thing is clear—there is going to be a lot of data to store, share, and analyze. “We have just begun to scratch the surface of how you deal with data in high-dimensional spaces,” says Terry Sejnowski, a computational neuroscientist at the Salk Institute. “If you are talking about one million neurons, no one can even imagine what that looks like–it is way beyond what we can perceive in three dimensions.”
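To make Sejnowski’s point concrete, here is a minimal, purely illustrative sketch (not from the project) of one standard way high-dimensional neural recordings are summarized: principal component analysis via SVD, on toy data with neuron counts scaled far down from one million.

```python
import numpy as np

# Illustrative sketch (not from the article): why million-neuron recordings are
# hard to visualize, and how linear dimensionality reduction (PCA via SVD)
# compresses them. All numbers here are toy values.
rng = np.random.default_rng(0)

n_neurons, n_timepoints = 1000, 500   # stand-ins for 1e6 neurons, hours of data
latent_dim = 3                        # hypothetical low-dimensional dynamics

# Toy data: activity driven by a few shared latent signals plus noise.
latents = rng.standard_normal((latent_dim, n_timepoints))
mixing = rng.standard_normal((n_neurons, latent_dim))
activity = mixing @ latents + 0.1 * rng.standard_normal((n_neurons, n_timepoints))

# PCA: center each neuron, then take the top singular vectors as components.
centered = activity - activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
projected = U[:, :latent_dim].T @ centered   # 3 x n_timepoints summary

explained = (S[:latent_dim] ** 2).sum() / (S ** 2).sum()
print(projected.shape, round(float(explained), 2))
```

Here the 1,000-neuron recording collapses to three summary traces that capture almost all the variance, which is exactly what one hopes for when the underlying circuit dynamics are low-dimensional.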

The Science article also sketches out a rough timeline. Within five years, it should be possible to monitor tens of thousands of neurons; within 15 years, one million. A fly’s brain has about 100,000 neurons, a mouse’s about 75 million, and a human’s about 85 billion. “With one million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse,” the authors write.
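For a sense of what those milestones imply for storage, here is a back-of-envelope calculation. The sampling rate and sample size are my own assumptions, not figures from the article.

```python
# Back-of-envelope sketch (assumed figures, not from the Science article):
# raw data volume if each neuron's activity is sampled at 1 kHz with
# 2 bytes per sample.
SAMPLE_RATE_HZ = 1_000       # assumed sampling rate per neuron
BYTES_PER_SAMPLE = 2         # assumed 16-bit samples

def bytes_per_second(n_neurons: int) -> int:
    """Raw (uncompressed) data rate for n_neurons recorded simultaneously."""
    return n_neurons * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE

fly = bytes_per_second(100_000)        # a whole fly brain
million = bytes_per_second(1_000_000)  # the 15-year milestone

print(fly / 1e6, "MB/s")      # 200.0 MB/s
print(million / 1e9, "GB/s")  # 2.0 GB/s
```

Even under these modest assumptions, a million-neuron recording produces gigabytes every second, which is why Sejnowski’s data-handling concern is more than an afterthought.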

Filed under brain brain activity Brain Activity Map brain-mapping neuroimaging technology neuroscience science


Clever Battery Completes Stretchable Electronics Package

Northwestern University’s Yonggang Huang and the University of Illinois’ John A. Rogers are the first to demonstrate a stretchable lithium-ion battery — a flexible device capable of powering their innovative stretchable electronics.

No longer needing to be connected by a cord to an electrical outlet, the stretchable electronic devices now could be used anywhere, including inside the human body. The implantable electronics could monitor anything from brain waves to heart activity, succeeding where flat, rigid batteries would fail.

Huang and Rogers have demonstrated a battery that continues to work — powering a commercial light-emitting diode (LED) — even when stretched, folded, twisted and mounted on a human elbow. The battery can work for eight to nine hours before it needs recharging, which can be done wirelessly.

The new battery enables true integration of electronics and power into a small, stretchable package. Details are published in the online journal Nature Communications.

“We start with a lot of battery components side by side in a very small space, and we connect them with tightly packed, long wavy lines,” said Huang, a corresponding author of the paper. “These wires provide the flexibility. When we stretch the battery, the wavy interconnecting lines unfurl, much like yarn unspooling. And we can stretch the device a great deal and still have a working battery.”
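Huang’s yarn-unspooling analogy can be made quantitative with a toy geometric model (my own illustration, not the paper’s analysis): a serpentine wire of arc length L spanning a straight distance d can stretch by roughly L/d before it is pulled taut.

```python
import numpy as np

# Toy geometric model of the "unspooling" idea (not the actual interconnect
# design from the paper): compute how far one sinusoidal period of wire can
# stretch, as its arc length divided by its resting span.
def serpentine_stretchability(amplitude: float, wavelength: float) -> float:
    """Max stretch ratio of one sinusoidal period y = A*sin(2*pi*x/w)."""
    x = np.linspace(0.0, wavelength, 10_001)
    y = amplitude * np.sin(2 * np.pi * x / wavelength)
    # Numerical arc length: sum of segment lengths along the curve.
    arc_length = np.sum(np.hypot(np.diff(x), np.diff(y)))
    return float(arc_length / wavelength)

# Tighter, taller waves pack more wire into the same span -> more stretch.
print(round(serpentine_stretchability(amplitude=1.0, wavelength=1.0), 2))
print(round(serpentine_stretchability(amplitude=3.0, wavelength=1.0), 2))
```

The model reproduces the qualitative point in the quote: packing more wavy wire between two points buys proportionally more stretch before the interconnect goes taut.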

Huang led the portion of the research focused on theory, design and modeling. He is the Joseph Cummings Professor of Civil and Environmental Engineering and Mechanical Engineering at Northwestern’s McCormick School of Engineering and Applied Science.

The power and voltage of the stretchable battery are similar to a conventional lithium-ion battery of the same size, but the flexible battery can stretch up to 300 percent of its original size and still function.

Filed under battery stretchable battery BCI implantable electronics implants technology science



Steve Mann: My “Augmediated” Life - What I’ve learned from 35 years of wearing computerized eyewear

Back in 2004, I was awakened early one morning by a loud clatter. I ran outside, only to discover that a car had smashed into the corner of my house. As I went to speak with the driver, he threw the car into reverse and sped off, striking me and running over my right foot as I fell to the ground. When his car hit me, I was wearing a computerized-vision system I had invented to give me a better view of the world. The impact and fall injured my leg and also broke my wearable computing system, which normally overwrites its memory buffers and doesn’t permanently record images. But as a result of the damage, it retained pictures of the car’s license plate and driver, who was later identified and arrested thanks to this record of the incident.

Was it blind luck (pardon the expression) that I was wearing this vision-enhancing system at the time of the accident? Not at all: I have been designing, building, and wearing some form of this gear for more than 35 years. I have found these systems to be enormously empowering. For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me.

I’ve built dozens of these systems, which improve my vision in multiple ways. Some versions can even take in other spectral bands. If the equipment includes a camera that is sensitive to long-wavelength infrared, for example, I can detect subtle heat signatures, allowing me to see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off. Other versions enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in languages I don’t know.

Believe me, after you’ve used such eyewear for a while, you don’t want to give up all it offers. Wearing it, however, comes with a price. For one, it marks me as a nerd. For another, the early prototypes were hard to take on and off. These versions had an aluminum frame that wrapped tightly around the wearer’s head, requiring special tools to remove.


Filed under vision visual system computerized eyewear augmented reality technology science


Researchers build robotic bat wing

Researchers at Brown University have developed a robotic bat wing that is providing valuable new information about the dynamics of flapping flight in real bats.

The robot, which mimics the wing shape and motion of the lesser dog-faced fruit bat, is designed to flap while attached to a force transducer in a wind tunnel. As the lifelike wing flaps, the force transducer records the aerodynamic forces generated by the moving wing. By measuring the power output of the three servo motors that control the robot’s seven movable joints, researchers can evaluate the energy required to execute wing movements.

Testing showed the robot can match the basic flight parameters of bats, producing enough thrust to overcome drag and enough lift to carry the weight of the model species.

A paper describing the robot and presenting results from preliminary experiments is published in the journal Bioinspiration and Biomimetics. The work was done in the labs of Brown professors Kenneth Breuer and Sharon Swartz, who are the senior authors on the paper. Breuer, an engineer, and Swartz, a biologist, have studied bat flight and anatomy for years.

The faux flapper generates data that could never be collected directly from live animals, said Joseph Bahlman, a graduate student at Brown who led the project. Bats can’t fly when connected to instruments that record aerodynamic forces directly, so that isn’t an option — and bats don’t take requests.

“We can’t ask a bat to flap at a frequency of eight hertz then raise it to nine hertz so we can see what difference that makes,” Bahlman said. “They don’t really cooperate that way.”

But the model does exactly what the researchers want it to do. They can control each of its movement capabilities — kinematic parameters — individually. That way they can adjust one parameter while keeping the rest constant to isolate the effects.

“We can answer questions like, ‘Does increasing wing beat frequency improve lift and what’s the energetic cost of doing that?’” Bahlman said. “We can directly measure the relationship between these kinematic parameters, aerodynamic forces, and energetics.”
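A sweep like the one Bahlman describes can be sketched with a toy quasi-steady aerodynamic model (an illustrative stand-in, not the group’s actual analysis): wingtip speed scales with flap frequency, and quasi-steady lift scales with the square of speed, so mean lift should grow roughly with frequency squared.

```python
import math

# Toy flap-frequency sweep (my own illustrative model with assumed constants,
# not the lab's data): quasi-steady lift for a sinusoidally flapping wing.
AIR_DENSITY = 1.2       # kg/m^3
WING_AREA = 0.01        # m^2, assumed
LIFT_COEFF = 1.0        # assumed effective lift coefficient
STROKE_AMPLITUDE = 0.1  # m, assumed tip excursion

def mean_lift(freq_hz: float) -> float:
    """Quasi-steady mean lift (N) for a sinusoidal flap at freq_hz."""
    # Mean of v^2 for v = 2*pi*f*A*cos(2*pi*f*t) is (2*pi*f*A)^2 / 2.
    v_sq_mean = (2 * math.pi * freq_hz * STROKE_AMPLITUDE) ** 2 / 2
    return 0.5 * AIR_DENSITY * WING_AREA * LIFT_COEFF * v_sq_mean

# Sweep one kinematic parameter while holding the rest constant,
# as the robot allows and live bats do not.
for f in (6, 7, 8, 9, 10):
    print(f, "Hz ->", round(mean_lift(f), 3), "N")
```

The point of the sketch is the experimental logic, not the numbers: varying one kinematic parameter at a time turns the lift-versus-frequency relationship into something directly measurable.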

Detailed experimental results from the robot will be described in future research papers, but this first paper includes some preliminary results from a few case studies.

Filed under robobat bats robotics robots wing movements neuroscience technology science


Lessons from cockroaches could inform robotics

Running cockroaches start to recover from being shoved sideways before their dawdling nervous system kicks in to tell their legs what to do, researchers have found. These new insights on how biological systems stabilize could one day help engineers design steadier robots and improve doctors’ understanding of human gait abnormalities.

In experiments, the roaches were able to maintain their footing mechanically—using their momentum and the spring-like architecture of their legs, rather than neurologically, relying on impulses sent from their central nervous system to their muscles.

"The response time we observed is more than three times longer than you’d expect," said Shai Revzen, an assistant professor of electrical engineering and computer science, as well as ecology and evolutionary biology, at the University of Michigan. Revzen is the lead author of a paper on the findings published online in Biological Cybernetics. It will appear in a forthcoming print edition.

"What we see is that the animals’ nervous system is working at a substantial delay," he said. "It could potentially act a lot sooner, within about a thirtieth of a second, but instead, it kicks in after about a step and a half or two steps—about a tenth of a second. For some reason, the nervous system is waiting and seeing how it shapes out."

Revzen said the new findings might imply that the biological brain, at least in cockroaches, adjusts the gait only at whole-step intervals rather than at any point in a step. Periodic, rather than continuous, feedback systems might lead to more stable (not to mention energy-efficient) walking robots—whether they travel on two feet or six.
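The periodic-versus-continuous distinction can be illustrated with a toy simulation (my own construction, not from the paper): a perturbation decays a little every millisecond through passive mechanics, while a corrective “neural” term fires either every time step or only once per stride.

```python
# Toy model of periodic vs. continuous feedback (illustrative only, with
# assumed constants): passive mechanics damp a perturbation every step,
# and neural feedback applies an extra correction at some interval.
DT = 0.001             # s per simulation step
STRIDE = 0.100         # s per stride (~10 Hz stepping, assumed)
PASSIVE_DECAY = 0.999  # per-step mechanical damping of the perturbation
GAIN = 0.3             # fractional correction when feedback fires

def recovery_time(perturbation: float, feedback_period_s: float) -> float:
    """Seconds until the perturbation falls below 5% of its initial size."""
    x, t, step = perturbation, 0.0, 0
    steps_per_update = max(1, round(feedback_period_s / DT))
    while abs(x) > 0.05 * perturbation:
        x *= PASSIVE_DECAY                # passive mechanics, every step
        step += 1
        if step % steps_per_update == 0:  # neural correction fires
            x *= (1 - GAIN)
        t += DT
    return t

print(round(recovery_time(1.0, DT), 3))      # continuous feedback
print(round(recovery_time(1.0, STRIDE), 3))  # once-per-stride feedback
```

Both regimes recover, but the once-per-stride controller takes on the order of a stride or two to finish the job, which is the kind of delayed, step-quantized response the cockroach experiments reported.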

Robot makers often look to nature for inspiration. As animals move through the world, they have to respond to unexpected disturbances like rocky, uneven ground or damaged limbs. Revzen and his team believe that patterns in how they move as they adjust could give away how their machinery and neurology work together.

"The fundamental question is, ‘What can you do with a mechanical suspension versus one that requires electronic feedback?’" Revzen said. "The animals obviously have much better mechanical designs than anything we know how to build. But if we could learn how they do it, we might be able to reproduce it."

Filed under robots robotics cockroaches gait disorders neuroscience technology science


Nano-machines for “Bionic Proteins”

Physicists at the University of Vienna, together with researchers from the University of Natural Resources and Life Sciences Vienna, have developed nano-machines that recreate principal activities of proteins. They present the first versatile and modular example of a fully artificial protein-mimetic model system, made possible by the Vienna Scientific Cluster (VSC), a high-performance computing infrastructure. These “bionic proteins” could play an important role in pharmaceutical research. The results have now been published in the renowned journal Physical Review Letters.

Proteins are the fundamental building blocks of all living organisms we currently know. Because of the large number and complexity of the bio-molecular processes they carry out, proteins are often referred to as “molecular machines”. Take, for instance, the proteins in your muscles: at each contraction stimulated by the brain, countless proteins change their structures to create the collective motion of the contraction. This extraordinary process is performed by molecules only about a nanometer, a billionth of a meter, in size. Muscle contraction is just one of the numerous activities of proteins: there are proteins that transport cargo in the cells, proteins that construct other proteins, and even cages in which proteins that “misbehave” can be trapped for correction; the list goes on and on. “Imitating these astonishing bio-mechanical properties of proteins and transferring them to a fully artificial system is our long-term objective”, says Ivan Coluzza from the Faculty of Physics of the University of Vienna, who works on this project together with colleagues at the University of Natural Resources and Life Sciences Vienna.

Simulations thanks to the Vienna Scientific Cluster (VSC)
In a recent paper in Physical Review Letters, the team presented the first example of a fully artificial bio-mimetic model system capable of spontaneously self-knotting into a target structure. Using computer simulations, they reverse engineered proteins by focusing on the key elements that give them the ability to execute the program written in the genetic code. The computationally intensive simulations were made possible by access to the Vienna Scientific Cluster (VSC), a high-performance computing infrastructure operated jointly by the University of Vienna, the Vienna University of Technology and the University of Natural Resources and Life Sciences Vienna.

Artificial proteins in the laboratory
The team now works on realizing such artificial proteins in the laboratory using specially functionalized nanoparticles. The particles will then be connected into chains following the sequence determined by the computer simulations, such that the artificial proteins fold into the desired shapes. Such knotted nanostructures could be used as new, stable drug-delivery vehicles and as enzyme-like, but more stable, catalysts.
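The idea that a bead sequence can encode a target fold is nicely captured by the classic HP lattice model, used here as a small illustrative stand-in (not the group’s actual coarse-grained model): enumerate all self-avoiding walks of a short chain on a 2D grid and score hydrophobic contacts.

```python
from itertools import product

# Standard 2D HP lattice model (illustrative stand-in, not the Vienna group's
# model): a chain of hydrophobic (H) and polar (P) beads is folded by brute
# force, scoring -1 for each non-bonded pair of H beads on adjacent sites.
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def best_fold(sequence: str):
    """Return (min_energy, walk) over all self-avoiding conformations."""
    n = len(sequence)
    best = (0, None)
    for walk in product("UDLR", repeat=n - 1):
        pos, coords = (0, 0), [(0, 0)]
        ok = True
        for m in walk:
            dx, dy = MOVES[m]
            pos = (pos[0] + dx, pos[1] + dy)
            if pos in coords:  # self-intersection: not a valid fold
                ok = False
                break
            coords.append(pos)
        if not ok:
            continue
        energy = 0
        for i in range(n):
            for j in range(i + 2, n):  # skip bonded neighbors
                if sequence[i] == sequence[j] == "H":
                    dist = abs(coords[i][0] - coords[j][0]) + abs(coords[i][1] - coords[j][1])
                    if dist == 1:
                        energy -= 1
        if energy < best[0]:
            best = (energy, walk)
    return best

energy, walk = best_fold("HPHPHH")  # hypothetical 6-bead sequence
print(energy)  # -2: the chain folds to bring both eligible H-H pairs into contact
```

Even this tiny example shows the design principle the team exploits: choosing the sequence of bead types selects which compact shape minimizes the energy, so the fold is programmed into the chain itself.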

Nano-machines for “Bionic Proteins”

Physicists at the University of Vienna, together with researchers from the University of Natural Resources and Life Sciences Vienna, have developed nano-machines that recreate principal activities of proteins. Thanks to the Vienna Scientific Cluster (VSC), a high-performance computing infrastructure, they present the first versatile and modular example of a fully artificial protein-mimetic model system. These “bionic proteins” could play an important role in pharmaceutical innovation. The results have now been published in the renowned journal Physical Review Letters.

Proteins are the fundamental building blocks of all living organisms we currently know. Because of the large number and complexity of the bio-molecular processes they are capable of, proteins are often referred to as “molecular machines”. Take, for instance, the proteins in your muscles: at each contraction stimulated by the brain, an uncountable number of proteins change their structures to create the collective motion of the contraction. This extraordinary process is performed by molecules only about a nanometer, a billionth of a meter, in size. Muscle contraction is just one of the numerous activities of proteins: there are proteins that transport cargo in the cells, proteins that construct other proteins, and there are even cages in which proteins that “misbehave” can be trapped for correction; the list goes on and on. “Imitating these astonishing bio-mechanical properties of proteins and transferring them to a fully artificial system is our long-term objective”, says Ivan Coluzza from the Faculty of Physics of the University of Vienna, who works on this project together with colleagues at the University of Natural Resources and Life Sciences Vienna.

Simulations thanks to Vienna Scientific Cluster (VSC)
In a recent paper in Physical Review Letters, the team presented the first example of a fully artificial bio-mimetic model system capable of spontaneously self-knotting into a target structure. Using computer simulations, they reverse engineered proteins by focusing on the key elements that give them the ability to execute the program written in the genetic code. The computationally very intensive simulations have been made possible by access to the powerful Vienna Scientific Cluster (VSC), a high performance computing infrastructure operated jointly by the University of Vienna, the Vienna University of Technology and the University of Natural Resources and Life Sciences Vienna.
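
The paper's actual model is far more sophisticated, but the flavor of such folding simulations can be sketched with a toy Monte Carlo run: a chain of beads on a 2D lattice whose "sequence" (a pattern of sticky beads) programs it to collapse into a compact, contact-rich shape. Everything here (the energy function, move set, and parameters) is illustrative, not the authors' method.

```python
import math
import random

def energy(coords, sticky):
    # Count non-bonded sticky contacts: beads adjacent on the lattice
    # but not consecutive along the chain contribute -1 each.
    pos = {c: i for i, c in enumerate(coords)}
    e = 0
    for i, (x, y) in enumerate(coords):
        for nb in ((x + 1, y), (x, y + 1)):   # check each pair once
            j = pos.get(nb)
            if j is not None and abs(i - j) > 1 and sticky[i] and sticky[j]:
                e -= 1
    return e

def fold(n=12, steps=20000, beta=2.0, seed=0):
    rng = random.Random(seed)
    coords = [(i, 0) for i in range(n)]        # start fully extended
    sticky = [i % 2 == 0 for i in range(n)]    # "sequence": alternate sticky beads
    e = energy(coords, sticky)
    for _ in range(steps):
        i = rng.randrange(n)
        x, y = coords[i]
        new = (x + rng.choice((-1, 0, 1)), y + rng.choice((-1, 0, 1)))
        trial = list(coords)
        trial[i] = new
        # Keep the chain self-avoiding and all bonds at unit length.
        ok = len(set(trial)) == n
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                dx = abs(trial[j][0] - new[0])
                dy = abs(trial[j][1] - new[1])
                ok = ok and dx + dy == 1
        if not ok:
            continue
        e_new = energy(trial, sticky)
        # Metropolis acceptance: always take downhill moves, sometimes uphill.
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            coords, e = trial, e_new
    return e

print(fold())  # non-positive: each sticky contact formed contributes -1
```

The real work inverts this logic: instead of fixing a sequence and watching it fold, the sequence itself is optimized until the chain reliably folds, and even knots, into a chosen target structure.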

Artificial proteins in the laboratory
The team now works on realizing such artificial proteins in the laboratory using specially functionalized nanoparticles. The particles will then be connected into chains following the sequence determined by the computer simulations, such that the artificial proteins fold into the desired shapes. Such knotted nanostructures could be used as new stable drug delivery vehicles and as enzyme-like, but more stable, catalysts.

Filed under artificial proteins AI bionics robotics technology neuroscience science

117 notes

Cyborg Possibilities – The Arms and Legs

The most recent advancements in bionic arms seem to be embodied in the BeBionic prosthetic arm. It can detect signals from the nerves in whatever portion of the arm remains and uses those signals to drive the prosthetic’s functions. Essentially, operation ought to work much like the user’s original arm did: the person thinks about moving their arm in a certain way, and the arm responds.
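
How such signal-driven control can work may be sketched as follows. This is a hypothetical toy, not BeBionic's actual firmware: it uses the approach common in commercial prostheses of rectifying a muscle signal, smoothing it into an envelope, and thresholding the envelope into open/close commands.

```python
# Toy myoelectric control sketch (assumed, not the device's real algorithm):
# raw EMG-style samples are rectified and averaged over a sliding window,
# and the resulting envelope is mapped to a grip command.

def emg_envelope(samples, window=5):
    rectified = [abs(s) for s in samples]
    return [
        sum(rectified[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(rectified))
    ]

def command(envelope_value, close_thresh=0.6, open_thresh=0.3):
    # Two thresholds leave a dead band so the hand doesn't chatter
    # between states on a noisy signal.
    if envelope_value > close_thresh:
        return "close"   # strong contraction -> grip
    if envelope_value < open_thresh:
        return "open"    # relaxed muscle -> release
    return "hold"

signal = [0.0, 0.1, -0.2, 0.9, -1.0, 0.8, -0.9, 1.0, 0.1, 0.0]
env = emg_envelope(signal)
print(command(env[7]))   # sustained contraction -> "close"
```

Real controllers add calibration per user, multiple electrode channels, and pattern recognition to select among grips, but the envelope-and-threshold core is the same idea.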

Despite looking cooler, the BeBionic hand is still a long way from a human hand. Yet the improvements are impressive. Grip strength has improved from about 17 pounds to about 31, and it can hold about 100 pounds of weight, up from about 70. It also comes in a range of designs. The hand isn’t exorbitantly expensive, but at $25,000 to $35,000 it isn’t exactly cheap either. At that price range, concerns that future human-enhancement technology will be available only to the well-to-do seem warranted.

Read more

Filed under bionics robotics prosthetic limbs prosthetics technology science

40 notes

Million dollar B.R.A.I.N. Prize applications open until March 15, 2013

If you have an exciting advancement in neurotechnology, a million dollar award could help take your product from great idea to world-changing application. Israel Brain Technologies (IBT), a non-profit organization dedicated to the development of brain-related science, is now seeking applicants for its $1,000,000 Global B.R.A.I.N. Prize competition. Applications will be accepted until March 15, 2013.

The Global B.R.A.I.N. (Breakthrough Research And Innovation in Neurotechnology) Prize is an international award, announced in 2011, to be granted to an individual, group, or organization for a recent breakthrough in the field of brain technology.

The goal of the prize is best described by Dr. Rafi Gidron, Founder and current Chairman of IBT: “The B.R.A.I.N. Prize will bring together the best minds across geographic boundaries to create the next generation of brain-related innovation, from Brain Machine Interface to Brain Inspired Computing to urgently-needed solutions for brain disease. It’s a global brain-gain. Our aim is to open minds… quite literally.”

The international judging committee for the Global B.R.A.I.N. Prize is composed of distinguished leaders in neuroscience, technology and business, including three Nobel laureates: Profs. Eric Kandel, Daniel Kahneman and Bert Sakmann. IBT is a non-profit organization inspired by the vision of Israeli President Shimon Peres to foster the next global breakthrough in neurotechnology.

Filed under brain B.R.A.I.N. Prize neurotechnology neuroscience technology science

215 notes

New 3D printing technique could speed up progress towards creation of artificial organs

In the more immediate future, the technique could be used to generate biopsy-like tissue samples for drug testing. It relies on an adjustable “microvalve” to build up layers of human embryonic stem cells (hESCs).

Altering the nozzle diameter precisely controls the rate at which cells are dispensed.

Lead scientist Dr Will Shu, from Heriot-Watt University in Edinburgh, said: “We found that the valve-based printing is gentle enough to maintain high stem cell viability, accurate enough to produce spheroids of uniform size, and most importantly, the printed hESCs maintained their pluripotency - the ability to differentiate into any other cell type.”

Embryonic stem cells, which originate from early stage embryos, are blank slates with the potential to become any type of tissue in the body.

The research is reported in the journal Biofabrication.

In the long term, the new printing technique could pave the way for hESCs being incorporated into transplant-ready laboratory-made organs and tissues, said the researchers.

The 3D structures will also enable scientists to create more accurate human tissue models for drug testing.

Cloning technology can produce embryonic stem cells, or cells with ESC properties, containing a patient’s own genetic programming.

Artificial tissue and organs made from such cells could be implanted into the patient from which they are derived without triggering a dangerous immune response.

Jason King, business development manager of stem cell biotech company Roslin Cellab, which took part in the research, said: “Normally laboratory grown cells grow in 2D but some cell types have been printed in 3D. However, up to now, human stem cell cultures have been too sensitive to manipulate in this way.

“This is a scientific development which we hope and believe will have immensely valuable long-term implications for reliable, animal-free drug testing and, in the longer term, to provide organs for transplant on demand, without the need for donation and without the problems of immune suppression and potential organ rejection.”
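
As a rough intuition for why nozzle diameter gives such fine control over dispensing rate (an idealization for textbook viscous flow, not the study's calibration): for pressure-driven flow through a cylindrical nozzle, the Hagen-Poiseuille relation predicts a rate proportional to the fourth power of the diameter, so small diameter changes translate into large, precise rate changes.

```python
import math

# Illustrative only: Hagen-Poiseuille volumetric flow rate
# Q = pi * dP * d^4 / (128 * mu * L) for an idealized Newtonian fluid.
# All parameter values below are made-up round numbers.

def flow_rate(diameter_m, pressure_pa, viscosity_pa_s, length_m):
    return math.pi * pressure_pa * diameter_m ** 4 / (128 * viscosity_pa_s * length_m)

q_small = flow_rate(100e-6, 5e3, 1e-2, 1e-2)   # 100-micron nozzle
q_large = flow_rate(200e-6, 5e3, 1e-2, 1e-2)   # doubling the diameter...
print(q_large / q_small)                        # ...multiplies the rate by 2**4 = 16
```

A cell suspension is not an ideal Newtonian fluid, which is partly why gentle valve-based control matters, but the fourth-power scaling shows how sensitive the dispensing rate is to geometry.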

Filed under stem cells embryonic stem cells artificial tissue regenerative medicine health technology science

190 notes

AR Goggles Restore Depth Perception To People Blind in One Eye
People who’ve lost sight in one eye can still see with the other, but they lack binocular depth perception.

Some of them could benefit from a pair of augmented reality glasses being built at the University of Yamanashi in Japan that artificially introduce a feeling of depth in the healthy eye.

The group, led by Xiaoyang Mao, started out with a pair of commercially available 3D glasses, the daintily named Wrap 920AR, manufactured by Vuzix Corporation. (Vuzix is also building another AR headset, called the M100, that on first sight looks like quite the competitor to Google Glass.)

The Wrap 920AR looks like a pair of regular tinted glasses, but with small cameras poking out of each lens. The lenses are transparent, and the device, Vuzix explains on its website, both captures and projects images, giving the wearer front-row seats to a 2D or 3D AR show transmitted from a computer.

The group at Yamanashi has created software that makes use of the twin cameras. When a person puts the glasses on, each camera scopes out the scene that each eye would see. The images are funneled into software on a computer, which combines the perspectives of both cameras and creates a “defocus” effect: some objects stay in focus while others are blurred, producing a feeling of depth. That version of the scene is then projected to the wearer’s single healthy eye.

The system isn’t quite ready to be taken for a spin around town yet. It’s still bulky, the creators write, and needs a computer by its side, creating and projecting images in real time. But the creators note that such computing power is likely to be found on mobile devices soon, and when it is, they’ll be ready.
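
The depth-dependent blur itself is conceptually simple. The sketch below is an assumed illustration, not the Yamanashi group's code: given a per-pixel depth map (which in the real system would be recovered by matching the two camera views), pixels far from a chosen focal plane are blurred, so the single healthy eye receives a monocular depth cue.

```python
# Toy depth-dependent "defocus" filter over a grayscale image stored as
# nested lists. Blur radius grows with distance from the plane in focus.

def box_blur(img, i, j, r):
    h, w = len(img), len(img[0])
    vals = [img[y][x]
            for y in range(max(0, i - r), min(h, i + r + 1))
            for x in range(max(0, j - r), min(w, j + r + 1))]
    return sum(vals) / len(vals)

def defocus(img, depth, focal_depth, scale=2):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # pixels at the focal depth pass through sharp; others blur
            r = int(abs(depth[i][j] - focal_depth) * scale)
            out[i][j] = img[i][j] if r == 0 else box_blur(img, i, j, r)
    return out

img = [[0, 0, 0, 0], [0, 255, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]
near = [[1.0] * 4 for _ in range(4)]      # uniform depth at the focal plane
sharp = defocus(img, near, focal_depth=1.0)   # passes through unchanged
```

The production system must do this per frame in real time, which is exactly the computing load the creators say still ties the glasses to a desktop computer.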

Filed under blindness depth perception Wrap 920AR goggles technology science
