Neuroscience

Articles and news from the latest research reports.


Centers throughout the brain work together to make reading possible
A combination of brain scans and reading tests has revealed that several regions in the brain are responsible for allowing humans to read.
The findings open up the possibility that individuals who have difficulty reading may only need additional training for specific parts of the brain — targeted therapies that could more directly address their individual weaknesses.
“Reading is a complex task. No single part of the brain can do all the work,” said Qinghua He, postdoctoral research associate at the USC Brain and Creativity Institute, based at the USC Dornsife College of Letters, Arts and Sciences, and first author of a study on this research that was published in The Journal of Neuroscience on July 31.
The study looked at the correlation between reading ability and brain structure revealed by high-resolution magnetic resonance imaging (MRI) scans of more than 200 participants.
To control for external factors, the participants were about the same age and education level (college students); right-handed (lefties use the opposite hemisphere of their brain for reading); and all had about the same language skills (Chinese-speaking, with English as a second language for more than nine years). Their IQ, response speed and memory were also tested.
The study first collected data from seven different reading tests administered to a sample of more than 400 participants. These tests were intended to explore three aspects of reading ability: phonological decoding (the ability to sound out printed words); form-sound association (how well participants could make connections between a new word and a sound); and naming speed (how quickly participants were able to read out loud).
Each of these aspects, it turned out, was related to gray matter volume — the number of neurons — in different parts of the brain.
The MRI analysis showed that phonological decoding ability was strongly connected with gray matter volume in the left superior parietal lobe (around the top/rear of the brain); form-sound association was strongly connected with the hippocampus and cerebellum; and naming speed lit up a variety of locations around the brain.
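At its core, the analysis relates a behavioral score to regional gray matter volume across participants. A minimal sketch of that idea, with simulated data standing in for the study's actual voxel-based measurements (all numbers here are made up for illustration):

```python
import math
import random

def pearson_r(x, y):
    # Pearson correlation coefficient, written out longhand
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated cohort of 233 (the study's sample size): gray matter volume in one
# region partially predicts one behavioral score.
random.seed(0)
volume = [random.gauss(5.0, 0.5) for _ in range(233)]      # arbitrary units
score = [0.8 * v + random.gauss(0, 0.3) for v in volume]   # noisy linear link
r = pearson_r(volume, score)
print(round(r, 2))  # strong positive correlation, near 0.8
```

A real morphometry pipeline also corrects for head size, age, and multiple comparisons across brain regions; this sketch shows only the correlation step.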
“Our results strongly suggest that reading consists of unique capacities and is supported by distinct neural systems that are relatively independent of general cognitive abilities,” said Gui Xue, corresponding author of the study. Xue was formerly a research assistant professor at USC and now is a professor and director of the Center for Brain and Learning Sciences at Beijing Normal University.
“Although there is no doubt that reading has to build up existing neural systems due to the short history of written language in human evolution, years of reading experiences might have finely tuned the system to accommodate the specific requirement of a given written system,” Xue said.
He and Xue collaborated with Chunhui Chen and Qi Dong of Beijing Normal University; Chuansheng Chen of the University of California, Irvine; and Zhong-Lin Lu of Ohio State University.
One notable strength of this study was its unusually large sample size, according to the researchers. Typically, MRI studies test a relatively small sample of individuals — perhaps around 20 to 30 — because of the high cost of using the MRI machine. Testing a single individual can cost about $500, depending on the nature of the research.
The team had the good fortune of receiving access to Beijing Normal University’s new MRI center — the BNU Imaging Center for Brain Research — just before it opened to the public. With support from several grants, the researchers were able to conduct MRI tests on 233 individuals.
Next, the group will explore how to combine additional measures, such as white matter structure and resting-state and task-based functional MRI, with more powerful machine-learning techniques to predict individuals' reading abilities more accurately.
“Research along this line will enable the early diagnosis of reading difficulties and the development of more targeted therapies,” Xue said.


Study reveals potential role of ‘love hormone’ oxytocin in brain function
Findings of NYU Langone researchers may have relevance in autism-spectrum disorder
In a loud, crowded restaurant, having the ability to focus on the people and conversation at your own table is critical. Nerve cells in the brain face similar challenges in separating wanted messages from background chatter. A key element in this process appears to be oxytocin, typically known as the “love hormone” for its role in promoting social and parental bonding.
In a study appearing online August 4 in Nature, NYU Langone Medical Center researchers decipher how oxytocin, acting as a neurohormone in the brain, not only reduces background noise, but more importantly, increases the strength of desired signals. These findings may be relevant to autism, which affects one in 88 children in the United States.
“Oxytocin has a remarkable effect on the passage of information through the brain,” says Richard W. Tsien, DPhil, the Druckenmiller Professor of Neuroscience and director of the Neuroscience Institute at NYU Langone Medical Center. “It not only quiets background activity, but also increases the accuracy of stimulated impulse firing. Our experiments show how the activity of brain circuits can be sharpened, and hint at how this re-tuning of brain circuits might go awry in conditions like autism.”
Children and adults with autism-spectrum disorder (ASD) struggle with recognizing the emotions of others and are easily distracted by extraneous features of their environment. Previous studies have shown that children with autism have lower levels of oxytocin, and mutations in the oxytocin receptor gene predispose people to autism. Recent brain recordings from people with ASD show impairments in the transmission of even simple sensory signals.
The current study built upon 30-year-old results from researchers in Geneva, who showed that oxytocin acted in the hippocampus, a region of the brain involved in memory and cognition. The hormone stimulated nerve cells – called inhibitory interneurons – to release a chemical called GABA. This substance dampens the activity of the adjoining excitatory nerve cells, known as pyramidal cells.
“From the previous findings, we predicted that oxytocin would dampen brain circuits in all ways, quieting both background noise and wanted signals,” Dr. Tsien explains. “Instead, we found that oxytocin increased the reliability of stimulated impulses – good for brain function, but quite unexpected.”
To resolve this paradox, Dr. Tsien and his Stanford graduate student Scott Owen collaborated with Gord Fishell, PhD, the Julius Raynes Professor of Neuroscience and Physiology at NYU Langone Medical Center, and NYU graduate student Sebnem Tuncdemir. They identified the particular type of inhibitory interneurons responsible for the effects of oxytocin: “fast-spiking” inhibitory interneurons.
The mystery of how oxytocin drives these fast-spiking inhibitory cells to fire, yet also increases signaling to pyramidal neurons, was solved through studies with rodent models. The researchers found that continually activating the fast-spiking inhibitory neurons – good for lowering background noise – also causes their GABA-releasing synapses to fatigue. Accordingly, when a stimulus arrives, the tired synapses release less GABA and excitation of the pyramidal neuron is not dampened as much, so that excitation drives the pyramidal neuron’s firing more reliably.
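The fatigue mechanism described above can be caricatured in a few lines. In this hypothetical toy model (not the paper's biophysical one), each interneuron spike releases a fixed fraction of a depletable GABA resource with no recovery, so tonic firing before a stimulus leaves less inhibition available when the stimulus arrives:

```python
def gaba_per_spike(prior_spikes, u=0.5):
    """GABA released by the next spike after `prior_spikes` tonic spikes.

    Each spike releases fraction u of the remaining resource (no recovery,
    for simplicity), so the synapse fatigues geometrically.
    """
    remaining = (1 - u) ** prior_spikes
    return u * remaining

# Oxytocin absent: interneuron quiet before the stimulus -> full-strength inhibition
rested = gaba_per_spike(0)    # 0.5
# Oxytocin present: tonic firing has fatigued the synapse -> weak inhibition,
# so excitation drives the pyramidal cell more reliably
fatigued = gaba_per_spike(5)  # 0.5 * 0.5**5 ≈ 0.016
print(rested, fatigued)
```

The same tonic firing that fatigues the synapse is what suppresses background noise, which is how one mechanism yields both effects.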
“The stronger signal and muffled background noise arise from the same fundamental action of oxytocin and give two benefits for the price of one,” Dr. Fishell explains. “It’s too early to say how the lack of oxytocin signaling is involved in the wide diversity of autism-spectrum disorders, and the jury is still out about its possible therapeutic effects. But it is encouraging to find that a naturally occurring neurohormone can enhance brain circuits by dialing up wanted signals while quieting background noise.”


Study Reveals Genes That Drive Brain Cancer

About 15 percent of glioblastoma patients could receive personalized treatment with drugs currently used in other cancers


A team of researchers at the Herbert Irving Comprehensive Cancer Center at Columbia University Medical Center has identified 18 new genes responsible for driving glioblastoma multiforme, the most common—and most aggressive—form of brain cancer in adults. The study was published August 5, 2013, in Nature Genetics.

“Cancers rely on driver genes to remain cancers, and driver genes are the best targets for therapy,” said Antonio Iavarone, MD, professor of pathology and neurology at Columbia University Medical Center and a principal author of the study.

“Once you know the driver in a particular tumor and you hit it, the cancer collapses. We think our study has identified the vast majority of drivers in glioblastoma, and therefore a list of the most important targets for glioblastoma drug development and the basis for personalized treatment of brain cancer.”

Personalized treatment could be a reality soon for about 15 percent of glioblastoma patients, said Anna Lasorella, MD, associate professor of pediatrics and of pathology & cell biology at CUMC.

“This study—together with our study from last year, Research May Lead to New Treatment for Type of Brain Cancer—shows that about 15 percent of glioblastomas are driven by genes that could be targeted with currently available FDA-approved drugs,” she said. “There is no reason why these patients couldn’t receive these drugs now in clinical trials.”

New Bioinformatics Technique Distinguishes Driver Genes from Other Mutations

In any single tumor, hundreds of genes may be mutated, but distinguishing the mutations that drive cancer from mutations that have no effect has been a longstanding problem for researchers.


An analysis of all gene mutations in nearly 140 brain tumors has uncovered most of the genes responsible for driving glioblastoma. The analysis found 18 new driver genes (labeled red), never before implicated in glioblastoma and correctly identified the 15 previously known driver genes (labeled blue). The graphs show mutated genes that are commonly found in varying numbers in glioblastoma (left), that frequently contain insertions (middle), and that frequently contain deletions (right). Genes represented by blue dots in the graphs were statistically most likely to be driver genes. Image: Raul Rabadan/Columbia University Medical Center.

The Columbia team used a combination of high-throughput DNA sequencing and a new method of statistical analysis to generate a short list of driver candidates. The massive study of nearly 140 brain tumors sequenced the DNA and RNA of every gene in the tumors to identify all the mutations in each tumor. A statistical algorithm designed by co-author Raul Rabadan, PhD, assistant professor of biomedical informatics and systems biology, was then used to identify the mutations most likely to be driver mutations. The algorithm differs from other driver-detection techniques in that it considers not only how often a gene is mutated across tumors, but also the manner in which it is mutated.

“If one copy of the gene in a tumor is mutated at a single point and the second copy is mutated in a different way, there’s a higher probability that the gene is a driver,” Dr. Iavarone said.
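A toy version of such a scoring rule, purely illustrative (Dr. Rabadan's actual algorithm is a statistical model, and all gene names and mutation calls below are invented), might reward both recurrence across tumors and tumors in which the two copies of a gene are hit at different positions:

```python
# Hypothetical per-tumor mutation calls: (tumor_id, gene, allele, position)
calls = [
    ("t1", "GENE_A", 1, 101), ("t1", "GENE_A", 2, 250),  # both copies, different sites
    ("t2", "GENE_A", 1, 130), ("t2", "GENE_A", 2, 300),
    ("t1", "GENE_B", 1, 77),                             # single hit, one tumor
]

def driver_score(gene):
    # Recurrence: how many tumors carry any mutation in this gene
    tumors = {t for t, g, a, p in calls if g == gene}
    # "Manner" bonus: both alleles mutated at different positions in one tumor
    bonus = 0
    for t in tumors:
        hits = {(a, p) for tt, g, a, p in calls if tt == t and g == gene}
        alleles = {a for a, p in hits}
        positions = {p for a, p in hits}
        if len(alleles) == 2 and len(positions) >= 2:
            bonus += 1
    return len(tumors) + bonus

print(driver_score("GENE_A"))  # 4: recurrent, biallelic hits in both tumors
print(driver_score("GENE_B"))  # 1: a single hit, likely a passenger
```

Ranking genes by such a score separates recurrently, biallelically mutated candidates from one-off passenger mutations, which is the intuition behind the quote above.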

The analysis identified 15 driver genes that had been previously identified in other studies—confirming the accuracy of the technique—and 18 new driver genes that had never been implicated in glioblastoma.

Significantly, some of the most important candidates among the 18 new genes, such as LZTR1 and delta catenin, were confirmed to be driver genes in laboratory studies involving cancer stem cells taken from human tumors and examined in culture, as well as after they had been implanted into mice.

A New Model for Personalized Cancer Treatment

Because patients’ tumors are powered by different driver genes, the researchers say that a complicated analysis will be needed for personalized glioblastoma treatment to become a reality. First, all the genes in a patient’s tumor must be sequenced and analyzed to identify its driver gene.

“In some tumors it’s obvious what the driver is; but in others, it’s harder to figure out,” said Dr. Iavarone.

Once the candidate driver is identified, it must be confirmed in laboratory tests with cancer stem cells isolated from the patient’s tumor.


About 15 percent of glioblastoma driver genes can be targeted with currently available drugs, suggesting that personalized treatment for some patients may be possible in the near future. Personalized therapy for glioblastoma patients could be achieved by isolating the most aggressive cells from the patient’s tumor and identifying the driver gene responsible for the tumor’s growth (different tumors will be driven by different genes). Drugs can then be tested on the isolated cells to find the most promising candidate. In this image, the gene mutation driving the malignant tumor has been replaced with the normal gene, transforming malignant cells back into normal brain cells. Image: Anna Lasorella.

“Cancer stem cells are the tumor’s most aggressive cells and the critical cellular targets for cancer therapies,” said Dr. Lasorella. “Drugs that prove successful in hitting driver genes in cancer stem cells and slowing cancer growth in cell culture and animal models would then be tried in the patient.”

Personalized Treatment Already Possible for Some Patients

For 85 percent of the known glioblastoma drivers, no drugs that target them have yet been approved.

But the Columbia team has found that, for the roughly 15 percent of patients whose tumors are driven by certain gene fusions, FDA-approved drugs that target those drivers are already available.

The study found that half of these patients have tumors driven by a fusion between the gene EGFR and one of several other genes. The fusion makes EGFR—a growth factor already implicated in cancer—hyperactive; hyperactive EGFR drives tumor growth in these glioblastomas.

“When this gene fusion is present, tumors become addicted to it—they can’t live without it,” Dr. Iavarone said. “We think patients with this fusion might benefit from EGFR inhibitors that are already on the market. In our study, when we gave the inhibitors to mice with these human glioblastomas, tumor growth was strongly inhibited.”

Other patients have tumors that harbor a fusion of the genes FGFR (fibroblast growth factor receptor) and TACC (transforming acidic coiled-coil), first reported by the Columbia team last year. These patients may benefit from FGFR kinase inhibitors. Preliminary trials of these drugs (for treatment of other forms of cancer) have shown that they have a good safety profile, which should accelerate testing in patients with glioblastoma.


Artificial Intelligence Is the Most Important Technology of the Future
Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that conduct pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today, and talk about potential future applications.
Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.
The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but which involves decisions simple enough that an AI can get somewhere with it. Big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively image this worm's nervous system in the 1970s and 80s, it took more than a decade of work to piece the data into a full wiring map. That is despite the worm's nervous system containing just about 7,000 connections between roughly 300 neurons. By comparison, the human brain contains on the order of 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
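The gulf in scale is easy to quantify with the figures quoted above:

```python
# Back-of-envelope comparison using the connection counts quoted in the text
worm_connections = 7_000
human_connections = 100 * 10**12  # 100 trillion
ratio = human_connections // worm_connections
print(f"{ratio:,}")  # 14,285,714,285 -- about 14 billion times larger
```

If a 7,000-connection map took a decade by hand, a brain roughly 14 billion times larger is simply out of reach without automation.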
There’s another closely related area that depends on AI to make progress; cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
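At its simplest, "mimicking the way a region works" means fitting a model to recorded input/output pairs. The sketch below is hypothetical (Berger's device uses a far richer nonlinear spiking model, and the data here are simulated): it fits a linear stimulus-to-response map by stochastic gradient descent.

```python
import random

random.seed(1)
true_w = [0.7, -0.2, 0.4]  # the unknown "biological" input-to-output mapping

# Simulated recordings: input firing rates and the region's output rate
data = []
for _ in range(200):
    x = [random.random() for _ in range(3)]
    y = sum(wi * xi for wi, xi in zip(true_w, x))
    data.append((x, y))

# Fit a stand-in model to the recordings by stochastic gradient descent
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(2000):
    x, y = random.choice(data)
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])  # recovers roughly [0.7, -0.2, 0.4]
```

The fitted model can then answer for the damaged tissue: given the inputs the region would have received, produce the outputs it would have sent.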
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.
Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypotheses and tested them. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
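The closed loop that makes a robot scientist interesting is: propose a hypothesis, run the experiment that tests it, update, repeat. A cartoon of that loop, with every name and the toy "experiment" invented for illustration (this is not how Adam actually worked):

```python
import random

random.seed(2)
SECRET_GENE = "orf19"  # ground truth the robot must discover

def run_experiment(gene):
    # A knockout "experiment": growth fails only if the essential gene is removed
    return "no_growth" if gene == SECRET_GENE else "growth"

# Hypothesis space: one of these genes is essential for growth
candidates = [f"orf{i}" for i in range(25)]

# Propose -> test -> update, until one hypothesis survives
while len(candidates) > 1:
    hypothesis = random.choice(candidates)
    if run_experiment(hypothesis) == "no_growth":
        candidates = [hypothesis]      # confirmed: keep only this hypothesis
    else:
        candidates.remove(hypothesis)  # refuted: discard it

print(candidates[0])  # orf19
```

Real systems pick the most informative experiment rather than a random one, and hypotheses are probabilistic rather than binary, but the propose-test-update skeleton is the same.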
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques are conducting pattern recognition on complex data sets from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today, and talk about potential future applications.

Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.

The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but which involves decisions simple enough that an AI can get somewhere with it. Big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the flatworm, C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the brain of this flatworm in the 1970s and 80s, it took more than a decade of work to piece this data into a full map of the flatworm’s brain. This is despite that brain containing just 7000 connections between 300 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it will be hopeless.

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
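Berger's actual processor is far more sophisticated, but the record-then-mimic idea can be sketched at toy scale: collect input/output pairs from the region to be replaced, then fit a model that reproduces the mapping. All of the "recordings" below are synthetic, and the one-parameter linear model stands in for a real neural network.

```python
import random

random.seed(0)

# Invented "recordings": input firing rate -> output firing rate of the
# region we want to replace. The hidden true mapping is y = 2x + 1, plus noise.
recordings = [(x, 2 * x + 1 + random.gauss(0, 0.05))
              for x in [i / 10 for i in range(20)]]

# Fit y = w*x + b by stochastic gradient descent: the same
# observe-then-imitate strategy a prosthesis uses, at toy scale.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in recordings:
        err = (w * x + b) - y
        w -= 0.01 * err * x
        b -= 0.01 * err

print(round(w, 1), round(b, 1))  # recovers parameters close to the true (2, 1)
```

Once the fitted model reproduces the region's input/output behavior closely enough, it can, in principle, stand in for the original tissue.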

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. For a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than any current technology allows. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.

Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
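The hypothesize-and-test loop such a robot runs can be sketched in a few lines. Everything here is invented for illustration: a hidden "biology" (one essential gene) that the program discovers by proposing knockouts and checking a simulated assay.

```python
# Closed-loop "robot scientist" sketch. The program does not know which
# gene is essential; it proposes hypotheses and lets the data decide.
ESSENTIAL_GENE = 3  # hidden ground truth, invented for this demo

def run_experiment(knockout_gene):
    """Simulated assay: the culture fails to grow only when the
    essential gene has been knocked out."""
    return "no growth" if knockout_gene == ESSENTIAL_GENE else "growth"

def investigate(genes):
    for gene in genes:  # hypothesis: "this gene is essential"
        if run_experiment(gene) == "no growth":
            return gene  # the data support the hypothesis
    return None

print(investigate(range(10)))  # the loop identifies gene 3
```

Adam's real experiments involved robotic liquid handling and yeast cultures rather than a one-line simulation, but the propose-test-conclude cycle is the same.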

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Filed under artificial intelligence AI brain mapping cognitive prostheses technology robotics science

192 notes

Could the Government Get a Search Warrant for Your Thoughts?
We don’t have a mind reading machine. But what if we one day did? The technique of functional MRI (fMRI), which measures changes in localized brain activity over time, can now be used to infer information regarding who we are thinking about, what we have seen, and the memories we are recalling. As the technology for inferring thought from brain activity continues to improve, the legal questions regarding its potential application in criminal and civil trials are gaining greater attention.
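A toy version of such decoding: compare a new brain-activity pattern against stored templates and pick the best-correlated one. Real studies use thousands of voxels and cross-validated classifiers; every number and label below is invented for illustration.

```python
def correlation(a, b):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Invented "average voxel patterns" recorded while a subject thought
# about each person.
templates = {
    "mother":   [1.2, 0.4, 0.9, 0.1],
    "coworker": [0.2, 1.1, 0.3, 0.8],
}

def decode(pattern):
    # Pick the template most correlated with the new pattern.
    return max(templates, key=lambda who: correlation(templates[who], pattern))

print(decode([1.0, 0.5, 0.8, 0.2]))  # best match: "mother"
```

The legal questions below arise precisely because this kind of template matching keeps getting more accurate.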
Last year, a Maryland man on trial for murdering his roommate tried to introduce results from an fMRI-based lie detection test to bolster his claim that the death was a suicide. The court ruled the test results inadmissible, noting that the “fMRI lie detection method of testing is not yet accepted in the scientific community.” In a decision last year to exclude fMRI lie detection test results submitted by a defendant in a different case, the Sixth Circuit was even more skeptical, writing that “there are concerns with not only whether fMRI lie detection of ‘real lies’ has been tested but whether it can be tested.”
So far, concerns regarding reliability have kept thought-inferring brain measurements out of U.S. (but not foreign) courtrooms. But is technology the only barrier? Or, if more mature, reliable brain scanning methods for detecting truthfulness and reading thoughts are developed in the future, could they be employed not only by defendants hoping to demonstrate innocence but also by prosecutors attempting to establish guilt? Could prosecutors armed with a search warrant compel an unwilling suspect to submit to brain scans aimed at exploring his or her innermost thoughts?
The answer surely ought to be no. But getting to that answer isn’t as straightforward as it might seem. The central constitutional question relates to the Fifth Amendment, which states that “no person … shall be compelled in any criminal case to be a witness against himself.” In interpreting the Fifth Amendment, courts have distinguished between testimonial evidence, which is protected from compelled self-incriminating disclosure, and physical evidence, which is not. A suspected bank robber cannot refuse to participate in a lineup or provide fingerprints. But he or she can decline to answer a detective who asks, “Did you rob the bank last week?”
So is the information in a brain scan physical or testimonial? In some respects, it’s a mix of both. As Dov Fox wrote in a 2009 law review article, “Brain imaging is difficult to classify because it promises distinctly testimonial-like information about the content of a person’s mind that is packaged in demonstrably physical-like form, either as blood flows in the case of fMRI, or as brainwaves in the case of EEG.” Fox goes on to conclude that the compelled use of brain imaging techniques would “deprive individuals of control over their thoughts” and be a violation of the Fifth Amendment.
But there is an alternative view as well, under which the Fifth Amendment protects only testimonial communication, leaving the unexpressed thoughts in a suspect’s head potentially open to government discovery, technology permitting. In a recent law review article titled “A Modest Defense of Mind Reading,” Kiel Brennan-Marquez writes that “at least some mind-reading devices almost certainly would not” elicit “communicative acts” by the suspect, “making their use permissible under the Fifth Amendment.” Brennan-Marquez acknowledges that compelled mind-reading would raise privacy concerns, but argues that those should be addressed by the Fourth Amendment, which prohibits unreasonable searches and seizures.
That doesn’t seem right. It would make little sense to provide constitutional protection to a suspected bank robber’s refusal to answer a detective’s question if the thoughts preceding the refusal—e.g., “since I’m guilty, I’d better not answer this question”—are left unprotected. Stated another way, the right to remain silent would be meaningless if not accompanied by protection for the thinking required to exercise it.
And if that weren’t enough, concluding that compelled brain scans don’t violate the Fifth Amendment would raise another problem as well: In a future that might include mature mind-reading technology, it would leave the Fourth Amendment as the last barrier protecting our thoughts from unwanted discovery. That, in turn, would raise the possibility that the government could get a search warrant for our thoughts. It’s a chilling prospect, and one that we should hope never comes to pass.

Filed under neuroimaging brain scans fMRI fMRI lie detection mind reading science

378 notes

Harvard creates brain-to-brain interface, allows humans to control other animals with thoughts alone
Researchers at Harvard University have created the first noninvasive brain-to-brain interface (BBI) between a human… and a rat. Simply by thinking the appropriate thought, the BBI allows the human to control the rat’s tail. This is one of the most important steps towards BBIs that allow for telepathic links between two or more humans — which is a good thing in the case of friends and family, but terrifying if you stop to think about the nefarious possibilities of a fascist dictatorship with mind control tech.
In recent years there have been huge advances in the field of brain-computer interfaces, where your thoughts are detected and “understood” by a sensor attached to a computer, but relatively little work has been done in the opposite direction (computer-brain interfaces). This is because it’s one thing for a computer to work out what a human is thinking (by asking or observing their actions), but another thing entirely to inject new thoughts into a human brain. To put it bluntly, we have almost no idea of how thoughts are encoded by neurons in the brain. For now, the best we can do is create a computer-brain interface that stimulates a region of the brain that’s known to create a certain reaction — such as the specific part of the motor cortex that’s in charge of your fingers. We don’t have the power to move your fingers in a specific way — that would require knowing the brain’s encoding scheme — but we can make them jerk around.
Which brings us neatly onto Harvard’s human-rat brain-to-brain interface. The human wears a run-of-the-mill EEG-based BCI, while the rat is equipped with a focused ultrasound (FUS) computer-brain interface (CBI). FUS is a relatively new technology that allows the researchers to excite a very specific region of neurons in the rat’s brain using an ultrasound signal. The main advantage of FUS is that, unlike most brain-stimulation techniques, such as DBS, it isn’t invasive. For now it looks like the FUS equipment is fairly bulky, but future versions might be small enough for use in everyday human CBIs.
With the EEG equipped, the BCI detects whenever the human looks at a specific pattern on a computer screen. The BCI then fires off a command to the rat’s CBI, which causes ultrasound to be beamed into the region of the rat’s motor cortex that deals with tail movement. As you can see in the video above, this causes the rat’s tail to move. The researchers report that the human BCI has an accuracy of 94%, and that it generally takes around 1.5 seconds for the entire process — from the human deciding to look at the screen, through to the movement of the rat’s tail. In theory, the human could trigger a rodent tail-wag by simply thinking about it, rather than having to look at a specific pattern — but presumably, for the sake of this experiment, the researchers wanted to focus on the FUS CBI, rather than the BCI.
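One common way EEG BCIs detect that a viewer is attending to a flickering on-screen pattern is to look for power at the flicker frequency in the recorded signal. The sketch below uses synthetic signals; the sampling rate, flicker frequency, and threshold are assumptions for illustration, not values from the study.

```python
import math

FS = 250       # EEG sampling rate in Hz (assumed)
FLICKER = 15   # flicker frequency of the on-screen pattern in Hz (assumed)

def band_power(signal, freq, fs=FS):
    """Power of `signal` at `freq`, via a single DFT bin."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

def looking_at_pattern(signal, threshold=5.0):
    # Fire the rat-side command when flicker-frequency power crosses threshold.
    return band_power(signal, FLICKER) > threshold

# Synthetic EEG: one second of a 15 Hz evoked response vs. unrelated background.
attending = [math.sin(2 * math.pi * FLICKER * i / FS) for i in range(FS)]
idle      = [0.05 * math.sin(2 * math.pi * 3 * i / FS) for i in range(FS)]

print(looking_at_pattern(attending), looking_at_pattern(idle))  # True False
```

In the actual experiment this detection step is what closes the loop: pattern detected, command sent, ultrasound delivered, tail moves.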
Moving forward, the researchers now need to work on transmitting more complex ideas, such as hunger or sexual arousal, from human to rat. At some point, they’ll also have to put the FUS CBI on a human, to see if thoughts can be transferred in the opposite direction. Finally, we’ll need to combine an EEG and FUS into a single unit, to allow for bidirectional sharing of thoughts and ideas. Human-to-human telepathy is the most obvious use, but what if the same bidirectional technology also allows us to really communicate with animals, such as dogs? There would be huge ethical concerns, of course, especially if a dictatorial tyrant uses the tech to control our thoughts — but the same can be said of almost every futuristic, transhumanist technology.

Filed under brain-to-brain interface transcranial focused ultrasound neural activity BCI neuroscience science

355 notes

Hidden Beauty: Exploring the Aesthetics of Medical Science
This collaborative project by a scientist and an artist asks the reader to consider the aesthetics of human disease, both within and beyond the context of our preconceived social systems. Disease is a powerful force of nature that acts without regard to race, religion or culture. These forces create visually stunning patterns that evoke one emotional response when viewed in isolation and quite another when viewed in the context of the disease that produced the image. We see beauty in the delicate lacework of fungal hyphae invading a blood vessel, the structure of the normal cerebellum, and the desperate drive of metastasizing cancer cells. However, the appreciation of the imagery produced by disease is bittersweet; we simultaneously experience the beauty of the natural world and the pain of those living with these disease processes. Ultimately, this series of images will leave the viewer with an appreciation of the visual beauty inherent in the medical sciences.
(Image: Alzheimer’s research, Phillip Wong PhD)

Filed under human disease medical imagery electron microscopy medicine art science

53 notes

Eye movement rhythm important to eye-tracking diagnoses

The quick eye movements, called saccades, that enable us to scan a visual scene appear to act as a metronome for pushing information about that scene into memory.

Scientists at Yerkes National Primate Research Center, Emory University, have observed that in monkeys exploring images with their eyes, the onset of a saccade resets the rhythms of electrical activity (theta oscillations) in the hippocampus, a region of the brain important for memory formation.

Tracking eye movements is already a promising basis for diagnosing brain disorders such as Alzheimer’s disease and schizophrenia. A deeper understanding of how the rhythm of eye movements orchestrates memories could bolster the accuracy and power of eye-tracking diagnoses.

The findings were published this week in Proceedings of the National Academy of Sciences, Early Edition.

Senior author Elizabeth Buffalo was a researcher at the Yerkes National Primate Research Center and an associate professor of neurology at Emory University School of Medicine, and is currently an associate professor of physiology and biophysics at the University of Washington in Seattle. The first author of the paper is postdoctoral fellow Michael Jutras, who is now an instructor at the University of Washington.

Theta oscillations are cycles of electrical activity in the brain occurring between 3 and 12 times per second. Scientists had previously seen theta oscillations in the hippocampus of rodents while the animals were actively exploring, sniffing or feeling something with their whiskers.
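In signal terms, a theta oscillation is simply a dominant rhythm in the 3-12 Hz band of a recording. A minimal sketch of how one might spot it (the synthetic "LFP", sampling rate, and component frequencies are all invented for illustration):

```python
import math

FS = 200  # samples per second (assumed)

def dominant_frequency(signal, fs=FS, max_hz=50):
    """Return the integer frequency (Hz) with the largest DFT power."""
    best_f, best_p = 0, 0.0
    for f in range(1, max_hz + 1):
        re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f

# Synthetic one-second "LFP": an 8 Hz theta oscillation plus a weaker
# 40 Hz component.
lfp = [math.sin(2 * math.pi * 8 * i / FS) + 0.3 * math.sin(2 * math.pi * 40 * i / FS)
       for i in range(FS)]

f = dominant_frequency(lfp)
print(f, 3 <= f <= 12)  # 8 True: the dominant rhythm falls in the theta band
```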

"Both animals and humans seem to take in sensory information at this theta rhythm," Buffalo says. "But one striking difference between rodents and primates is the way they gather information about the external world. Rodents are much more reliant on the senses of smell and touch."

She says that in primates, the actions most comparable to rodents’ sniffing and whisking are saccades. When our eyes scan text or explore a picture, their focus tends to jump from point to point several times per second.

Buffalo and Jutras examined electrical signals in the hippocampi of two rhesus monkeys as the animals looked at a variety of pictures, tracking the monkeys’ eye movements throughout. The researchers observed that after a saccade, the electrical signals in the hippocampus display a more coherent rhythm.

The rhythm reset a saccade imposes may be a way to ensure the hippocampus is receptive to new sensory information, the researchers propose.  
“The eye movements are acting like the conductor of the hippocampal orchestra,” Jutras says. “The phase reset might be a mechanism to ensure the ongoing theta rhythm is in sync with incoming visual information.”
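A phase reset like the one described here is typically quantified by asking whether the oscillation's phase, measured at a fixed delay after each saccade, is consistent across saccades. The sketch below uses synthetic phase values, not the study's actual analysis; the mean resultant vector length is 1 for perfectly locked phases and near 0 for random ones.

```python
import cmath
import math
import random

random.seed(1)

def phase_locking(phases):
    """Mean resultant vector length of a set of phase angles (radians)."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# Synthetic post-saccade phases: a reset rhythm clusters tightly around
# one phase; without a reset, phases are uniformly random.
reset_phases  = [0.1 + random.gauss(0, 0.2) for _ in range(50)]
random_phases = [random.uniform(-math.pi, math.pi) for _ in range(50)]

print(round(phase_locking(reset_phases), 2),   # close to 1: strongly locked
      round(phase_locking(random_phases), 2))  # close to 0: no locking
```

Stronger locking after saccades than at random times is the signature of the reset the researchers describe.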

Scientists have previously hypothesized that theta oscillations in the hippocampus set the stage for memory formation. The researchers tested this idea by presenting the monkeys each image twice during a viewing session. Because all primates have an innate preference for novelty, monkeys tend to spend a longer time looking at new images and less time looking at repeated ones. The researchers inferred that the monkeys had a stronger memory of a given picture if, upon second viewing, they moved past it quickly. The theta rhythm reset was more consistent during the viewing of images that the monkeys remembered well.

"Based on this finding, we concluded that this resetting of the theta rhythm is an important part of the memory process," Jutras says.

"This study has given us a better understanding of the function of the hippocampal theta rhythm, which has been well characterized in rodents but isn’t well understood in primates," he says. "A future goal is to investigate the relationship between hippocampal theta and eye movements during memory formation and navigation in humans. This could be possible with epilepsy patients who undergo monitoring of hippocampal activity as part of their treatment."

(Source: news.emory.edu)

Filed under memory formation theta oscillations hippocampus eye movements saccades neuroscience science

111 notes

Monogamy’s Boost to Human Evolution
“Monogamy is a problem,” said Dieter Lukas of the University of Cambridge in a telephone news conference this week. As Dr. Lukas explained to reporters, he and other biologists consider monogamy an evolutionary puzzle.
In 9 percent of all mammal species, males and females will share a common territory for more than one breeding season, and in some cases bond for life. This is a problem — a scientific one — because male mammals could theoretically have more offspring by giving up on monogamy and mating with lots of females.
In a new study, Dr. Lukas and his colleague Tim Clutton-Brock suggest that monogamy evolves when females spread out, making it hard for a male to travel around and fend off competing males.
On the same day, Kit Opie of University College London and his colleagues published a similar study on primates, which are especially monogamous — males and females bond in over a quarter of primate species. The London scientists came to a different conclusion: that the threat of infanticide leads males to stick with only one female, protecting her from other males.
Even with the scientific problem far from resolved, research like this inevitably turns us into narcissists. It’s all well and good to understand why the gray-handed night monkey became monogamous. But we want to know: What does this say about men and women?
As with all things concerning the human heart, it’s complicated.
“The human mating system is extremely flexible,” Bernard Chapais of the University of Montreal wrote in a recent review in Evolutionary Anthropology. Only 17 percent of human cultures are strictly monogamous. The vast majority of human societies embrace a mix of marriage types, with some people practicing monogamy and others polygamy. (Most people in these cultures are in monogamous marriages, though.)
There are even some societies where a woman may marry several men. And some men and women have secret relationships that last for years while they’re married to other people, a kind of dual monogamy. Same-sex marriages acknowledge commitments that in many cases existed long before they won legal recognition.
Each species faces its own special challenges — the climate where it lives, or the food it depends on, or the predators that stalk it — and certain conditions may favor monogamy despite its drawbacks. One source of clues to the origin of human mating lies in our closest relatives, chimpanzees and bonobos. They live in large groups where the females mate with lots of males when they’re ovulating. Male chimpanzees will fight with each other for the chance to mate, and they’ve evolved to produce extra sperm to increase their chances that they get to father a female’s young.
Our own ancestors split off from the ancestors of chimpanzees about seven million years ago. Fossils may offer us some clues to how our mating systems evolved after that parting of ways. The hormone levels that course through monogamous primates are different from those of other species, possibly because the males aren’t in constant battle for females.
That difference in hormones influences how primates grow in some remarkable ways. For example, the ratio of their finger lengths is different.
In 2011, Emma Nelson of the University of Liverpool and her colleagues looked at the finger bones of ancient hominid fossils. From what they found, they concluded that hominids 4.4 million years ago mated with many females. By about 3.5 million years ago, however, the finger-length ratio indicated that hominids had shifted more toward monogamy.
Our lineage never evolved to be strictly monogamous. But even in polygamous relationships, individual men and women formed long-term bonds — a far cry from the arrangement in chimpanzees.
While the two new studies published last week disagree about the force driving the evolution of monogamy, they do agree on something important. “Once monogamy has evolved, then male care is far more likely,” Dr. Opie said.
Once a monogamous primate father starts to stick around, he has the opportunity to raise the odds that his offspring will survive. He can carry them, groom their fur and protect them from attacks.
In our own lineage, however, fathers went further. They had evolved the ability to hunt and scavenge meat, and they were supplying some of that food to their children. “They may have gone beyond what is normal for monogamous primates,” said Dr. Opie.
The extra supply of protein and calories that human children started to receive is widely considered a watershed moment in our evolution. It could explain why we have brains far bigger than other mammals.
Brains are hungry organs, demanding 20 times more calories than a similar piece of muscle. Only with a steady supply of energy-rich meat, Dr. Opie suggests, were we able to evolve big brains — and all the mental capacities that come with them.
Because of monogamy, Dr. Opie said, “This could be how humans were able to push through a ceiling in terms of brain size.”

Monogamy’s Boost to Human Evolution

“Monogamy is a problem,” said Dieter Lukas of the University of Cambridge in a telephone news conference this week. As Dr. Lukas explained to reporters, he and other biologists consider monogamy an evolutionary puzzle.

In 9 percent of all mammal species, males and females will share a common territory for more than one breeding season, and in some cases bond for life. This is a problem — a scientific one — because male mammals could theoretically have more offspring by giving up on monogamy and mating with lots of females.

In a new study, Dr. Lukas and his colleague Tim Clutton-Brock suggest that monogamy evolves when females spread out, making it hard for a male to travel around and fend off competing males.

On the same day, Kit Opie of University College London and his colleagues published a similar study on primates, which are especially monogamous — males and females bond in over a quarter of primate species. The London scientists came to a different conclusion: that the threat of infanticide leads males to stick with only one female, protecting her from other males.
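The trade-off behind both hypotheses can be made concrete with a toy expected-offspring calculation. A "roaming" male mates with several females but cannot protect any one infant; a "guarding" male stays with one female and raises that infant's odds of survival. All numbers below are illustrative assumptions, not figures from either study:

```python
# Toy model of the monogamy trade-off (illustrative numbers only).
# Roaming: more mates, but infants are exposed (e.g. to infanticide).
# Guarding: one mate, but the father's presence boosts infant survival.

def expected_offspring(mates, infant_survival):
    """Expected surviving offspring per breeding season."""
    return mates * infant_survival

roaming = expected_offspring(mates=3, infant_survival=0.25)   # unprotected
guarding = expected_offspring(mates=1, infant_survival=0.80)  # protected

print(f"roaming:  {roaming:.2f}")   # 0.75
print(f"guarding: {guarding:.2f}")  # 0.80
```

With these made-up numbers, guarding narrowly wins: monogamy pays off whenever the survival boost from sticking around outweighs the forgone matings, which is the logic both papers are weighing.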

Even with the scientific problem far from resolved, research like this inevitably turns us into narcissists. It’s all well and good to understand why the gray-handed night monkey became monogamous. But we want to know: What does this say about men and women?

As with all things concerning the human heart, it’s complicated.

“The human mating system is extremely flexible,” Bernard Chapais of the University of Montreal wrote in a recent review in Evolutionary Anthropology. Only 17 percent of human cultures are strictly monogamous. The vast majority of human societies embrace a mix of marriage types, with some people practicing monogamy and others polygamy. (Most people in these cultures are in monogamous marriages, though.)

There are even some societies where a woman may marry several men. And some men and women have secret relationships that last for years while they’re married to other people, a kind of dual monogamy. Same-sex marriages acknowledge commitments that in many cases existed long before they won legal recognition.

Each species faces its own special challenges — the climate where it lives, or the food it depends on, or the predators that stalk it — and certain conditions may favor monogamy despite its drawbacks. One source of clues to the origin of human mating lies in our closest relatives, chimpanzees and bonobos. They live in large groups where the females mate with lots of males when they’re ovulating. Male chimpanzees will fight with each other for the chance to mate, and they’ve evolved to produce extra sperm to increase their chances of fathering a female’s young.

Our own ancestors split off from the ancestors of chimpanzees about seven million years ago. Fossils may offer us some clues to how our mating systems evolved after that parting of ways. The hormone levels that course through monogamous primates are different from those of other species, possibly because the males aren’t in constant battle for females.

That difference in hormones influences how primates grow in some remarkable ways. For example, the ratio of their finger lengths is different.

In 2011, Emma Nelson of the University of Liverpool and her colleagues looked at the finger bones of ancient hominid fossils. From what they found, they concluded that hominids 4.4 million years ago mated with many females. By about 3.5 million years ago, however, the finger-length ratio indicated that hominids had shifted more toward monogamy.

Our lineage never evolved to be strictly monogamous. But even in polygamous relationships, individual men and women formed long-term bonds — a far cry from the arrangement in chimpanzees.

While the two new studies published last week disagree about the force driving the evolution of monogamy, they do agree on something important. “Once monogamy has evolved, then male care is far more likely,” Dr. Opie said.

Once a monogamous primate father starts to stick around, he has the opportunity to raise the odds that his offspring will survive. He can carry them, groom their fur and protect them from attacks.

In our own lineage, however, fathers went further. They had evolved the ability to hunt and scavenge meat, and they were supplying some of that food to their children. “They may have gone beyond what is normal for monogamous primates,” said Dr. Opie.

The extra supply of protein and calories that human children started to receive is widely considered a watershed moment in our evolution. It could explain why we have brains far bigger than other mammals.

Brains are hungry organs, demanding 20 times more calories than a similar piece of muscle. Only with a steady supply of energy-rich meat, Dr. Opie suggests, were we able to evolve big brains — and all the mental capacities that come with them.
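That "20 times" figure turns into a rough energy budget with some back-of-envelope arithmetic. The numbers below (resting muscle burning about 13 kcal per kilogram per day, an adult brain of about 1.4 kg, a resting expenditure of about 1,600 kcal per day) are commonly cited approximations, not data from the studies above:

```python
# Back-of-envelope brain energy budget (all figures are rough,
# commonly cited approximations used only for illustration).

MUSCLE_KCAL_PER_KG_DAY = 13                            # resting skeletal muscle
BRAIN_KCAL_PER_KG_DAY = 20 * MUSCLE_KCAL_PER_KG_DAY    # the "20 times" figure
BRAIN_MASS_KG = 1.4                                    # typical adult brain
RESTING_KCAL_PER_DAY = 1600                            # typical resting total

brain_kcal = BRAIN_KCAL_PER_KG_DAY * BRAIN_MASS_KG
share = brain_kcal / RESTING_KCAL_PER_DAY

print(f"brain uses ~{brain_kcal:.0f} kcal/day "
      f"(~{share:.0%} of resting expenditure)")
```

Even these crude numbers put the brain at well over a fifth of the body's resting energy budget, which is why a reliable extra food supply matters so much in this argument.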

Because of monogamy, Dr. Opie said, “This could be how humans were able to push through a ceiling in terms of brain size.”

Filed under mammals monogamy mating evolution psychology neuroscience science

341 notes

Sex, Smell And Science – The Genetics Of Olfaction
No two people smell exactly alike. That is, noses sense odors in individual ways. What one nose finds offensive, another may find pleasant, while another might not smell anything at all. Scientists have long known the way things smell to us is determined by our genes.
Now, two studies appearing in the journal Current Biology (1, 2) have identified “the genetic differences that underpin the differences in smell sensitivity and perception in different individuals.” And while some of these differences merely help determine our culinary preferences, others appear to play a subconscious role in how we choose our sexual partners.
For the first study, 200 people were tested to determine their sensitivity to 10 different chemical compounds commonly found in foods. The researchers found four of the ten odors had a genetic association. These were malt, apple, blue cheese, and a floral scent associated with violets.
The research team, led by Sara Jaeger, Jeremy McRae, and Richard Newcomb of Plant and Food Research in New Zealand, used a genome-wide association study. Their first task was to identify which test subjects could smell each chemical compound and which could not. They then searched the subjects’ genomes for areas of DNA that differed between these people.
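The core of such a genome-wide association study can be sketched as a simple statistical test repeated at every genetic variant: count how often an allele appears among people who can smell the compound versus those who cannot, and flag variants where the difference is too large to be chance. Here is a minimal illustration using Fisher's exact test on one 2x2 table of hypothetical counts (a real GWAS pipeline adds quality control and multiple-testing correction across millions of variants):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]]
    with fixed margins, using only the standard library."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability of top-left cell value x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Two-tailed p: sum probabilities of all tables at least as extreme.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts at one variant, split by smelling ability:
#                 allele carrier   non-carrier
# can smell             30              5
# cannot smell           8             27
p = fisher_exact_2x2(30, 5, 8, 27)
print(f"p = {p:.2e}")  # a tiny p-value flags the variant as associated
```

The counts and the specific test are assumptions for illustration; the point is only the shape of the analysis the New Zealand team describes — split subjects by phenotype, then scan for genetic differences between the groups.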
“We were surprised how many odors had genes associated with them. If this extends to other odors, then we might expect everyone to have their own unique set of smells that they are sensitive to,” explained McRae.
“These smells are found in foods and drinks that people encounter every day, such as tomatoes and apples. This might mean that when people sit down to eat a meal, they each experience it in their own personalized way.”
They further found there is no regional differentiation. A person in one part of the world is just as likely to be able to smell a particular compound as a person in another part of the world. In addition, sensitivity to one compound does not predict the ability to smell another compound.
The genes that determine our ability to perceive certain odors all lie in or near the genes that encode olfactory receptors. These receptors occur on the surface of sensory nerve cells in the upper part of the nose. A particular smell is perceived when these receptor molecules bind with a chemical compound wafting through the nose, causing nerve cells to send an impulse to the brain and producing our sensation of smell.
For the violet smell, caused by a naturally occurring chemical compound known as β-ionone, the researchers were able to pinpoint the exact mutation in gene OR5A1 that determines whether the smell is perceived as floral, sour or pungent, and whether it is found to be pleasant.
These findings might have future marketing value. According to Richard Newcomb, “Knowing the compounds that people can sense in foods, as well as other products, will have an influence on the development of future products. Companies may wish to design foods that better target people based on their sensitivity, essentially developing foods and other products personalized for their taste and smell.” 
SEXY OR STINKY?
A separate study was conducted by Leslie Vosshall of the Rockefeller University Hospital. Humans have about 1,000 genes that influence smell, and around 400 of these are responsible for sensing a particular odor molecule.
Testing 391 human subjects, Vosshall studied olfactory responses to two closely related steroids, androstenone and androstadienone, which are found in male sweat. People generally have strong reactions to these steroids, finding them either sweet and floral or rank and noxious. The gene OR7D4 determines the intensity of these odors as well as whether they are perceived as pleasant or repulsive.
According to Vosshall’s report: “People who found the smell repulsive were more likely to have two functional copies of OR7D4; those who perceived it as a more mild smell tended to have one or two impaired copies of the gene.”
This study is part of the larger goal of understanding how genetic and neuronal factors influence behaviors.
A 2002 study published in Nature Genetics provided more insight into the effect of male pheromones on women. This study looked at the link between women’s preferences for the odors given off by men and a group of genes called the Major Histocompatibility Complex (MHC), which contribute to a person’s immune response.
In this experiment, a group of 49 women were asked to smell 10 boxes. Some of the boxes held t-shirts worn by men with different MHC genes, and others contained familiar household odors such as bleach or cloves.
The t-shirts were worn by men who slept in them for two nights and avoided contact with other scents during that time, even to the point of avoiding other people. According to the report, “the women were then asked to rate each scent based on their familiarity, intensity, pleasantness and spiciness, as well as choose the one odor which they would choose if they had to smell it all the time.”
What the researchers found was the women did not choose the scents of men whose genes were similar to their own, nor did they choose those whose genes were too dissimilar. The women showed no preference for odors from men who had the same genes as their mothers, but did show a preference for odors from men who shared genes they inherited from their fathers.
Scientists believe there are two reasons for preferring a mate whose MHC genes are different from one’s own. One is that it would tend to create offspring with more genetic diversity and thus more robust immune systems. The other is that it helps to avoid inbreeding.
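The genetic-diversity argument can be illustrated with a toy simulation: each parent carries two allele labels and passes one at random, and offspring of parents with no shared alleles are always heterozygous (two different alleles), while offspring of parents sharing both alleles are heterozygous only about half the time. The allele labels are made up for the sketch:

```python
import random

random.seed(0)

def offspring_heterozygosity(mother, father, trials=10_000):
    """Fraction of simulated offspring carrying two different alleles.
    Each parent is a pair of allele labels; each passes one at random."""
    het = 0
    for _ in range(trials):
        child = (random.choice(mother), random.choice(father))
        het += child[0] != child[1]
    return het / trials

similar = offspring_heterozygosity(("A", "B"), ("A", "B"))     # shared alleles
dissimilar = offspring_heterozygosity(("A", "B"), ("C", "D"))  # no overlap

print(f"similar parents:    {similar:.2f}")     # ~0.50
print(f"dissimilar parents: {dissimilar:.2f}")  # 1.00
```

This is the sense in which MHC-dissimilar pairings produce more genetically diverse offspring: heterozygous immune genes can recognize a wider range of pathogens.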
Of course, when people choose their mates, there are a number of social factors that come into play as well. However, studies have shown married people tend to have different types of genes than their spouses.
So, the next time you like the way a person smells, keep in mind it may mean you have complementary genes.

Filed under olfactory system olfaction odor smell sensitivity perception genetics neuroscience science