Neuroscience

Articles and news from the latest research reports.

Posts tagged intelligence

206 notes



Origin of intelligence and mental illness linked to ancient genetic accident

Scientists have discovered for the first time how humans – and other mammals – have evolved to have intelligence. Researchers have identified the moment in history when the genes that enabled us to think and reason evolved.

This turning point, some 500 million years ago, gave rise to our ability to learn complex skills, analyse situations and think flexibly. Professor Seth Grant, of the University of Edinburgh, who led the research, said: “One of the greatest scientific problems is to explain how intelligence and complex behaviours arose during evolution.”

The research, which is detailed in two papers in Nature Neuroscience, also shows a direct link between the evolution of behaviour and the origins of brain diseases. Scientists believe that the same genes that improved our mental capacity are also responsible for a number of brain disorders.

"This ground breaking work has implications for how we understand the emergence of psychiatric disorders and will offer new avenues for the development of new treatments," said John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, one of the study funders.

The study shows that intelligence in humans developed as the result of an increase in the number of brain genes in our evolutionary ancestors. The researchers suggest that a simple invertebrate animal living in the sea 500 million years ago experienced a ‘genetic accident’, which resulted in extra copies of these genes being made.

This animal’s descendants benefited from these extra genes, leading to behaviourally sophisticated vertebrates – including humans. The research team studied the mental abilities of mice and humans, using comparative tasks that involved identifying objects on touch-screen computers.

Researchers then combined results of these behavioural tests with information from the genetic codes of various species to work out when different behaviours evolved. They found that higher mental functions in humans and mice were controlled by the same genes.

Filed under brain intelligence mental illness evolution genes neuroscience psychology science

160 notes



Will machines kill mankind?

Academics at Cambridge University are pondering the risk to humanity from super-intelligent technology which could “threaten our own existence.”

Huw Price, Bertrand Russell Professor of Philosophy at Cambridge, said: “In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”

Professor Price is planning to launch a research centre next year to look into the danger, teaming up with the Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of Skype.

He wants to bring more attention to a future in which mankind might be at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

The group won’t be the first to ponder such a future, which has featured in science fiction since the dawn of the computer age – perhaps most famously with HAL, the malevolent computer from Stanley Kubrick’s 2001: A Space Odyssey, and more recently in I, Robot, starring Will Smith.

Acknowledging that many people believe his concerns are far-fetched, Professor Price said: “It tends to be regarded as a flaky concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous.”

He said that advanced technology could be a threat when computers start to direct resources towards their own goals, at the expense of human concerns like environmental sustainability.

He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.

Filed under AI intelligence humanity robotics technology science

301 notes


What makes us intelligent?

… and do Google and Wikipedia make it better or worse? Studies show that other people and tools influence our brain power as much as our own minds do.


Research shows that people don’t tend to rely on their memories for things they can easily access. The world in front of our eyes, for example, can be changed quite radically without people noticing. Experiments have shown that buildings can disappear from pictures we’re looking at, or the people we’re talking to can be switched with someone else, and often we won’t notice – a phenomenon called “change blindness”. This isn’t an example of human stupidity – far from it – it is an example of mental efficiency. The mind relies on the world as a better record than memory, and usually that’s a good assumption.

As a result, philosophers have suggested that the mind is designed to spread itself out over the environment – so much so, they suggest, that thinking really happens in the environment as much as it happens in our brains. The philosopher Andy Clark called humans “natural born cyborgs”: beings whose minds naturally incorporate new tools, ideas and abilities. From Clark’s perspective, the route to a solution is not the issue – having the right tools really does mean you know the answer, just as much as already knowing it does.

Filed under brain intelligence neuroscience psychology technology science

285 notes


Research suggests that humans are slowly but surely losing intellectual and emotional abilities

Human intelligence and behavior require the optimal functioning of a large number of genes, which in turn requires enormous evolutionary pressure to maintain. A provocative hypothesis, put forward in a recent pair of Science and Society pieces in the Cell Press journal Trends in Genetics (1, 2), suggests that we are losing our intellectual and emotional capabilities because the intricate web of genes endowing us with our brain power is particularly susceptible to mutations, and because these mutations are not being selected against in our modern society.

"The development of our intellectual abilities and the optimization of thousands of intelligence genes probably occurred in relatively non-verbal, dispersed groups of peoples before our ancestors emerged from Africa," says the papers’ author, Dr. Gerald Crabtree, of Stanford University. In this environment, intelligence was critical for survival, and there was likely to be immense selective pressure acting on the genes required for intellectual development, leading to a peak in human intelligence.

Filed under brain intelligence evolution genetics mutations neuroscience science

73 notes



Cockatoo ‘can make its own tools’

A cockatoo from a species not known to use tools in the wild has been observed spontaneously making and using tools for reaching food and other objects.

A Goffin’s cockatoo called ‘Figaro’, reared in captivity and living near Vienna, used its powerful beak to cut long splinters from the wooden beams of its aviary, or twigs from a branch, and then used them to reach and rake in objects that were otherwise out of reach.

Researchers from the Universities of Oxford and Vienna filmed Figaro making and using these tools. How the bird worked out how to make and use them remains unclear, but the finding shows how much we still don’t understand about the evolution of innovative behaviour and intelligence.

A report of the research is published this week in Current Biology.

Filed under animals cockatoo tool making using tools intelligence neuroscience psychology science

138 notes



Noam Chomsky on Where Artificial Intelligence Went Wrong

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would show ultimate understanding, and have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, in which animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, a book in which Skinner attempted to explain linguistic ability using behaviorist principles.


Filed under Noam Chomsky AI intelligence cognition behaviorism statistical models neuroscience psychology science

200 notes



The Consequences of Machine Intelligence

If machines are capable of doing almost any work humans can do, what will humans do?

The question of what happens when machines get to be as intelligent as, and even more intelligent than, people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy’s main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly “Singularitarians” who think about the day when machines will surpass humans in intelligence. The term “singularity”, as a description of a phenomenon of technological acceleration leading to a “machine-intelligence explosion”, was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of the singularity. Kurzweil has also founded Singularity University and the annual Singularity Summit.


Filed under AI machine learning robots robotics technology singularity intelligence science

65 notes


The Gambler’s Fallacy Is Associated with Weak Affective Decision Making but Strong Cognitive Ability

Humans demonstrate an inherent bias towards making maladaptive decisions, as shown by a phenomenon known as the gambler’s fallacy (GF). The GF has traditionally been considered a heuristic bias supported by the fast and automatic intuition system, which can be overcome by the reasoning system. The present study examined an intriguing hypothesis, based on emerging evidence from neuroscience research, that the GF might be attributable to a weak affective but strong cognitive decision-making mechanism. With data from a large sample of college students, we found that individuals’ use of the GF strategy was positively correlated with their general intelligence and executive function, such as working memory and conflict resolution, but negatively correlated with their affective decision-making capacities, as measured by the Iowa Gambling Task. Our results provide a novel insight into the mechanisms underlying the GF, highlighting the significant role of affective mechanisms in adaptive decision-making.
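
The fallacy itself takes only a few lines to demonstrate. The simulation below is our own illustration, not code from the study: it flips a fair coin a million times and measures how often heads follows a streak of five heads. The answer comes out at roughly 0.5 – the streak carries no information about the next flip, which is exactly what the gambler’s fallacy denies.

```python
import random

def prob_heads_after_streak(n_flips: int = 1_000_000, streak: int = 5) -> float:
    """Empirical P(heads | the previous `streak` flips were all heads)."""
    random.seed(42)  # fixed seed so the run is reproducible
    flips = [random.random() < 0.5 for _ in range(n_flips)]
    follow_ups = [
        flips[i]
        for i in range(streak, n_flips)
        if all(flips[i - streak:i])  # the preceding `streak` flips were heads
    ]
    return sum(follow_ups) / len(follow_ups)

# Prints a value close to 0.5: a run of heads makes another head neither
# more nor less likely, contrary to what the gambler's fallacy predicts.
print(f"P(heads | 5 heads in a row) = {prob_heads_after_streak():.4f}")
```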

Filed under gambler’s fallacy decision-making cognition emotion Iowa gambling task executive function intelligence neuroscience psychology science

290,875 notes


A 12-year-old schoolgirl has been accepted into Mensa after discovering she is brainier than both Albert Einstein and Stephen Hawking.

Olivia Manning, from Liverpool, scored a whopping 162 on an IQ test – well above the average of 100.

Her score is not only two points higher than the scores estimated for the German physicist Albert Einstein and Professor Stephen Hawking, but also puts her in the top one per cent of the population.
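
For context, the “top one per cent” figure can be checked against the normal distribution that IQ tests are standardised to. The sketch below assumes the Cattell III B scale (mean 100, standard deviation 24) commonly used by British Mensa; the article does not say which test she sat, so the scale is an assumption on our part.

```python
from math import erf, sqrt

def iq_percentile(score: float, mean: float = 100.0, sd: float = 24.0) -> float:
    """Percentage of the population scoring below `score`, assuming IQ is
    normally distributed. SD 24 matches the Cattell III B scale; Wechsler-
    style tests use SD 15 instead."""
    z = (score - mean) / sd
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))  # normal CDF via erf

p = iq_percentile(162)
print(f"IQ 162 on an SD-24 scale: {p:.1f}th percentile "
      f"(top {100.0 - p:.1f}% of the population)")
# Prints roughly the 99.5th percentile, i.e. comfortably inside the top
# 1% -- consistent with the claim above. On an SD-15 scale the same score
# would be far more extreme, which is why the test's scale matters.
```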

(Other sources: Liverpool Daily Post)

Filed under brain intelligence IQ Einstein Hawking Olivia Manning neuroscience psychology science
