News and brain candy for the philosophy community
In a fascinating article published in Significance, author Robert Bain delves into the arguments for and against viewing human judgments and decisions in terms of Bayesian inference. We are grateful to Significance and the editor, Brian Tarran, for permission to publish the excerpt below.
The human brain is made up of 90 billion neurons connected by more than 100 trillion synapses. It has been described as the most complicated thing in the world, but brain scientists say that is wrong: they think it is the most complicated thing in the known universe. Little wonder, then, that scientists have such trouble working out how our brain actually works. Not in a mechanical sense: we know, roughly speaking, how different areas of the brain control different aspects of our bodies and our emotions, and how these distinct regions interact. The questions that are more difficult to answer relate to the complex decision-making processes each of us experiences: how do we form beliefs, assess evidence, make judgments, and decide on a course of action?
Figuring that out would be a great achievement, in and of itself. But this has practical applications, too, not least for those artificial intelligence (AI) researchers who are looking to transpose the subtlety and adaptability of human thought from biological “wetware” to computing hardware.
In looking to replicate aspects of human cognition, AI researchers have made use of algorithms that learn from data through a process known as Bayesian inference. Bayesian inference is a method of updating beliefs in the light of new evidence, with the strength of those beliefs captured using probabilities. As such, it differs from frequentist inference, which focuses on how frequently we might expect to observe a given set of events under specific conditions.
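To make the idea concrete, here is a minimal sketch of a Bayesian update in Python. The hypotheses and numbers are invented for illustration (they are not from the article): a prior belief is multiplied by the likelihood of the observed evidence and then renormalised.

```python
# Bayesian updating: posterior is proportional to prior × likelihood.
# Hypothetical example: two hypotheses about a coin (fair vs. biased 75% heads),
# updated after observing a single head.
def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and likelihoods of the evidence."""
    unnormalised = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

priors = [0.5, 0.5]          # P(fair), P(biased)
likelihoods = [0.5, 0.75]    # P(heads | fair), P(heads | biased)
posterior = bayes_update(priors, likelihoods)
# posterior ≈ [0.4, 0.6] — the evidence shifts belief towards the biased coin
```

A frequentist analysis, by contrast, would not assign probabilities to the hypotheses themselves; it would ask how often a head would be observed under each fixed hypothesis.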
In the field of AI, Bayesian inference has been found to be effective at helping machines approximate some human abilities, such as image recognition. But are there grounds for believing that this is how human thought processes work more generally? Do our beliefs, judgments, and decisions follow the rules of Bayesian inference?
For the clearest evidence of Bayesian reasoning in the brain, we must look past the high-level cognitive processes that govern how we think and assess evidence, and consider the unconscious processes that control perception and movement.
Professor Daniel Wolpert of the University of Cambridge’s neuroscience research centre believes we have our Bayesian brains to thank for allowing us to move our bodies gracefully and efficiently – by making reliable, quick-fire predictions about the result of every movement we make. Wolpert, who has conducted a number of studies on how people control their movements, believes that as we go through life our brains gather statistics for different movement tasks, and combine these in a Bayesian fashion with sensory data, together with estimates of the reliability of that data. “We really are Bayesian inference machines,” he says.
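Wolpert's description — learned statistics combined with sensory data, each weighted by its reliability — corresponds to the textbook Gaussian cue-combination rule. The sketch below is a generic illustration with invented numbers, not code or data from his studies.

```python
# Optimal (Bayesian) combination of a prior and a noisy sensory cue, both
# modelled as Gaussians: each estimate is weighted by its precision
# (inverse variance), so the more reliable source counts for more.
def combine(mu_prior, var_prior, mu_sense, var_sense):
    w_prior = 1 / var_prior
    w_sense = 1 / var_sense
    mu_post = (w_prior * mu_prior + w_sense * mu_sense) / (w_prior + w_sense)
    var_post = 1 / (w_prior + w_sense)
    return mu_post, var_post

# A reliable sensory cue (small variance) pulls the estimate towards itself,
# and the combined estimate is less uncertain than either source alone.
mu, var = combine(mu_prior=0.0, var_prior=4.0, mu_sense=10.0, var_sense=1.0)
# mu = 8.0, var = 0.8
```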
Other researchers have found indications of Bayesianism in higher-level cognition. A 2006 study by Tom Griffiths of the University of California, Berkeley, and Josh Tenenbaum of MIT asked people to make predictions of how long people would live, how much money films would make, and how long politicians would last in office. The only data they were given to work with was the running total so far: current age, money made so far, and years served in office to date. People’s predictions, the researchers found, were very close to those derived from Bayesian calculations.
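A rough sketch of how such predictions can be computed follows. The prior below and the uniform-sampling assumption are illustrative simplifications, not the priors Griffiths and Tenenbaum actually fitted.

```python
import math

# Prediction task: given that a quantity (say, a lifespan) has reached t so
# far, predict its eventual total. If the observation point is sampled
# uniformly over the total duration, then P(total | t) is proportional to
# prior(total) / total for total >= t, and the Bayesian prediction is the
# posterior median.
def predict_total(t, totals, prior):
    post = [p / tot if tot >= t else 0.0 for tot, p in zip(totals, prior)]
    z = sum(post)
    post = [p / z for p in post]
    cum = 0.0
    for tot, p in zip(totals, post):
        cum += p
        if cum >= 0.5:
            return tot  # posterior median

# Hypothetical, roughly Gaussian prior over lifespans (ages 1..120).
totals = list(range(1, 121))
prior = [math.exp(-((tot - 75) ** 2) / (2 * 16 ** 2)) for tot in totals]
print(predict_total(18, totals, prior))  # close to the prior median
print(predict_total(90, totals, prior))  # a little beyond 90
```

For an 18-year-old, the data barely constrain the answer, so the prediction tracks the prior; for a 90-year-old, the prediction shifts only modestly beyond the current age — the qualitative pattern the study found in people's guesses.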
Before we accept the Bayesian brain hypothesis wholeheartedly, we should weigh a number of strong counter-arguments. For starters, it is fairly easy to come up with probability puzzles that should yield to Bayesian methods, but that regularly leave many people flummoxed. For instance, many people will tell you that if you toss a series of coins, getting all heads or all tails is less likely than getting, say, tails–tails–heads–tails–heads. It is not, and probability theory shows why: because the coin tosses are independent, every specific sequence of the same length is exactly as likely as any other.
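The point is easy to check directly; a minimal sketch:

```python
from fractions import Fraction

# Each fair, independent toss contributes a factor of 1/2, so every specific
# sequence of n tosses has probability (1/2)**n — "all heads" included.
def seq_probability(seq):
    return Fraction(1, 2) ** len(seq)

print(seq_probability("HHHHH"))  # 1/32
print(seq_probability("TTHTH"))  # 1/32 — exactly the same
```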
“There’s considerable evidence that most people are dismally non-Bayesian when performing reasoning,” says Robert Matthews of Aston University, Birmingham, and author of Chancing It, about the challenges of probabilistic reasoning. “For example, people typically ignore base-rate effects and overlook the need to know both false positive and false negative rates when assessing predictive or diagnostic tests.”
Diagnostic test accuracy explained
How is it that a diagnostic test that claims to be 99% accurate can still give a wrong diagnosis 50% of the time? In testing for a rare condition, we scan 10 000 people. Only 1% (100 people) have the condition; 9900 do not. Of the 100 people who do have the disease, a 99% accurate test will detect 99 of the true cases, leaving one false negative. But a 99% accurate test will also produce false positives at the rate of 1%. So, of the 9900 people who do not have the condition, 1% (99 people) will be told erroneously that they do have it. The total number of positive tests is therefore 198, of which only half are genuine. Thus the probability that a positive test result from this “99% accurate” test is a true positive is only 50%.
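The arithmetic in the box above can be reproduced in a few lines, using exactly the numbers from the example:

```python
# Positive predictive value of a "99% accurate" test for a condition with
# 1% prevalence, following the worked example in the text.
population = 10_000
prevalence = 0.01
sensitivity = 0.99           # the test detects 99% of true cases
false_positive_rate = 0.01   # 1% of healthy people wrongly test positive

sick = population * prevalence                               # 100 people
true_positives = sick * sensitivity                          # 99
false_positives = (population - sick) * false_positive_rate  # 99
ppv = true_positives / (true_positives + false_positives)
print(ppv)  # 0.5 — only half of all positive results are genuine
```

This is exactly the base-rate effect Matthews describes: the rarer the condition, the more the false positives from the healthy majority swamp the true positives.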
Life’s hard problems
All in all, that is quite a bit of evidence in favour of the argument that our brains are non-Bayesian. But do not forget that we are dealing with the most complicated thing in the known universe, and these fascinating quirks and imperfections do not give a complete picture of how we think.
Eric Mandelbaum, a philosopher and cognitive scientist at the City University of New York’s Baruch College, says this kind of irrationality “is most striking because it arises against a backdrop of our extreme competence. For every heuristics-and-biases study that shows that we, for instance, cannot update base rates correctly, one can find instances where people do update correctly.”
So while our well-documented flaws may shed light on the limits of our capacity for probabilistic analysis, we should not write off the brain’s statistical abilities just yet. Perhaps what our failings really reveal is that life is full of really hard problems, which our brains must try to solve in a state of uncertainty and constant change, with scant information and no time.
We hope you enjoyed this excerpt. Go here to read the full article – free to access through August!
About the Author
Robert Bain is a freelance journalist. He was previously editor of Lux magazine and deputy editor of Research magazine.
About the Magazine
Significance is published on behalf of the Royal Statistical Society and is a bimonthly magazine for anyone interested in statistics and the analysis and interpretation of data. Its aim is to communicate and demonstrate in an entertaining, thought-provoking and non-technical way the practical use of statistics in all walks of life and to show informatively and authoritatively how statistics benefit society.