Are Our Brains Bayesian?

In a fascinating article published in Significance, author Robert Bain delves into the arguments for and against viewing human judgements and decisions in terms of Bayesian inference. We are grateful to Significance and the editor, Brian Tarran, for permission to publish the excerpt below. 


The human brain is made up of 90 billion neurons connected by more than 100 trillion synapses. It has been described as the most complicated thing in the world, but brain scientists say that is wrong: they think it is the most complicated thing in the known universe. Little wonder, then, that scientists have such trouble working out how our brain actually works. Not in a mechanical sense: we know, roughly speaking, how different areas of the brain control different aspects of our bodies and our emotions, and how these distinct regions interact. The questions that are more difficult to answer relate to the complex decision-making processes each of us experiences: how do we form beliefs, assess evidence, make judgments, and decide on a course of action?

Figuring that out would be a great achievement, in and of itself. But this has practical applications, too, not least for those artificial intelligence (AI) researchers who are looking to transpose the subtlety and adaptability of human thought from biological “wetware” to computing hardware.

In looking to replicate aspects of human cognition, AI researchers have made use of algorithms that learn from data through a process known as Bayesian inference. Bayesian inference is a method of updating beliefs in the light of new evidence, with the strength of those beliefs captured using probabilities. As such, it differs from frequentist inference, which focuses on how frequently we might expect to observe a given set of events under specific conditions.

In the field of AI, Bayesian inference has been found to be effective at helping machines approximate some human abilities, such as image recognition. But are there grounds for believing that this is how human thought processes work more generally? Do our beliefs, judgments, and decisions follow the rules of Bayesian inference?

Pros

For the clearest evidence of Bayesian reasoning in the brain, we must look past the high-level cognitive processes that govern how we think and assess evidence, and consider the unconscious processes that control perception and movement.

Professor Daniel Wolpert of the University of Cambridge’s neuroscience research centre believes we have our Bayesian brains to thank for allowing us to move our bodies gracefully and efficiently – by making reliable, quick-fire predictions about the result of every movement we make. Wolpert, who has conducted a number of studies on how people control their movements, believes that as we go through life our brains gather statistics for different movement tasks, and combine these in a Bayesian fashion with sensory data, together with estimates of the reliability of that data. “We really are Bayesian inference machines,” he says.

Other researchers have found indications of Bayesianism in higher-level cognition. A 2006 study by Tom Griffiths of the University of California, Berkeley, and Josh Tenenbaum of MIT asked people to make predictions of how long people would live, how much money films would make, and how long politicians would last in office. The only data they were given to work with was the running total so far: current age, money made so far, and years served in office to date. People’s predictions, the researchers found, were very close to those derived from Bayesian calculations.

Cons

Before we accept the Bayesian brain hypothesis wholeheartedly, there are a number of strong counter-arguments. For starters, it is fairly easy to come up with probability puzzles that should yield to Bayesian methods, but that regularly leave many people flummoxed. For instance, many people will tell you that if you toss a series of coins, getting all heads or all tails is less likely than getting, say, tails–tails–heads–tails–heads. It is not, and Bayes’ theorem shows why: since the coin tosses are independent, no one sequence is any more likely than another.
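The coin-toss point is easy to verify by brute force. The sketch below (illustrative only, not from the article) enumerates every possible sequence of five fair, independent tosses and shows that each specific sequence, streaky-looking or not, has the same probability of 1/32:

```python
from itertools import product

# Enumerate all 2**5 equally likely sequences of five fair coin tosses.
sequences = list(product("HT", repeat=5))
p = 1 / len(sequences)  # probability of any one specific sequence

all_heads = ("H",) * 5
mixed = ("T", "T", "H", "T", "H")

# Both sequences appear exactly once in the enumeration, so both have
# the same probability: 1/32.
assert all_heads in sequences and mixed in sequences
assert p == 1 / 32
```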

“There’s considerable evidence that most people are dismally non-Bayesian when performing reasoning,” says Robert Matthews of Aston University, Birmingham, and author of Chancing It, about the challenges of probabilistic reasoning. “For example, people typically ignore base-rate effects and overlook the need to know both false positive and false negative rates when assessing predictive or diagnostic tests.”

Diagnostic test accuracy explained

How is it that a diagnostic test that claims to be 99% accurate can still give a wrong diagnosis 50% of the time? In testing for a rare condition, we scan 10 000 people. Only 1% (100 people) have the condition; 9900 do not. Of the 100 people who do have the disease, a 99% accurate test will detect 99 of the true cases, leaving one false negative. But a 99% accurate test will also produce false positives at the rate of 1%. So, of the 9900 people who do not have the condition, 1% (99 people) will be told erroneously that they do have it. The total number of positive tests is therefore 198, of which only half are genuine. Thus the probability that a positive test result from this “99% accurate” test is a true positive is only 50%.
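The arithmetic in the box above can be reproduced directly. This sketch assumes, as the box does, a 1% prevalence and a test whose sensitivity and specificity are both 99%:

```python
population = 10_000
prevalence = 0.01
accuracy = 0.99  # sensitivity and specificity are both 99%

sick = round(population * prevalence)              # 100 people have the condition
healthy = population - sick                        # 9900 do not

true_positives = round(sick * accuracy)            # 99 genuine cases detected
false_positives = round(healthy * (1 - accuracy))  # 99 healthy people flagged

total_positives = true_positives + false_positives # 198 positive tests in all
p_sick_given_positive = true_positives / total_positives
print(p_sick_given_positive)  # 0.5 — only half the positives are genuine
```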

Life’s hard problems

All in all, that is quite a bit of evidence in favour of the argument that our brains are non-Bayesian. But do not forget that we are dealing with the most complicated thing in the known universe, and these fascinating quirks and imperfections do not give a complete picture of how we think.

Eric Mandelbaum, a philosopher and cognitive scientist at the City University of New York’s Baruch College, says this kind of irrationality “is most striking because it arises against a backdrop of our extreme competence. For every heuristics-and-biases study that shows that we, for instance, cannot update base rates correctly, one can find instances where people do update correctly.”

So while our well-documented flaws may shed light on the limits of our capacity for probabilistic analysis, we should not write off the brain’s statistical abilities just yet. Perhaps what our failings really reveal is that life is full of really hard problems, which our brains must try to solve in a state of uncertainty and constant change, with scant information and no time.


We hope you enjoyed this excerpt. Go here to read the full article – free to access through August!


About the Author

Robert Bain

Robert Bain is a freelance journalist. He was previously editor of Lux magazine and deputy editor of Research magazine.


About the Magazine

Significance is published on behalf of the Royal Statistical Society and is a bimonthly magazine for anyone interested in statistics and the analysis and interpretation of data. Its aim is to communicate and demonstrate in an entertaining, thought-provoking and non-technical way the practical use of statistics in all walks of life and to show informatively and authoritatively how statistics benefit society.

Philosophy of Science: How do gravitational waves confirm general relativity?

Image credit: The SXS (Simulating eXtreme Spacetimes) Project

Last month, this New York Times article announced that a team of scientists “had heard and recorded the sound of two black holes colliding a billion light-years away, a fleeting chirp that fulfilled the last prediction of Einstein’s general theory of relativity.” This, according to the physicists, is the “first direct evidence of gravitational waves, the ripples in the fabric of space-time that Einstein predicted a century ago…complet[ing] his vision of a universe in which space and time are interwoven and dynamic, able to stretch, shrink and jiggle.”

This had us thinking: what are the philosophical implications of this recent discovery?

For answers, we turned to Valia Allori, Philosophy Compass philosophy of science editor. Here’s what she had to say.


 

How do gravitational waves confirm general relativity?

By now everybody knows that gravitational waves have been detected, and everybody says that this is another confirmation of general relativity. But does everybody know what general relativity is, what gravitational waves are, why they are a consequence of the theory, and in what sense the theory is confirmed by their detection? I suspect that many who can answer ‘yes’ to the first three questions will be less sure about the last. So let us talk about that, even if somewhat informally.

Commonsensically, people believe that experimental data can support theories: if the result predicted by a theory obtains (a positive test), then the theory is confirmed by it. General relativity is a theory according to which space-time is not the passive container of matter that Newton imagined, but is instead modified by the presence of matter. Just as a lake’s surface ripples when a stone is dropped into it, sending a wave outwards, so space-time ripples around matter and a wave propagates outward: these are gravitational waves. The intuitive idea is that the detection of these waves supports, and thus confirms, the general theory of relativity. But what exactly does that mean?

One popular account of confirmation is the so-called hypothetico-deductive theory of confirmation, or HD-confirmation. The basic idea is that a theory is confirmed whenever a positive result is logically entailed by it. Testing a theory means comparing a logical implication of the theory to the world; if what one expects turns out to be the case, the theory is confirmed. This is exactly what happened with general relativity and gravitational waves: the existence of gravitational waves is a logical consequence of general relativity, physicists looked for them, and they finally found them. On this account, the detection confirms general relativity. Nonetheless, HD-confirmation has some problems. If some evidence E confirms a theory T, then it also confirms T&D, where D is an irrelevant statement, one that plays no role in deriving E. For instance, gravitational waves HD-confirm general relativity, but they also HD-confirm the conjunction of general relativity and the claim that there is life on Mars, which seems wrong. In addition, confirmation does not seem to be a matter of logical entailment, as HD-confirmation suggests. Rather, confirmation seems to be fundamentally about the credibility of a theory: to say that E confirms T is to say that the credibility of T increases because of E.

This is where another popular theory of confirmation, Bayesian confirmation theory or BCT, comes in. The idea is that confirmation is fundamentally about the degrees of belief that people have in a theory, and that evidence affects those degrees of belief in ways governed by the theorems of probability theory, such as Bayes’ theorem. In particular, a theory T is B-confirmed by evidence E if E increases the degree of belief in T. For instance, assume that scientists believe general relativity to be true with a probability of, say, 0.7. This probability P(T) is called the prior probability of general relativity. After the detection of gravitational waves, scientists suitably update their degrees of belief in T; that is, they assign T a new probability in light of the new evidence E. This updated probability is called the posterior probability of T given E, commonly written P(T/E). BCT says that E confirms T if the posterior probability of T is greater than its prior probability. Continuing the example, if the updated degree of belief in T given E is now 0.8, then E confirms the theory T.

But how are the degrees of belief updated? BCT says that Bayes’ theorem provides the link between prior and posterior probabilities: P(T/E) = P(T) × P(E/T) / P(E). That is, the posterior probability of T is the prior probability P(T) multiplied by the ratio of the likelihood of E, P(E/T), to the expectedness of E, P(E). The likelihood of E is the degree of belief in E given T: for deterministic theories like general relativity this is 1, while for probabilistic theories it is the physical probability assigned by the theory. The expectedness of E expresses the degree of belief in E regardless of whether T is true. This is supposed to capture how ‘surprising’ the evidence is, and the idea is that the less the evidence is expected, the more it confirms the theory.
Technicalities aside, BCT is extremely popular because it seems to capture many intuitions about confirmation that HD-confirmation could not account for. Besides characterizing confirmation in terms of theory credibility, BCT avoids the problem of irrelevant conjunction: T&D has a lower prior probability than T alone, and is therefore less confirmed by E.
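The Bayesian update in the running example can be sketched numerically. The prior of 0.7 and the likelihood of 1 (general relativity being deterministic) come from the article; the expectedness value of 0.875 is an assumption chosen so that the numbers reproduce the article’s posterior of 0.8:

```python
def posterior_probability(prior, likelihood, expectedness):
    """Bayes' theorem: P(T/E) = P(T) * P(E/T) / P(E)."""
    return prior * likelihood / expectedness

# Prior degree of belief in general relativity: P(T) = 0.7.
# Deterministic theory, so the likelihood P(E/T) = 1.
# Assumed expectedness P(E) = 0.875 reproduces the article's posterior.
posterior = posterior_probability(prior=0.7, likelihood=1.0, expectedness=0.875)

print(round(posterior, 3))  # 0.8
print(posterior > 0.7)      # True: posterior exceeds prior, so E B-confirms T
```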

Let us now go back to the original question: what about the case of gravitational waves? Whether their detection B-confirms general relativity depends fundamentally on whether the expectedness of gravitational waves is low. That is, it depends on our degree of belief that there are gravitational waves, regardless of whether general relativity is true: if gravitational waves are a surprising finding, then general relativity is B-confirmed by them. At first glance, this seems not to be the case: we expected to detect gravitational waves, we had been looking for them for a very long time, we had spent a lot of money to build suitable detectors and screen off all possible interferences, and we were not very surprised when they were finally detected. Nevertheless, we expected them only because we already believed in general relativity. As such, the expectedness of gravitational waves is low, and so they B-confirm general relativity.

But all that glitters is not gold: BCT has problems too. One is that ‘old’ evidence does not B-confirm a theory. If a piece of evidence E is already known, then its expectedness P(E) is 1. Because of this, the posterior probability of T will not be greater than its prior probability, and thus old evidence does not confirm the theory. But this is extremely counterintuitive: the anomalous precession of Mercury’s perihelion had been known for a very long time, so it was old news; nevertheless, when it was shown that general relativity could account for it, this was taken as confirming evidence for the theory. Even if this is not the case for gravitational waves, where the evidence is indeed new, it remains a problem for anyone trying to figure out what this elusive notion of confirmation really is.
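The old-evidence problem falls straight out of the formula. A quick illustrative check (same assumed prior of 0.7 as in the article’s example): with E already known, P(E) = 1, and for a deterministic theory P(E/T) = 1 as well, so the posterior simply equals the prior:

```python
prior = 0.7        # degree of belief in T before considering E
likelihood = 1.0   # deterministic theory: P(E/T) = 1
expectedness = 1.0 # E is old evidence, already known, so P(E) = 1

posterior = prior * likelihood / expectedness
assert posterior == prior  # no boost: old evidence fails to B-confirm T
```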


About the Author

Valia Allori is an Associate Professor of Philosophy at Northern Illinois University. She has worked in the foundations of quantum mechanics, in particular in the framework of Bohmian mechanics, a quantum theory without observers. Her main concern has always been to understand what the world is really like, and how we can use our best physical theory to answer such general metaphysical questions.

In her physics doctoral dissertation, she discussed the classical limit of quantum mechanics, to analyze the connections between the quantum and the classical theories. What does it mean that a theory, in a certain approximation, reduces to another? Is the classical explanation of macroscopic phenomena essentially different from the one provided by quantum mechanics?


About Philosophy Compass

Unique in both range and approach, Philosophy Compass is an online-only journal publishing peer-reviewed survey articles of the most important research and current thinking from across the entire discipline. In an age of hyper-specialization, the journal provides an ideal starting point for the non-specialist, offering pointers for researchers, teachers and students alike, to help them find and interpret the best research in the field.

Read the Philosophy Compass here.

 

In Memoriam: Jaakko Hintikka (1929-2015)

Our condolences go out to the surviving family and colleagues of world-renowned Finnish philosopher and logician, Dr. Jaakko Hintikka, who has passed away.

His obituary is linked here, in Finnish.

Having taught at Florida State University, Stanford, the University of Helsinki, and the Academy of Finland, Dr. Hintikka ended his career as a professor emeritus at Boston University.

Over his career, Dr. Hintikka made great contributions to mathematical logic, philosophical logic, the philosophy of mathematics, epistemology, language theory, and the philosophy of science. He is credited as the main architect of game-theoretical semantics and of the interrogative approach to inquiry. Dr. Hintikka is also revered as one of the main architects of distributive normal forms, possible-worlds semantics, tree methods, infinitely deep logics, and the present-day theory of inductive generalization.

To celebrate Dr. Hintikka’s long life and career, we’ve made free a small collection of his articles.

Hintikka

Photo Credit: Australasian Association of Philosophy


Existence and Predication from Aristotle to Frege

Philosophy and Phenomenological Research | Volume 73, Issue 2, September 2006

 

Quine’s ultimate presuppositions

Theoria | Volume 65, Issue 1, April 1999

 

Wittgenstein on being and time

Theoria | Volume 62, Issue 1-2, April 1996

 

The Games of Logic and the Games of Inquiry

Dialectica | Volume 49, Issue 2-4, June 1995

 

On proper (popper?) and improper uses of information in epistemology

Theoria | Volume 59, Issue 1-3, April 1993

 

Overcoming “Overcoming Metaphysics Through Logical Analysis of Language” Through Logical Analysis of Language

Dialectica | Volume 45, Issue 2-3, September 1991

 

Metaphor and the Varieties of Lexical Meaning*

Dialectica | Volume 44, Issue 1-2, June 1990

 

Kant on Existence, Predication, and the Ontological Argument

Dialectica | Volume 35, Issue 1, June 1981

 

Language-Games

Dialectica |Volume 31, Issue 3-4, December 1977

 

Partially Ordered Quantifiers vs. Partially Ordered Ideas

Dialectica | Volume 30, Issue 1, March 1976

 

Quine vs. Peirce?

Dialectica | Volume 30, Issue 1, March 1976

 

The Prospects for Convention T

Dialectica | Volume 30, Issue 1, March 1976

 

The Question of?: A Comment on Urs Egli

Dialectica | Volume 30, Issue 1, March 1976

 

Comment on Professor Bergström

Theoria | Volume 41, Issue 1, April 1975

 

Quantifiers vs. Quantification Theory

Dialectica | Volume 27, Issue 3-4, December 1973

 

‘Prima facie’ obligations and iterated modalities

Theoria | Volume 36, Issue 3, December 1970

 

“Knowing oneself” and other problems in epistemic logic

Theoria | Volume 32, Issue 1, April 1966

 

Distributive Normal Forms and Deductive Interpolation

Mathematical Logic Quarterly | Volume 10, Issue 13-17, 1964

 

Modality and Quantification

Theoria | Volume 27, Issue 3, December 1961

 

*Written by Jaakko Hintikka and Gabriel Sandu

In Memoriam: Abner Shimony (1928-2015)

We are sorry to hear of the passing of Dr. Abner Shimony, noted American physicist and philosopher of science.

Having earned a doctorate in philosophy from Yale University and a doctorate in physics from Princeton University, Dr. Shimony was a professor emeritus at Boston University and leaves behind a lifetime of work investigating the connections between physics and philosophy. Dr. Shimony also served in the U.S. Army’s Signal Corps of Engineers.

A detailed obituary and service information can be found here.

To honor Dr. Shimony’s life, we have made free a small collection of his work.


 

Introduction

Dialectica | Volume 39, Issue 2, June 1985

 

Concluding Remarks

Annals of the New York Academy of Sciences | Volume 480, New Techniques and Ideas in Quantum Measurement Theory, December 1986

 

On Martin Eger’s “A Tale of Two Controversies”

Zygon | Volume 23, Issue 3, September 1988

 

Degree of Entanglement

Annals of the New York Academy of Sciences | Volume 755, Fundamental Problems in Quantum Theory, April 1995

 

Multipath Interferometry of the Biphoton*

Annals of the New York Academy of Sciences | Volume 755, Fundamental Problems in Quantum Theory, April 1995

 

The Concept and Practice of Dialogue in Martin Eger’s Philosophy

The Philosophical Forum | Volume 39, Issue 4, Winter 2008

 

*Written by Michael Horne and Abner Shimony

A “brave new world” revealed, not?!?

Over two years ago I wrote a blog entry entitled “Brave New World.” In that entry I mused about the possibilities of the Large Hadron Collider (LHC) at CERN, about its search for the Higgs boson, and about the idea that everything we know about the world can change in the blink of an eye. When the LHC was started for the first time, there was a lot of excitement in the physics community, and particle physicists waited anxiously for results to surface. For over two years, however, the LHC was riddled with problems: magnets were broken or unable to hold the current, and other faults seriously handicapped the machine. The friendly competitors at Fermilab, near Chicago, suddenly had a chance to beat the folks at CERN. The Tevatron at Fermilab, however, was closed in 2011, though results from many of its experiments were still being analyzed and showed a definite possibility of a Higgs boson. In early July of 2012 the elusive Higgs boson, or at least a particle with the expected properties of the Higgs, was discovered at CERN. Peter Higgs himself was present, as were many physicists and observers from the wider particle physics community. But did Miranda’s brave new world appear?

New Naturalistic Philosophy Editor for Philosophy Compass

We’re delighted to announce the appointment of the new editor of the Naturalistic Philosophy section of Philosophy Compass, Edouard Machery.

Edouard is Associate Professor in the Department of History and Philosophy of Science at the University of Pittsburgh, a Fellow of the Center for Philosophy of Science at the University of Pittsburgh, and a member of the Center for the Neural Basis of Cognition (Pittsburgh-CMU). His research focuses on the philosophical issues raised by psychology and cognitive neuroscience with a special interest in concepts, moral psychology, the relevance of evolutionary biology for understanding cognition, modularity, the nature, origins, and ethical significance of prejudiced cognition, and the methods of psychology and cognitive neuroscience. He has published more than 60 articles and chapters on these topics in venues such as Analysis, Behavioral and Brain Sciences, The British Journal for the Philosophy of Science, Cognition, Mind & Language, Philosophical Transactions of the Royal Society, Philosophical Studies, Philosophy and Phenomenological Research, and Philosophy of Science. He is the author of Doing without Concepts (OUP, 2009), and he has been an associate editor of The European Journal for Philosophy of Science since 2009. He is also involved in the development of experimental philosophy, having published several noted articles in this field.

Interview: The Art of Comics: A Philosophical Approach

Aaron Meskin is Senior Lecturer in Philosophy at the University of Leeds. He is the author of numerous journal articles and book chapters on aesthetics and other philosophical subjects. He was the first aesthetics editor for the online journal Philosophy Compass, and he co-edited Aesthetics: A Comprehensive Anthology (Wiley-Blackwell, 2007). He is a former Trustee of the American Society for Aesthetics and is Treasurer of the British Society of Aesthetics.

Roy T Cook is an Associate Professor of Philosophy at the University of Minnesota, a Resident Fellow of the Minnesota Center for Philosophy of Science, and an Associate Fellow of the Northern Institute of Philosophy (Aberdeen). He works in the philosophy of logic, the philosophy of mathematics, and the aesthetics of popular art. He blogs about comics at:  www.pencilpanelpage.wordpress.com

Philosopher’s Eye: Why did you two decide to edit The Art of Comics: A Philosophical Approach?

AM: I thought there was enough good work out there being done on comics that someone could produce a good book on the subject matter. I like to work collaboratively, so when I met Roy it seemed like a good idea to work together. I suppose there’s also a sort of selfish reason–philosophy is about conversation and I wanted more conversation (and more interlocutors) on a topic I care about.

RTC: Aaron was nice enough to ask me – someone with no prior professional experience in aesthetics – to comment on a three-paper session on comics at an aesthetics conference. The volume was conceived over coffee at the same conference, based on the positive response to the papers and resulting discussion.

PE: What’s the central concern of the book, and why is it important?

AM & RTC: The book focuses on the aesthetic issues that are raised by the art form of comics. It is not philosophy ‘in’ or ‘through’ comics–the basic idea is Continue reading “Interview: The Art of Comics: A Philosophical Approach”