Did Reason Evolve For Arguing? – Hugo Mercier

August 15, 2011

Why are human beings simultaneously capable of reasoning, and yet so bad at it? Why do we have such faulty mechanisms as the “confirmation bias” embedded in our brains, and yet at the same time, find ourselves capable of brilliant rhetoric and complex mathematical calculations?

According to Hugo Mercier, we’ve been reasoning about reason all wrong. Reasoning is very good at what it probably evolved to let us do—argue in favor of what we believe and try to convince others that we’re right.

In a recent and much discussed paper in the journal Behavioral and Brain Sciences, Mercier and his colleague Dan Sperber proposed what they call an “argumentative theory of reasoning.” “A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis,” they write.

Given the discussion this proposal has prompted, Point of Inquiry wanted to hear from Mercier to get more elaboration on his ideas.

Hugo Mercier is a postdoc in the Philosophy, Policy, and Economics program at the University of Pennsylvania. He blogs for Psychology Today.



This is Point of Inquiry for Monday, August 15th, 2011. Welcome to Point of Inquiry. I’m Chris Mooney. Point of Inquiry is the radio show and the podcast of the Center for Inquiry, a think tank advancing reason, science and secular values in public affairs and at the grassroots. Why are human beings simultaneously capable of reasoning and yet also so bad at it? Why do we have faulty mechanisms like the confirmation bias embedded in our heads, and yet at the same time find ourselves capable of brilliant rhetoric and complex mathematical calculations? According to Hugo Mercier, we’ve been reasoning about reasoning all wrong. Reasoning is very good at what it probably evolved to let us do: argue in favor of what we believe and try to convince others that we’re right. In a recent and much discussed paper in the journal Behavioral and Brain Sciences, Mercier and his colleague Dan Sperber proposed what they call an argumentative theory of reasoning. A wide range of evidence in the psychology of reasoning and decision making, they write, can be reinterpreted and better explained in light of this hypothesis. Given the discussion this idea has already prompted, Point of Inquiry wanted to hear from Mercier to get more elaboration on his ideas. Hugo Mercier is a postdoc in the Philosophy, Policy and Economics program at the University of Pennsylvania. He also blogs for Psychology Today. Hugo Mercier, welcome to Point of Inquiry.

Hi, Chris. Thank you very much for having me. 

It’s good to have you on. I wanted to talk to you about this theory of yours, the argumentative theory of reasoning, which has gotten quite a lot of attention lately. And I’m hoping at the outset that we can, I think, talk calmly about it and not have an argument about it. Or should we have an argument about it?

Well, if we want to get better ideas about the theory, maybe we should have a civil argument about it. That might be good.

OK, well, first, tell me how the idea first developed. You were working with a scientist named Dan Sperber, who is in France. He’s a cognitive scientist. And as I understand it, he was trying to understand why we have all these cognitive biases. 

Yes, exactly. So Dan was looking at the literature on reasoning and decision making, in the psychology of reasoning and decision making, and he realized that there was a major mismatch between what people thought reasoning was designed to do and what it actually does. Most people think that reasoning is designed to help the lone reasoner arrive at better beliefs and better decisions. It’s supposed to help you when you’re, you know, weighing the pros and cons of a decision, or deciding whether you should believe something on your own; reasoning should help you do that. But what he realized, and what was sort of hiding in plain sight, was that reasoning doesn’t work so well. It doesn’t do that very well. And so that prompted him to ask, well, then why do we reason? Why are humans able to reason? And he elaborated this idea that the function of reasoning is, in fact, not to help the lone reasoner but to help people argue, to help people solve problems and arrive at better beliefs in groups when they’re talking with each other.

Mm hmm. Let’s talk more about the biases in particular. In your paper that kicked all this off, you talk about two that I noticed. One is, you know, the confirmation bias, and another is just motivated reasoning. In other words, following your emotions to a conclusion that you want to reach, not one that is necessarily warranted by the evidence.

Those are the main ones, yes. There are some other biases, and there’s this phenomenon of reason-based choice that maybe we can come back to later. But I’m happy to talk a bit about the confirmation bias and motivated reasoning first.

Yeah, well, certainly. Just tell everybody what they are first; I know what they are, but I think we should hear that. But then my ultimate question is why. Why do they exist in our heads? I mean, this is an evolutionary psychology account, and I think it’s hard for people to imagine why evolution would create an animal that can reason and yet does so badly. You would think that there would be a survival advantage to getting it right.

That is a very, very good point. And that is exactly the question that prompted Dan’s inquiry into this matter. And so our answer is that people are actually very, very good at reasoning when we see reasoning as being designed for arguing. So when you think of two people arguing, the one who is at some point, you know, thinking of an argument should have a confirmation bias. It makes a lot of sense for that person to have a confirmation bias, which is the tendency to find beliefs and ideas that will support your already existing beliefs. And so, you know, if I want to convince you of something, well, I’m not going to be interested in arguments that support your side or that go against my side. I want to find arguments for my side. And so if we look at reasoning that way, then it makes a lot of sense to have a confirmation bias. But the thing that is really crucial is that if the two of us are disagreeing and I have a confirmation bias for my side, then you’re going to have a confirmation bias for your side. And as long as we’re not in complete denial, as long as we have some hint of good faith, in that we’re actually listening to the other person’s arguments, then these confirmation biases should not actually create any problem. In a way, you can even think of the confirmation bias as a kind of division of cognitive labor. So instead of each of us having to think through all the possible arguments, pros and cons, for all the solutions, you know, you find arguments for your solution, I find arguments for mine, and we evaluate each other’s arguments. And in the end, the result should be good for both of us.

And this would also be similar to saying that I have a confirmation bias when I make my argument to you. You, hearing my argument, assuming you disagree with me, have what I guess I would call a disconfirmation bias.

You look for mistakes in my reasoning, then you argue back. I look for mistakes in your reasoning. 

Yes, that would be the idea. And this kind of disconfirmation bias, as you call it, is just another form of the confirmation bias.

So when you agree with something, you have a tendency to find evidence that confirms that belief. And when you disagree with something, you have a tendency to find arguments and evidence that disconfirm that belief. In both cases, you want to confirm your intuition. But the thing that is important is that even if you tell me something that I disagree with, and I have an initial inclination to find arguments against it, I should still be able to hear your arguments and be convinced. If my bias were so strong as to forbid me entirely from ever accepting an argument, then the whole point of argumentation would be moot and people would not be arguing with each other. So even though I have a bias, it should be fairly reduced when I’m evaluating arguments. When I’m trying to decide whether your arguments are good, my goal should just be to decide whether it is a good argument and whether I should change my mind or not.

Well, I think sometimes the bias does become big enough that that’s not possible. But let me just summarize, if I understand the theory right, in its implications. One is that, you know, if you’re a lone person reasoning by yourself, you run the risk of being a crazy hermit in the wilderness, or, as Frank Zappa said, you know, your daddy has this tacky little pamphlet that he keeps in his bottom drawer. Like, you know, when people reason alone, they come up with craziness. And when people reason in groups where everybody agrees with them, then they’re not a hermit in the wilderness, but they’re more like a cult, where they all, you know, say the same thing. And that’s not working either. So what you’re saying is you have to reason with people who disagree with you.

If I could just add one thing. The phenomenon you are describing, people reasoning in a group but with people who agree with them, is usually called, as, you know, group polarization. And even though it’s usually a bad thing, there are cases in which it’s a good thing. So you should imagine people, you know, fighting slavery in 18th century America. It was probably good that, you know, the Quakers got together and all agreed that slavery was bad and, you know, tried to find solutions against it and became, you know, even more extreme in their belief that slavery was bad. So there are exceptions. It’s not always bad to become polarized. But in the vast majority of cases, it probably is.

I want to clarify another thing, and that’s about the concept of truth. Some might interpret your theory as suggesting that reasoning didn’t evolve to help us attain truth. I’m not sure that’s what you mean. It’s more like it didn’t evolve to get us to truth unless we’re in the right context, which is a deliberative context. Is that right?

That is exactly right. So reasoning is not designed to help us arrive at the truth on our own. Sometimes that does happen; you can think of maybe, you know, the lone mathematician, that sort of thing. So it’s not that it never happens, but it doesn’t work very well on the whole. But when we’re arguing with other people, then reasoning should help us communicate better beliefs. So if you believe something and I believe otherwise, then through argumentation the one of us who is closer to the truth should be able to convince the other one. Obviously, that doesn’t work every time, but the efficiency of reasoning comes from its ability to communicate good beliefs. So if I’m right, I should be more likely to convince you than the other way around.

One objection here. I guess this is a blanket objection, and I’m sure you’ve heard it because it’s the one that’s thrown at everything in evolutionary psychology: oh, you’re telling a just-so story. And a just-so story is, you know, you see a giraffe that has a long neck, or you see a human who has a flawed ability to reason, and so you try to make up a story to explain why that particular attribute exists, because evolution wanted to make things like this. But you can’t verify the explanation because you can’t go back in time. Whatever human reason can do, it can’t create a time machine, so you can’t see it happen. How do you respond to that?

That is a very good point, and it is indeed often used to attack evolutionary psychology. Let me give you an analogy. Imagine you’ve never seen a nail and you’ve never seen a hammer. I think you don’t need to know who made the hammer, or through what process it was made, to realize that the hammer is a very good tool for nailing the nail. In principle, you can look at the hammer and see that it has a convenient handle, and that it has a convenient, heavy and solid part at the other extremity. And then if you try to hammer, you will verify that it works very well at nailing the nail. Now, if you compare that to a screwdriver, you can see that the screwdriver is not going to be very good at nailing nails. And if you actually try to nail with the screwdriver, it’s not going to work very well. And in neither case do you need to know how the tools came to be designed, or what the history of the tool is. And you can do exactly the same thing with psychological mechanisms. So people have been suggesting, and people are still suggesting, that the function of human reasoning is to improve on individual cognition. And so we can look at whether reasoning does that well, and on the whole it doesn’t. And we can look at the design of reasoning, and we observe that reasoning is designed to be biased, and that doesn’t fit with the function of helping individual cognition. Then we can suggest that reasoning has this argumentative function instead. We look at the performance of reasoning and we see that reasoning performs well in groups, that people are very good at arguing. And we look at the design of reasoning and we see that reasoning has the design we would expect of a mechanism that evolved for arguing. So it is indeed entirely possible, and people do that all the time in evolutionary biology and also in evolutionary psychology, to test evolutionary hypotheses without looking back in time. We can just look at the mechanism now and see whether its performance matches the expected performance and whether its design matches the expected design.

One other objection that comes up, and I don’t know if you think this is a serious one, but I’m going to throw it out: there’s this problem of group selection as opposed to individual selection. You know, we talk about what level natural selection operates on. Does it operate on the gene, the individual, the group? It sounds to me like maybe your theory says reasoning is for the good of the group. Or is it for the good of the individual?

So basically, I think we are agnostic in that respect. But the thing that is important is that reasoning does not only benefit the group; it also benefits the individuals. Again, if we take the very simple case of the two of us disagreeing and exchanging arguments, and then the one of us who is right manages to convince the other, well, usually if you are right and you wanted to convince me of something, then it means that you had some incentive, some interest, in convincing me. Maybe if we share the same belief, then I’ll be more willing to help you, or something like that. And if you are right and you’ve managed to convince me, then I have better beliefs too; I’m better off. So both of us individually are better off, which also means that the two of us as a group are better off, and the group is also better off. But we don’t know whether that is just a byproduct of, you know, helping individuals, or whether it played any role in the evolution of reasoning. My best guess would be that it was purely at the individual level, but I don’t have any sort of strong arguments one way or the other, I guess.

Okay. Another recent trend in psychology and in cognitive science has been all about showing that thinking and feeling aren’t different. I mean, they’re part of the same process, or at least the emotions drive the reasoning process; they feed into it and they guide it. How does that relate to your theory?

That’s another very good question, and unfortunately I probably haven’t given it as much thought as I should have. The thing I think may be important to stress is that when we think of the confirmation bias and motivated reasoning, we tend to think of them as happening because of emotions. And it’s partly true that in many cases people get upset when you question their beliefs, and they get even more biased. But one of the things that is interesting is that the confirmation bias can also happen in cases where very little or no emotion is involved. So if you give people an abstract reasoning task, something that has to do with logic or mathematics, even though they have no emotion and no commitment to their answer, they are going to display a strong confirmation bias. So even though emotions may make things worse or better, they’re not necessary for reasoning to display biases.

So here’s the thing: I like your theory. I wouldn’t have wanted to have you on this show if I didn’t find it compelling in many ways. But here’s the thing that, after I’ve run through those objections, I really don’t get. I’m trying to imagine a group of early humans, and then I’m imagining how scientists behave today. I think scientists are the ideal of your theory of reasoning; hopefully that’s, you know, as far as we’ve gotten.

Yeah. It’s not bad. 

Well, the point is there’s a big gap between them. And, you know, we’ve really only had a scientific enlightenment once in human history, in terms of a modern scientific enlightenment. Most of the time, I would think people are engaging in all kinds of groupthink, and in a lot of cases, dissenting from the group and trying to engage in this public reasoning process could get you killed.

That may be true of some beliefs. But I think that our theory mostly addresses sort of daily-life beliefs. So if you take sort of, you know, naive hunter-gatherer examples, because, you know, that’s the environment in which we evolved: if I think that, you know, we’re more likely to find prey in the east and you think we’re more likely to find prey in the west, then we can have an argument about it without jeopardizing our, you know, our cultural beliefs or anything like that. And I think argumentation mostly evolved to sort out that kind of daily-life problem, where we have very sort of commonsensical issues. It didn’t necessarily evolve to deal with, you know, questions of eternal truth, you know, what is the world made of, and the questions that scientists are trying to address now. Those questions have very little fitness consequences. You know, knowing the atomic composition of water was not going to really help anybody reproduce back then, or even now, for that matter. But even now, when you think of all the arguments you have with your partners, with your friends, they’re not the ones we usually recall, but the vast majority of them are about these, you know, sort of daily-life, common things. And in these cases, I think reasoning works rather well.

Okay, I guess that’s fair enough. I mean, on this show, and, you know, in the skeptic-humanist community generally, I think maybe we focus on the extreme worldview clashes because that’s what we feel like it’s our job to focus on. And I agree that most arguments, you know, in the broadest definition, are not really about those.

They’re much more about ordinary things. So based on your theory, coming into the present moment, what is the ideal argumentative context? I mean, give me a case where groups do it really well.

I read in one of your papers that if you have a group of people working on a math or logic problem, they do better than one person working on a math or logic problem. 

So how do you set that up? Do you just get any group in any context? I mean, do people need to be face to face? I would think that face to face makes people more reasonable and more willing to listen.

So the thing is, when people interact with each other, there are many other things besides reasoning that take place. So maybe, you know, you’re going to be flirting with someone. Maybe you’re trying to save face. Maybe you’re engaging in some strategic interaction to achieve some goal. So at the same time as you want to promote reasoning, there are also other things that you may want to promote, like, you know, politeness or civility, and other things that you might want to reduce as much as possible. So it’s not only a question of favoring reasoning. But I guess one thing, and maybe the thing that is crucial for reasoning to work well, is interaction.

So people have to be able to interact, in that they exchange arguments. I give you an argument, then you give me a counterargument; I give you another argument, you give me another counterargument. And this usually works better in face-to-face interaction, because we’re just wired to work that way. We’re wired to be very sensitive to small cues: you know, a small silence that will tell me that it’s my turn to speak, or maybe you’re going to blush, which is a sign that I shouldn’t, you know, push on that topic. All of these things are harder to replicate online. And so I guess my ideal context would indeed be a face-to-face conversation with a small group of people, something like four or five people, because conversation tends to break down when you have more than five people. That’s something that you can often observe at dinner parties. If you have maybe four people, you will have one conversation, actually, you know, people exchanging ideas or telling stories.

If you have more than that, either it breaks down into smaller groups, or you have, you know, someone saying something and then someone saying something else, and it’s not quite the same dynamic. So, yeah, I guess my ideal context would be small groups of people who have some kind of common interest, in that they have to be interested in collaborating in solving some problem together, but they have to disagree. They have to disagree as to the best way to solve the problem.

And I know that this also clearly gets us into the territory of what’s called deliberative democracy theory. And, you know, I don’t know the field in detail; I know it exists. And my understanding is that sometimes when people come together in groups, and I don’t know if this is even in person, sometimes if you’ve got two groups with strong views that are different, then they polarize, right? They become more sure of themselves. And sometimes you get them to move closer together. What is the difference, and how do you set it up to get one outcome as opposed to the other outcome?

I wish I knew the answer to that. I guess it is going to depend on quite a few other critical factors. It’s possible that there is a threshold, that if people are just too opposed, then it’s going to be very, very hard to bring them together. It’s also possible that the nature of the task plays an important role. So if you’re purely talking about ideology, well, OK, let’s take a group of very extreme Democrats and very extreme Republicans and have them talk about abortion. Maybe they’re going to polarize. But if you have the same people talk about something that is more directly practical, like how to reduce the deficit or some such topic, maybe things will get better.

Do you think that that may be the trap?

I mean, I think what’s important is that I’m not talking about politicians. I’m talking about sort of people who would not necessarily be strongly involved. I mean, people who have political ideas but who don’t have any sort of incentive to stay consistent because, you know, they’re going to be punished by the voters or anything like that. People who are, you know, just citizens. And so, just very quickly, I guess the central insight of deliberative democracy is that voting, for instance, is not always the best way to aggregate opinions. And this is something that I think our theory captures quite well. So if you take one of these logic problems that psychologists really like and people suck at, and that’s why psychologists like them, and you take a group of 10 people, they’re nearly always going to vote for the wrong answer. But if you allow them to argue with one another, then the person who has the right answer will nearly always be able to convince everybody else that it is the right answer. So the question, from an institutional point of view, is when is voting the best solution and when is deliberating the best solution? And it seems as if, when people disagree, then oftentimes deliberating is going to do well; when they all agree with each other, then maybe voting will be better, to avoid group polarization. So it’s interesting to know, depending on the dynamics of discussion, when deliberation is more or less likely to bring good results.

Yeah, I think the key distinction here, if I can throw it out, is really, I mean, this is why I talked about emotion before.

If it’s a problem like a math problem or a logic problem where people don’t have a huge stake, they’re not emotionally committed to the outcome of a math problem, usually. I mean, I hope that they’re not. Then, yeah, if it’s hard and you’ve got a group and there’s one good math person or one good logic person, I’m totally cool with thinking that person can convince the group. If it’s an emotional issue, such that, you know, is the fetus alive, is an embryo, you know, a fully human being with full rights, I mean, then I don’t think that the same process is going to play out at all. And so I really think that it’s about whether you have an investment in the question.

So, yes, I think you’re mostly right. Maybe, though, the best way to look at that would be not necessarily in terms of emotion, even though emotions are going to correlate very strongly with the other factor, but in terms of how important the beliefs are to group identity. So if you have a belief that is central to your identity as a person, if you have a belief that is shared by your family, by your friends, by your colleagues, maybe, you know, by the people you hold dear, then convincing you to change your mind is going to be very hard, because you would be likely to pay a huge social cost if you were to abandon that belief. And usually those beliefs are also laden with emotions. But I would guess that even though, you know, the emotions are present, it’s mostly the fact that the belief is so crucial from a social point of view that makes it hard to change. And you can think of other beliefs that are very emotional where people might be able to change their minds, too. If, for instance, I want to convince you that your partner is cheating on you, it’s going to be a very, very emotional topic. But if I give you good evidence, you’re probably going to change your mind, or at least suspect that something is amiss. So it’s possible to find cases in which emotions run high, but people can still be convinced. And I think that’s when the belief is mostly personal, when it’s not something that is crucial to your social identity.

Fair enough. I think that that is a real distinction. Now, there’s one more sort of crazy deliberative, or non-deliberative, case that I encounter all the time that I want to ask you about. And this is: what do you say about the person who loves to argue and who wants to go out and find things that they disagree with, but just to attack them, and who is not really open to changing? I feel like I see this all the time on the Internet, on my blog. This is not a deliberative process, but it does involve going out, finding, and engaging.

Yes, it does. And I would guess the problem is that, from a cognitive point of view, we’re maybe more wired to find pleasure in convincing other people than we are to find pleasure in being convinced, even though, you know, being convinced that we were wrong should be a good thing. I mean, you know, getting true beliefs should be a good thing. And so you can even find people who just want to get the high out of finding arguments and trying to convince others, just, you know, for the sake of it in a way, because the behavior has become decoupled from its function, which was to work in an actual interaction in which you’re as likely to be convinced as you are to convince the other person. So it is slightly dysfunctional, and I think it’s very much brought about by the possibility offered by the media and by the Internet to not interact. When you’re commenting on a blog post, you don’t have to listen to the other guy’s arguments. You can just, you know, post your comment and leave it at that. Whereas in real life, usually when you’re arguing with someone, that person is going to counter-argue, and you’ll be forced, in a way, to listen to their arguments and maybe to change your mind. And sadly, the Internet doesn’t really force that; you can more easily avoid being confronted with other people’s arguments on the Internet if you really don’t want to be.

Well, let me ask you sort of the big, big question. And I don’t know if there’s an answer to this, but, you know, in human history there are some great moments for reasoning. I mean, we know what they are because they’re famous. There’s, you know, ancient Athens. There’s, you know, the Islamic enlightenment going on while Europe was in the dark ages. And then there’s, you know, Italy in the Renaissance. Is there any way your theory can begin to apply to these kinds of places and times?

Well, I think the possibility of open disagreement has been crucial in all these cases. If you look at Renaissance Italy, for instance, you had a lot of city-states, and intellectuals were able to navigate between the city-states; if they happened to get the ruler of Venice or Pisa very angry, they could move to another city, for instance. It’s the same if you look at China and the Warring States period, which was maybe one of the most intense flourishings of intellectual activity in China. There were, as the name indicates, a lot of different states at war with each other, and intellectuals were able to travel between these states and find a more enlightened ruler. And you can sort of find the same thing in ancient Greece: you had different city-states, and Athens itself encouraged disagreement and allowed disagreement on many topics. Not on every topic; they did, you know, try to ban Socrates. But I think if you foster disagreement on issues, then reasoning will work better. And, you know, if you stop people from disagreeing by, you know, forcing them to hold the official view, then you’re going to stifle reasoning and it’s not going to work so well.

Well, you know, Hugo, thank you. I think this has been a great discussion. And just, you know, to keep it consistent with the theme here, I guess I’ll offer you, you know, in closing, a chance for your closing argument.

Yes. So I guess if I want to emphasize something, it is that reasoning is good. People are very good at reasoning. But they should really try to do it, you know, with people who disagree with them. As you were saying at the outset, we should seek out people who disagree with us and try to argue with them, rather than, you know, staying in the comfort of our little group of people who vaguely agree with us and only talking to them.

Well, great. And on that note, I hope a lot of people who disagree with us will listen to this show. It’s been great to have you on. 

Well, thank you. Thank you very much for having me again. Thank you. 

I want to thank you for listening to this episode of Point of Inquiry. To get involved in a discussion about the argumentative theory of reasoning, or to get into an argument about it, please visit our online forums by going to centerforinquiry.net/forums and then clicking on Point of Inquiry. The views expressed on Point of Inquiry aren’t necessarily the views of the Center for Inquiry, nor of its affiliated organizations. Questions and comments on this show can be sent to feedback@pointofinquiry.org.

Point of Inquiry is produced by Adam Isaak in Amherst, New York, and our music is composed by Emmy Award-winning Michael Whalen. The show also featured contributions from Debbie Goddard. I’m your host, Chris Mooney.


Chris Mooney