Dan Kahan – The Great Ideological Asymmetry Debate

February 13, 2012

So who’s right, factually, about politics and science? Who speaks truth, and who’s just spinning?

It’s kind of the million dollar question. If we could actually answer it, we’d have turned political debate itself into a… well, a science.

And is such an answer possible? What does the scientific evidence suggest?

In this episode of Point of Inquiry, Chris Mooney brought back a popular guest from last year, Yale’s Dan Kahan, to discuss this very question, one that they’ve been emailing about pretty much continually ever since Kahan appeared on the show.

In the episode, Kahan and Mooney not only review but debate the evidence on whether “motivated” ideological biases are the same on both sides of the political aisle—or alternatively, whether they’re actually “asymmetrical.”

Dan Kahan is the Elizabeth K. Dollard Professor of Law and Professor of Psychology at the Yale Law School. He’s also the Eli Goldston Visiting Professor at Harvard Law School. His research focuses on “cultural cognition”—how our social and political group affiliations affect our views of what’s true in contested areas like global warming and nuclear power—and motivated reasoning. Before that, he served as a law clerk to Justice Thurgood Marshall of the U.S. Supreme Court (1990-91) and to Judge Harry Edwards of the United States Court of Appeals for the D.C. Circuit (1989-90).



Today’s show is brought to you by Audible. Please visit audiblepodcast.com/point to get a free audiobook download. This is Point of Inquiry for Monday, February 13th, 2012.

Welcome to Point of Inquiry. I’m Chris Mooney. Point of Inquiry is the radio show and the podcast of the Center for Inquiry, a think tank advancing reason, science, and secular values in public affairs and at the grassroots. At the start of the show, I want to let you know that this episode of Point of Inquiry is sponsored by Audible. Audible is the Web’s leading provider of spoken audio entertainment, information, and educational programming, offering thousands of books that you can download to your computer, your iPod, or a CD. And today it’s got a special offer for Point of Inquiry listeners: it’s willing to give you one audiobook download for free.

To participate, all you have to do is go to the following Web site: audiblepodcast.com/point. Once again, that’s audiblepodcast.com/point.

And since we just had Lawrence Krauss on the show, a very popular guest, let me note that his latest book, A Universe from Nothing, is available right now on Audible.

So that could be your free audiobook download. It’s right there, and you can get it for free.

So who’s right, factually, about politics and science? Who speaks truth, and who’s just spinning? It’s kind of the million dollar question. If we could actually answer it, we’d have turned political debate itself into, like, a science. Is such an answer possible? What does the scientific evidence itself suggest?

In this episode of Point of Inquiry, I brought back a popular guest from last year, Yale’s Dan Kahan, to discuss this very point, this very question. It turns out to be one we’ve been emailing about pretty much continually ever since he first appeared on the show, so it’s about time that we had this big discussion, even really this big debate, because you’ll see our views on it differ. So this time, I wanted him not only to unpack his influential theory, called cultural cognition, but to debate the evidence on whether ideological biases are the same on both sides of the political aisle, about science but about much more than just science, or alternatively, whether the biases are in some way, to use a term that Kahan himself started using, “asymmetrical.”

And by the end of the debate, well, you’ll see, we’ve settled forever who’s right and who’s wrong. Yeah. No, not really. But we’ve actually at least given you the contours of it. So I hope you enjoy it. Dan Kahan is the Elizabeth K. Dollard Professor of Law and Professor of Psychology at the Yale Law School. He’s also the Eli Goldston Visiting Professor at Harvard Law School. His research focuses on cultural cognition, or how our social and political group affiliations affect our views of what’s true in contested areas like global warming and nuclear power.

Before that, he served as a law clerk to Justice Thurgood Marshall on the Supreme Court and to Judge Harry Edwards of the U.S. Court of Appeals for the D.C. Circuit. Dan Kahan, welcome back to Point of Inquiry. Thanks. Good to be back.

It’s good to have you. And this is a bit of a departure from our normal shows, because you and I have actually planned, or we’ve discussed, that we’re going to quasi-debate some things that we’ve been going over ourselves for almost a year by email in a series of exchanges. So we’ve been planning this for some time, and it’s great that we can get this out. But before we do that, I want to start off by giving a bit of a recap of your research, and also what you’ve been up to in the last year since we had you on, to lay the groundwork, because I’m not sure that every listener will fully remember what cultural cognition is and how the different cultural groups behave with respect to their biases about science. So maybe let’s just dive in there.

OK, well, cultural cognition is a conception, I would say, of motivated reasoning as applied to the formation of beliefs about risk and policy-relevant science. I know that your listeners must be very familiar with the idea of motivated reasoning.

Oh, we’ve covered it to death. Yeah. Yeah.

Although, if there is anybody who hasn’t thought about it, a really good way to think about it is the classic study “They Saw a Game,” which, as you remember, was one where the researchers had students from two Ivy League schools watch film of a football game between their schools, where there had been some controversial calls made by the referee. And the students were supposed to evaluate whether the referee made the right calls. And they found, of course, that the students from Penn said he made mistakes when he called penalties on Penn, and the students from Princeton said he made mistakes when he called them on Princeton. This was motivated reasoning: their stake in affirming their membership in the group was actually affecting what they saw. So that’s what our research does. It tries to identify what the relevant groups are on issues involving policy-relevant science, and then all the ways in which people, in order to affirm their membership in those groups, approach information in a way that helps them to do that.

Well, let’s talk about the groups, too. There’s this great little grid image that you use. You divide it up into this Cartesian plane with four quadrants. And then, you know, I’ve made a PowerPoint for myself where I then make a Republican elephant appear in one quadrant and a Democratic donkey appear in the other. But unpack that, too.

Sure. So, I mean, there can be different kinds of understandings or specifications of what the group commitments are that are motivating the risk perceptions. And so, you know, I don’t claim that this is exclusive of one that would focus, for example, on party membership. But we use a slightly more complicated scheme: we have two scales, and they’re crosscutting, and they both measure different aspects of people’s outlooks about how society, or any kind of group they’re in, should be organized.

These are pretty familiar constructs, and they’re pretty ubiquitous across the social sciences. So one dimension is how individualistic or communitarian you are, and the other is how hierarchical or egalitarian you are. And I can elaborate on what those mean, but they really do mean pretty much what you would expect. And then we say that the different combinations of values or outlooks that you can form with this kind of crosscutting set of scales are like the teams in the “They Saw a Game” study. These are the kinds of associations that people have that motivate them to form perceptions of risk, and of other kinds of policy-relevant facts, that are congenial to that group.

And I’ll just, you know, for our listeners, we talked about this last time. 

The hierarchs seem to team up with the individualists in the Republican Party today, and the communitarians seem to team up with the egalitarians.

Right, although, you know, when you and I talk, we’re almost always focusing on, I mean, people envision the grid with hierarchy and egalitarianism running vertically and communitarianism and individualism running horizontally, and we’re thinking about the hierarchical individualists in one quadrant and the egalitarian communitarians in the other. But that’s because we’re talking so much about environmental risks, even going back to debates about nuclear power, which is where this theory got its start with Mary Douglas and Aaron Wildavsky. Those have been the operative combinations of values that have squared off. But we look at plenty of other issues where the conflicts pit groups with other kinds of values against each other. As far as Republican and Democrat, I mean, that’s fine. It is the case that hierarchy and individualism will correlate to some extent with Republican and conservative, but not perfectly.

So one thing you’ve done, interestingly, since we last interviewed you, is you showed that these cultural values, if you’re in one of these groups, are more powerful than how much science you know, or how good you are at mathematical reasoning, in determining how you view a scientific topic.

Yeah, well, actually, I mean, that’s one way to put it. I think another way to look at it is that they reinforce each other.

So one possibility would be that people are relying on the cultural values, or their team membership, as it were, as a kind of substitute for an ability to understand scientific information on their own. It is hard, after all, to make sense of all the technical information, and you have to use shortcuts. And maybe group membership is just a shortcut. If you start with that way of looking at it, then you might think that the tendency to form perceptions that fit your group identity would diminish as people become more knowledgeable about science, or as they become more adept at technical reasoning, the kind of thing that Kahneman would call System 2, or slow reasoning: the ability to handle quantitative information and to think in a disciplined, logical way. What we find is that the cultural outlooks actually become even more important as people become more science literate and as their technical reasoning capacity goes up. So you have polarization among people who are low in science literacy and technical reasoning capacity, but it’s even bigger among the people who are high in technical reasoning capacity and scientific knowledge. So it’s not a substitute. It’s kind of like a magnifier, right? I mean, one interpretation would be that the motivation people have to fit their perceptions to these values actually becomes easier for them to act on as they become more knowledgeable about science and more adept at making arguments. Hugo Mercier talks about people being like lawyers. Well, these are the best people at being lawyers, at gathering evidence and convincing themselves. But that’s the kind of dynamic we were finding.

Right. In my sort of perverse, we’re-never-gonna-succeed-at-communicating-science way that I get into sometimes, I think: OK, so we’ve finally killed the deficit model completely here, which is the idea, as it’s called, that giving people more facts and more information will make them more accepting of what’s true. I mean, what you show is that giving people more facts and information, or at least the ability to analyze them, makes them more biased.

That could be, although we should be careful here to avoid another kind of bias, which is, I guess, the availability heuristic: when you focus a lot on really salient instances of something, you tend to overstate its frequency. And what we’re looking at here is something that’s pathological. We’re looking at all these cases, climate change being the best one, where you see this kind of dramatic cultural polarization. But we’re ignoring all the cases where there isn’t any cultural polarization; we’re ignoring the denominator. In fact, the number of issues where there isn’t this kind of polarization is orders of magnitude larger than the number where there is. And in those cases, too, I think people are relying on cultural cues, but the cultural cues aren’t pushing them in opposite directions. So, you know, I don’t think the problem is that we have a diseased process for conveying information and having people recognize who knows what about what. I just think there are some kinds of pathologies that are out there. But I don’t think, in general, it would be better if people didn’t know science or if you didn’t give them information. So I think we should figure out why things go wrong here.

And I just want to talk about one more recent study you did, before we get into sort of what it means for who’s right and who’s wrong, which is, of course, what everybody wants to know. You know, this is the study that showed...

And I think that it neatly showed why your theory works: if you assume this cultural cognition perspective, then you can figure out how to depolarize an issue, which is that you just frame it differently, so that suddenly it appeals in a different way to one of the groups and makes them no longer defensive. It actually makes them interested or, I don’t know, even enticed. And you did this with climate and geoengineering.

Right. Exactly. And the way I think about it, the model here of communication, and this relates to what I was just mentioning a few seconds ago, is that people are receiving information about science along two channels. One is the content. And so obviously you want to convey correct content, and you want to make it accessible to ordinary people. But the other is a kind of cultural significance or meaning channel. And if the information being conveyed on that channel is hostile to people’s cultural commitments, then they’re going to close their minds to it, and they’re going to resist it. And that’s basically what we’re seeing with climate change: giving people more information, they could actually become more polarized. But that second channel is amenable to some adjustment. And so if you can combine the content you want with a meaning that is congenial to the group that otherwise is reacting defensively, then you should be able to offset that. The hypothesis was that the meaning of climate change information tends to be threatening to people with hierarchical and individualistic values, because they associate it with restriction of, and hostility to, commerce and industry, and those are activities that are important for their values. Geoengineering has a different kind of connotation for them. They like technology; they like nanotechnology, for example. They are heartened to think that there might be things we can do that would enable us to place fewer restrictions on commerce and industry. So when they’re made conscious that that’s a possible response to climate change, it’s not as threatening to consider the information that’s coming along channel one, which was information about how serious a problem climate change is and how we have to do something about it.

So I just want to ask one more question about what this means, the implications for communicating about science in contested areas.

So I agree. I mean, I’ve been an advocate of, quote, framing science. We had a big framing-science debate back in 2007, 2008 about this very thing. Or at least I see this as consistent with the idea that if you frame it differently, then people aren’t threatened, and then they actually open their minds. But the problem is that this is communicating in the real world, not in an experiment. You are not going to be able to control frames. I mean, that’s ultimately what’s going to get you every time. I mean, maybe a society as a whole frames things, you know, as it evolves in thinking about them, and those become the overarching cultural meanings. But really, at any given time, you can’t even get scientists on the same page. And even if you could, you couldn’t get everybody else on the same page.

Well, you know, I agree with you. I mean, first of all, I agree that what I’m talking about here is a conception, you know, an instance or application, of something that you were talking about a few years ago, writing with Matt Nisbet, about framing. Of course, you know, knowing that we want to frame doesn’t tell us to whom we want to frame, and how. Right. And so I see our research, with our measures of cultural cognition, as aiming to identify to whom you want to speak, and how. And part of the value of it is that it’s suggestive of hypotheses about what might work for framing. And it also gives you a kind of apparatus to measure whether that’s working. Otherwise, you just try lots of things and you don’t even know what’s working. But it’s exactly that kind of thing. The other point you raise is about, well, not to be jargony about it, what’s called external validity. I mean, something might be working in an experiment, but is that really a good model of what it is in the world that you’re trying to understand? And I think there are a lot of bad understandings of what it is we should be trying to model, because we can’t be having, in our public deliberations about science, a kind of seminar on science. That’s not realistic. And, you know, that’s not how people, even when they process the information correctly, are doing it. So that’s definitely true. It’s also a lot harder to frame things in ways that remove the bad meanings, the threatening meanings, once they’re there, than it probably is to prevent them from getting there in the first place. And part of what’s difficult about removing them is what you said: even if you had ideas about how to do that that were effective...

You can’t control the entire communication environment, and people will get their information from lots of sources, including a lot of people who have an interest in maintaining the state of conflict, because they’re actually getting a benefit themselves out of it. They have a niche, as it were. That’s a horrible political-economy problem. Man, what do you want to do? Well, a quick example. We did some work for a group, the Arcus Foundation, on gay and lesbian parenting. They’re all the time involved in campaigns to try to make jurisdictions, states, more receptive to gay and lesbian adoption, or to prevent them from adopting laws that are going to restrict gay and lesbian adoption. When we do the work we do, we know that they and other advocacy groups will be in a position to target groups and get a message across. If we can help them to know to whom they should be talking, and how, I think we’re helping them out. But then, you know, they face, like you say, the tough task of going out there and competing and jockeying to get attention. I hope somebody else is working on that.

Fair enough. Well, I just wanted to show how your research has been fruitful over the last year, and some of the new things it’s done.

And then, you know, obviously we’re still in a hornet’s nest in terms of communicating science and the science of communicating science. The more light that it casts on things, the more it seems to also show how difficult it is, which is why, you know, not everyone in America is accepting of what some of us think they should be accepting of. But what I wanted to then get to, debating with you, discussing with you, is this idea: all right, fine. We’ve got the cultural groups. They engage in motivated reasoning, and they do it because they’re pushed by their values, in a way that’s at least partly subconscious, to do it. And they’re pushed in different directions because they have different values. Is that the end of the story? I think that’s what motivates me. And the reason I don’t believe it’s the end of the story is because I look all around me, and I write about the politicization of science, and I don’t see it in the same way on the two sides of the aisle. Now, I see it on both sides of the aisle, but I don’t see it in the same way. And I gather that you somewhat disagree with me about this, and your view is that it’s more symmetrical. So I guess let me let you make your case.

OK. So first off, I think it helps to kind of start with what’s at stake and kind of ground it. And I think we agree entirely about that, which is that we want to make sense of what we call the science communication problem: notwithstanding the abundant availability of really good scientific information on how to make policy promote the public interest better, people are engaged in tremendous political conflict about facts that admit of scientific investigation. Why does that happen? Can we predict it, explain it, and then do something about it? Now, we both agree, I think, that motivated reasoning plays a big role in this. It’s not the only thing, but it’s one thing. And this question about the symmetry or asymmetry, then, is: do we have reason to think that motivated reasoning is creating a bigger barrier on one side, either of an ideological spectrum or in one quadrant of a cultural map, than it is in the others? And, you know, the question is how that relates to the science communication problem. If it were the case that the motivated reasoning was concentrated in some particular way, that would be important, and you couldn’t have an understanding of the science communication problem that wasn’t cognizant of that and that didn’t try to address it as such. All right. So that’s how I frame it. And I would say, you have this book coming out that I’m very eager to read.

And my views are always provisional in my mind.

So they could change at any time, but especially now: I can anticipate that I’ll read something in the book that will be important to me. But based on the stuff I’ve done so far, I would say that I think the balance of evidence is that there’s not this kind of concentration, ideologically or culturally, in a way that’s of consequence for explaining, predicting, or figuring out solutions to the science communication problem. To the extent that it’s not related to that, I mean, maybe it’s a fault, but I’m not as interested. So the way I think of it, given the target, is: is there a degree of asymmetry that would be of consequence for understanding and fixing the science communication problem? That would be my position. Right. And I’ve compacted it a bit, and it has theory components and practical components. But I’ll stop there so that I can catch my breath, and you can respond to that framing of it.

Well, let’s just first introduce a distinction here. So, I mean, it seems to me that you could have asymmetry in bias for a couple of reasons.

You could have it because there’s something about the groups that inherently makes them differentially biased, which would be some kind of dispositional explanation: this is who they are. And I can spin an argument about that. But it could also be situational, in the sense that they’ve staked out ground that they must defend. And it seems to me that if you’ve staked out ground that you just can’t defend, or at least, you know, you’ve put yourself in conflict with the scientific community, then you probably have to reason in a motivated way more, just because you’re wrong. I mean, you know, in other words, if you’re going to keep taking this stance, you’re going to keep getting whacked for it, and you keep having to defend yourself for it. So it might be hard to disentangle those.

Oh, yeah. That would just be a kind of base-rate problem. I mean, we might think that the vulnerability to motivated reasoning is symmetric, but the occasions in which one will be forced to engage in motivated reasoning in the world are not evenly distributed across the groups. Right. And you would just happen to see one group putting itself in the position of having to exercise it more. Right. Those are different accounts, for sure. All right. So that’s an interesting point. But I’m willing to say, on the dispositional point, that I’m leaning against the idea that the disposition to engage in motivated reasoning on these kinds of science issues is concentrated on one side or the other. Right. And I’ll sort of give you a theory account of why that’s so. I mean, there are lots of reasons why people can get the wrong answer on science, and there are even a lot of reasons that have to do with motivation. You’re very knowledgeable about motivated reasoning, and you know that there can be all kinds of things that motivate people to construe information in a biased way, including self-interest. The work I do is premised on the idea that group belonging motivates people’s perceptions in all kinds of ways, just like in the “They Saw a Game” experiment with the students. Those students had a stake, an emotional stake, in affirming their membership in the group and being in good standing with their group, and that could be threatened by forming perceptions that would put them at odds with other people. That theory is what we use in our work on cultural cognition, with a certain understanding of what the groups are. But that theory, which has very general applicability, wouldn’t lead you to think that there’s going to be any reason for there to be an asymmetry. The groups, for example, can be so different in nature, right? Not just political ideologies but, for example, the colleges in that study. And the kinds of perceptions that can be influenced are so diverse.

You and I both focus a lot on how people handle evidence, scientific evidence and arguments. But as you know, even in the “They Saw a Game” study, even people’s perceptions can be influenced. So it’s not just a question of being closed-minded, right, which is what some other mechanisms have to do with.

Given the nature of the group-identity-protective disposition, and the various ways in which it would work, it just wouldn’t make a lot of sense for there to be a very significant asymmetry. And then, in the work that we do, I don’t think we see, at least, an asymmetry that would be of any kind of consequence. We see the kinds of results we would expect to see given our theory, a theory that doesn’t give you any reason to expect that there would be an asymmetry. And so that’s a kind of theory-ish sort of answer.

Well, it raises two points for me. And I don’t know how we’ll keep ourselves organized if we keep making two points rather than one, but just let me try to do it. So, first of all, I agree that your theory is a theory about people being biased, or motivated, based upon their group affiliations. So let me, you know, sort of cross the streams of your research with someone whose research is close to yours, to me anyway, which is Jonathan Haidt. And hopefully we’re going to have him on the show later. He finds left-right differences in morality, right, or moral foundations, and motivated reasoning on that basis. You know, again, people are being pushed to have certain forms of thinking based upon who they are morally. But one of the things he says is that conservatives have a much stronger group identification to begin with. That’s one of their moral foundations: you know, protect the group, or be loyal. I mean, one positive form of this morality is being loyal to your friends. Whereas, you know, the liberals are not as sensitive to that kind of sense of belonging. And frankly, I recognize this all the time in the people that I debate with, and among, sometimes on the same side with, so to speak. You know, the famous phrase “herding cats” I think applies in the scientific community, the liberal community. So that feels right to me. And that suggests that this group thing is not necessarily evenly distributed. And then the second point is...

I don’t know, maybe that’s enough. But the second point is, I feel like in some of your own studies, if you take an issue where there’s right bias or, you know, hierarchical-individualist bias, which is like global warming, and then you cross it with an issue where there’s supposed to be left bias, like nuclear power, it seems that you see differences right there in some of your data.

Well, let me start with the first point. I mean, I’m going to be sure to listen to the show; Haidt is fantastic.

And he has another set of measures for characterizing dispositions. No doubt all of these things overlap to some extent. 

And the question then is, what kind of measure is going to allow you to do what you want to do most effectively?

But one thing I would say about his measures, and this is my understanding of his work over time: initially, he was very interested in motivated reasoning. And if you look at the rationale of the dog-and-tail paper, the famous paper...

“The Emotional Dog and Its Rational Tail.” Yes, exactly. And it’s about motivated reasoning.

And it’s about how people have moral commitments that are extrinsic to, maybe, the utilitarian judgments that they’re going to make, and they’re conforming those judgments, utilitarian judgments or legal judgments, to those other kinds of values. That’s a motivated reasoning framework. I think the really rich and interesting stuff he does now, with the differences in moral style, is not necessarily about motivated reasoning. What he says is that there are some groups that put more emphasis on identifying as immoral behavior that denigrates something sacred or pure, and then there are groups that put more emphasis on harm, I guess in a secular sense. And I think that that’s right. But those groups are involved in disputes about how their values relate to states of affairs. A group that says, well, I want to treat sodomy as illegal because it profanes something, and somebody else who says, no, I’m not going to, because even if I don’t like it, it doesn’t harm anybody. Right, the Hart-Devlin debate. There’s no motivated reasoning there. There are just two sets of values, and people are making conscious evaluations of states of affairs based on that. And I think a lot of what he talks about is that, and his way of characterizing it is extremely rich and powerful and suggestive and important. And it might even be the case that, if people are thinking about their conscious values, an idea of promoting group cohesion and community might be more central to one of those groups’ moral visions than the other. I accept that. And that probably has correlations with liberal and conservative, although, as you would tell me, I like his measures better than liberal and conservative. But then it’s not clear what follows from that for motivated cognition. It could still be the case that people who have those commitments, when they agree about the policy ends and disagree only about the facts, are going to be conforming their views to these group commitments. And even if a liberal group has a moral position that says we shouldn’t be using coercive power to promote community cohesion, that person still belongs to a community.

That person still cares about his or her status in the group. And the people in the “They Saw a Game” study, I think there were probably liberals and conservatives among them.

Right. Although it’s from the ’50s or something, isn’t it? Has that gone away?

It’s still going on, like, you know, psychologists versus economists. So it cuts across ideology. That’s in response to the first point. Yeah.

Now, to some extent, I think it probably does. OK, yes. 

The second point is, you say that in our work you have sometimes noticed that there seem to be asymmetries. And that might be possible. You know, we’ve done a lot of studies, and rarely are the effects uniform across the spectra that we’re using to characterize people’s values.

And, not to get too theoretical again, but, you know, in a sense, the effects you observe in the world are kind of noisy and lumpy.

And if I say, well, there’s a trend, you know, more of this, more of that... well, I can say that. But if there’s a little bit more of it here than there is there, that’s going to happen because the world is lumpy. And so I think we see a lot of that kind of thing in our studies, and I can find ones that go one way and ones that go the other. But I think the best kind of study isn’t one that tries to measure where the effect is located along the spectrum; that’s probably too abstract. As you know, a lot of our studies try to say, well, people are going to take a position on the relevance of a piece of evidence, the same piece of evidence, conditional on what we’re able to make them believe its significance is for their cultural outlook. Right? So in the geoengineering experiment, for example, we can show that egalitarian communitarians will adjust how convincing they find a piece of climate change science depending on whether they think the result is going to be that geoengineering is favored or not. If they think accepting this information means more emission controls, then they find it very persuasive. If they think it means geoengineering, then less so. And we find exactly the same for the hierarchical individualists, but flipped around. Right? So I’m not trying to find issues that are their pet issues and seeing which of them are easier to move. I’m asking them to consider one and the same thing.

And if they are actually then changing their view about how relevant that thing is, based on whether or not it happens to contribute to cultural ends that they support, that, to me, is evidence that the process is there in both. And, you know, if the process is there in both, I’m not sure what to make of trying to measure how much. It’s not a measurement issue with rulers; it’s a question of what the theory is. The theory predicts it’ll be there on both sides, and it is. Give me a theory that predicts it could only be there on one side, and let’s test that, you know.

But if I show it’s there on both sides, and I believe it is, then again the issue is: what do we do about the polarization problem?

And I think, even if somehow a difference in magnitude could be shown, there is enough of a problem on both sides that any kind of communication solution we have will have to think generally about motivated cognition, not about one or another group’s problem with being open-minded.

Well, having read lots of this stuff, I agree it’s there on both sides. In fact, I know a study in which it is definitely worse on the left, and we can talk about that. Your studies don’t ever seem to show it’s worse on the left; it seems equal.

I don’t know if you’d find it, quote, worse, but I’ve seen one. And it has to do with race, actually.

Well, here’s one thing, Chris. You have done this tremendous amount of research, and I’m really excited to see it. I mean, one reason I’m really excited to see it is that I know that you are committed to, and also able to, discipline your assessment of the evidence that you’re gathering.

And you have been scrupulous. You say we’ve been debating. 

I think we’ve been exchanging views and kind of pushing and prodding each other in a way that tends to promote knowledge. 

You’d avail yourself of it to point out something, even if I hadn’t seen it, that you think I should view.

But OK, so clearly people can be biased by their preconceptions in a direction. They can be, quote, pushed, and they don’t necessarily know what’s happening to them. And you can capture this in any number of people, for all these different motivations. So why do I therefore still care about measuring more and less? It’s really basically because, as a political reporter out in the real world, when I look at what people are doing, and I know that it’s multicausal why they’re doing what they’re doing, I don’t see anything like global warming denial, I don’t see anything like evolution denial, on the left. And I know that that’s partly cultural. It has to be. It’s partly about where our political debate is at this period in time.

But, you know, when I then see what I saw in one of your studies, which was, if I can summarize it: if you take the hierarchical individualists and you give them a scientist who agrees with them, a fake scientist, you made the scientist up, then 86 percent of them think he’s an expert. But if you give them a fake scientist who doesn’t agree with them, only 23 percent think he’s an expert. That’s a giant swing.

And that’s just based upon their bias, or where they’re starting from. And then I look at something like nukes, and I compare that to the real world right now. I say, OK, there’s this really strong rejection of global warming on the political right in the U.S. And then I say, OK, what’s some issue that’s kind of vaguely similar, because, A, it involves science; B, it appeals to the left’s moral impulses; C, it’s at least somewhat talked about in public policy? So I’ll try to find one. Nuclear power is often the one. Right.

And then you try to measure, you try to square things up. And first of all, you find, well, the left’s all over the place on nuclear power. President Obama seems for it. You know, there’s this guy at The Guardian, George Monbiot, who is like the classic liberal, who’s just constantly attacking greens for being anti-nuclear. Monbiot’s all pro-nuclear.

I don’t find the same resistance, beyond kind of isolated cases.

I mean, those are particular individuals.

And so I guess the question is, like, what happens with people just out in the world? And even in the study you’re talking about, except that there was a bigger effect. So you’re saying that on nuclear power, the hierarchical individualist swing was bigger.

It was big. And the swing was there on both sides. Sure.

People on both sides were much more likely to see the featured scientist, who has a Ph.D. and is a member of an elite institution and a member of the National Academy of Sciences, as an expert if the person was depicted as taking a position consistent with the position dominant in their group, and as not an expert if the featured scientist was depicted as taking the opposite position. The magnitudes in the raw data were bigger on the hierarchical individualist side; that would be true. My response is just that part of it could be lumpiness.

Yes. I don’t know. I can go back and find places where it looked like most of the effect we saw was concentrated on the other side. But it was there on both sides, I mean, significant on both sides. And you mention correctly that we’re looking at these studies and we’re trying to say, well, what do they tell us about the real world?

And all I say is, I see an effect like that in this kind of study, and that gives me a lot of reason to think something in the real world could happen that wouldn’t be good. And the effect size was big enough to make me have that anxiety even on the egalitarian communitarian, or liberal, side.

If we’d gotten effect sizes only that big on the other side, you know, it would still have been a good, you know, an interesting study.

So that’s my point: my interest in this issue is kind of motivated, as it were, but consciously, by an interest in trying to understand the science communication problem. I think it’s big enough all around to be addressed as part of the solution to the science communication problem.

But if people just want to know the answer in the abstract, you know, I agree that that might not be sufficient, and you keep trying to measure. You know, that could well be true.

Well, I mean, and let’s face it, honestly, there’s no reason to think that in the real world global warming and nuclear power excite the same passions at the present moment.

Well, it also changes over time, Chris. Because, as you know, the study of risk perception actually kind of got its start with the National Academy of Sciences pulling its hair out over the resistance of the public to nuclear power, and that had a left complexion. And sometimes people, if they haven’t really been paying attention, when they hear about Mary Douglas and Wildavsky and the cultural theory of risk, they assume, oh, that’s just a bunch of conservatives trying to prove the left is crazy, because most of their examples were about ideologically left resistance to environmental science. Maybe that just shifts and changes over time. But the issues are so varied. We do things in our work, too. As you know, I mentioned that we do stuff with videos. We show people videos of protesters engaged in protest activity, like Occupy. Do you see this person, the protester, engaging in violent conduct or not before the police crackdown? And there we see it too: what people see depends on what kind of protester they think he is. And so this might be the case you were talking about, where it’s hard to know, if you take two separate issues, whether the motivational stakes are symmetric for the two groups. Right. Over time, they might change. I like a study where you can actually just see whether they’re changing their mind or not, where they’re looking at the same evidence and saying, well, he is or isn’t persuasive, depending on whether it fits their group’s view. That’s what we did, too, in the study on scientific consensus. And you point out that the effect sizes were bigger for the hierarchical individualists, as you say.

Well, fair enough. And let me just actually tell our listeners that this show is running a little long, and I hope you’ll stay with us.

Longer than usual, because we actually really are getting into this stuff. And I hope they like that, or maybe they have tuned out.

Hopefully you can stay with us a little more. Happy to. Okay.

Well, so then the other thing that I really want to raise is, we’ve had a couple of shows.

We had one with a guy named Jonathan Weiler, at UNC, talking about this thing called authoritarianism. And the reasons I bring this up are many, but one is that, you know, our listeners are very interested in the criticism of religion and the Christian right, and the authoritarian personality seems to me like it’s going to show up, based upon all the research on it, among your group called the hierarchs. You told me earlier that they tend to be more religious, and that’s one of the things that we tend to see in the authoritarian personality. And what an authoritarian basically believes, you know, they tend to see the world in a concrete way, and in a more binary, black-and-white kind of way. That’s one of the key distinguishing features of being an authoritarian.

And, you know, respecting authority is also part of this, sort of a my-way-or-the-highway kind of outlook where, you know, people either have white hats on or black hats on.

And there’s been tons of research on this. And a lot of the research points, again, to this sense that this is a kind of very defensive point of view.

And, you know, you look at the world and you say, are you in my in-group or are you in my out-group? You know, are you part of the team or are you not?

And if you’re not, you know, then you’re very condemning of that other.

So that, again, would suggest differential bias in one of the groups. And I take it you’re not sold on this either. 

Well, I mean, here’s the thing: I think there are lots of different kinds of mechanisms out there that bias information processing, that have some impact on it, that aren’t correlated with just getting the right answer. And in varied data, they can, you know, all be contributing something. And the question is, how much are they contributing, and to what kinds of problems? The authoritarian personality, I mean, it’s one thing to talk about its validity, at least when Adorno did it; it’s been retooled since, I guess.

I mean, we’ve even used a version of the scale, the old version, you know. And I’ll tell you this: there is a correlation.

It’s not super high. But one thing we try to make sure of is that our measures are not only measuring something on which they’re reliable, but that they’re valid: they’re measuring what we say they measure. And one way to do that is to see if they have correlations with other kinds of things that you would expect, like Republican and Democrat, liberal and conservative, the Big Five, and so forth. And also that scale. Oh, that’s an older version, right. But, yeah, there’s a correlation between hierarchy and the F-scale. It’s not really huge, you know, it’s like point two. But it’s about what you would expect.

That’s the original. I guess that’s the original way authoritarianism was measured, and the F stood for fascism.

Yeah, that’s Adorno’s stuff. But here’s the thing. What you have to do then, I mean, you can find lots of mechanisms that will predict a kind of asymmetry. And I think, if you want to study this, what you should do is design a study where the results are informed by the theories that imply, respectively, symmetry or asymmetry, and have the results be ones that unambiguously declare a winner. What you don’t want is to just have results that could be explained by all of them, and then you kind of take the measuring stick out and try to see how many centimeters bigger the effect sizes are. Come up with something where the theory that predicts symmetry, identity-protective cognition, says one result, and then something like authoritarian personality, which predicts asymmetry, would predict a different result. And then they can have a fair fight. And my guess is that you might find things like that. But if you stage those battles on the terrain where we see the science communication problem, identity-protective cognition is going to be winning too much, doing too much, to be disregarded.

No, it will. And definitely, nobody’s disregarding it.

Well, think about whether authoritarian personality explains it and can therefore inform hypotheses about how to fix the science communication problem, to the extent that it’s identifying the source of the science communication problem in one side of the political spectrum. Right? I mean, I don’t think there would be any hypothesis from the authoritarian personality account about why we would see any effect on the egalitarian communitarian side in any of our experiments. Right? I mean, that’s kind of right.

No, you wouldn’t. You wouldn’t. That’s a good point. 

Oh, it could be the case that authoritarian personality is real and that it’s doing many things in many places. But if we look at the things that we’re concerned about, and test them with theories that make predictions it wouldn’t make, right, then we see those predictions being borne out.

Yeah. I mean, it’s probable that there are many sources of bias, many mechanisms for bias. There are 17 of them, actually; I’ve counted them recently. Right.

Well, but what this also gets to, to me, is that we have this incompatibility between two kinds of approaches.

I don’t know if you’ll agree with this, but I would group your approach to categorizing the groups, and Jonathan Haidt’s approach to categorizing the groups, and George Lakoff’s approach to categorizing the groups, as being essentially about morality.

In other words, in your approaches, people are biased by their view of how society should be ordered, you know, who gets ahead, who doesn’t, and what the rules are, versus approaches that are about personality or disposition, what kind of person you are. And there’s a bunch of theories in that area as well, and they all have data to back them up, too. And it’s not clear how the two relate to each other.

Yeah, well, I think that’s a good way to put it, or at least that there’s a class of theories that talk about these group-based dispositions. And if you think that there’s a kind of identity-protective, identity-assertive nature to the psychological mechanisms, then they’ll suggest you’re going to see more symmetry.

There are other kinds of things that are just a kind of cognitive style or personality trait, say closed-mindedness or rigidity, and with those, I think, if you believe they’re correlated with a certain ideology, then you might expect to see asymmetry. The relationship is, they could both, they could all, be right. And then the question is, you know, what phenomena are they explaining? It might be the case that the identity-protective theories explain some phenomena and the personality-rigidity approaches explain something else. Or it might be that they both contribute, but one a lot more than the other, to the science communication problem. Right.

So, because I do think that, you know, the horse-race notion, that there just has to be one view, would wind up making us more confused in the end. But you’re putting your finger on something that is an important distinction. And so figuring out how to sort that out is worth doing.

Another interesting thing is that we have a kind of proliferation of measures for higher-level cognitive processing. Dual-process theory is older than Kahneman, but Kahneman’s view is now the one people are most familiar with.

So this is Daniel Kahneman with the new bestseller. That’s right. 

Anyway, even his bestseller, Thinking, Fast and Slow, is itself a synthesis of work that he’s amassed over quite a period of time.

So even his dual-process stuff goes back. But on his conception, System 1 is rapid, unconscious, visceral, intuitive, and maybe OK in general, but still more prone to error, versus System 2, which is reflective, deliberate, conscious, and more likely to generate reliable results.

And how exactly to measure those dispositions? There are all kinds of different measures. And then there’s how that intersects with what we’re talking about with motivated reasoning, too. I guess it’s productive confusion, but it’s creating confusion.

If we were to dig in there, then we would probably have another 20 minutes and no more certainty. 

Well, but here’s one thing. 

I think it’s easy to get confused, because it might be the case that there are correlations between System 1 and System 2 thinking and, I don’t know, you know, ideology or something.

But that wouldn’t necessarily answer our question, unless we understood that somehow System 2 reasoning was free of this kind of motivated cognition bias. Right? I don’t think that it is. But some people try to get at the symmetry issue by looking at System 1, System 2, and related kinds of constructs, to see whether people are disposed to use the higher-level cognitive processing. I’m not convinced that’s a good strategy.

No. And I know there’s research on that, too. And actually, well, I’m tempted, but I don’t want the people still with us on this show snoozing off.

So I think this has been really fun. We’ve been meaning to do this.

And I think that we have stretched pretty far into the kinds of questions you have to start thinking about if you’re going to tackle this. So now why don’t you make a final statement and just summarize, you know, who’s right and who’s wrong for all time? 

Well, of course, that’s a trick, because anybody who thinks that anybody’s right once and for all, and for all time, is definitely displaying, if not a bias, a kind of outlook toward knowledge that is not liberal.

In the most important sense, the philosophical sense, the Popperian sense: science is a scale that never stops weighing.

And so I think the scale is tipping toward the side of symmetry right now, but it’s still open for me. And when your book lands on the other side of the scale, that might very well tip things. We’ll take a look.

Well, and I’ll just say that I appreciate all the things that we don’t know here. But when I just try to make sense of reality, the reality of our politics, which seems to show such blaring asymmetry, then I’m not ready to let go of that idea, although I acknowledge the uncertainty.

Yeah, and it’s good to know that we shouldn’t let go. We should keep pushing at it.

All right. Well, Dan, thank you so much for a really enlightening and sophisticated episode of Point of Inquiry.

Oh, this was really, this was a lot of fun. Glad to do it.

I want to thank you for listening to this episode of Point of Inquiry. To get involved in a discussion about this show, please visit our online forums by going to centerforinquiry.net/forums and then clicking on Point of Inquiry.

The views expressed on Point of Inquiry aren’t necessarily the views of the Center for Inquiry, nor of its affiliated organizations.

Questions and comments on this show can be sent to feedback@pointofinquiry.org.

Point of Inquiry is produced by Adam Isaak in Amherst, New York. Our music is composed by Emmy Award winner Michael Whalen. This show also featured contributions from Debbie Goddard. I’m your host, Chris Mooney.


Chris Mooney