Alex Garland: Ex Machina and the Question of Consciousness

May 18, 2015

Ex Machina, a new film that tells the story of a billionaire programmer who creates an artificially intelligent female robot, is in theaters now, and its writer and director, Alex Garland, is our special guest on Point of Inquiry this week. Although this is Garland's debut as a director, he has also written hit novels such as The Beach, as well as written and produced screenplays such as 28 Days Later.

As the power of computers and the software that runs them rapidly advances year by year, the representations of artificial intelligence in sci-fi films like Ex Machina are inching closer and closer to reality. Josh Zepps talks to Garland about the science and philosophy behind consciousness, the future of self-aware machines, and the ethical considerations we've barely begun to ponder.



This is Point of Inquiry for Monday, May 18th, 2015.

I'm Josh Zepps, host of HuffPost Live, and this is the podcast of the Center for Inquiry. Today's show has a sponsor: we're brought to you by Casper, a revolutionary, disruptive mattress company. You can get 50 bucks off any mattress by visiting Casper.com/point and using promo code point. These are great mattresses. Casper.com/point, promo code point.

Artificial intelligence. The singularity. Self-aware robots. What was once sci-fi fantasy is edging ever closer to reality, and the likes of Bill Gates, Elon Musk, and Stephen Hawking are worried that artificial intelligence may soon surpass and threaten our own. The dawn of conscious machines is the scenario explored in the smart new sci-fi film Ex Machina. Its writer-director is Alex Garland, author of The Beach and the screenwriter of 28 Days Later and Never Let Me Go, among other films. To discuss artificial intelligence, science, and art, Alex is here. Thanks for being with us. Pleasure. So our listeners will know about the Turing test. You know, what's interesting about the film, I thought, is that it's not exactly a Turing test.

No, not at all. It’s quite different. Can you explain how? 

Well, because the human component in the test can see that he's interacting with the machine.

Right. So in the Turing test, you wouldn't... Yeah, exactly. You wouldn't know whether you were communicating with an artificial intelligence or a biological intelligence. Here, the robot is clearly a robot.

There's no blind test and there are no controls. So it's a sort of post-Turing test. Just to be clear, I come at this from the position of a layman who gets interested in this stuff. But the more I learned and thought about it, and read about it, and read what other people thought about it, the clearer it became that the Turing test is primarily a test for passing the Turing test. It doesn't necessarily require self-awareness or sentience; it posits very, very complex language skills, but it could be passed by something that doesn't know it's a computer, for example.

Isn't part of the point, though, that Turing was trying to make that the question becomes moot? That it's almost a meaningless question to ask? Doesn't it dissolve once something is, for all intents and purposes, behaving like a human?

It may or may not be moot. We don't know. I mean, the issue is you can speculate all sorts of things.

And that’s along the lines of if it looks like a duck and quacks like a duck, it’s a duck, you know. 

But I think one has to say of this stuff that it may or may not be true, because the problems involved in understanding that stuff about, say, a self-aware machine also exist with us. They're the same problems of theory of mind and human consciousness: what it is we have, whether we correctly perceive what our consciousness is, or whether it has a much more illusory state and is not what we feel, in quite certain terms, it is inside us. So until we understand what consciousness is in ourselves, it's quite hard to predict what it would be or what form it might take in a machine. It's a perfectly reasonable position to take, but it is just a position. It's not a factual statement.

You made the claim that passing the Turing test wouldn't imply self-awareness. Yes, absolutely.

Or that that in itself doesn't matter, that the thing is moot? Is it? All I'm saying is it's an unknown. Yeah. It may or may not be. The basic thing is we don't really know what consciousness is. A thing happened in neuroscience, in the study of consciousness, which is that people just stopped; it sort of fell off the radar for quite a long time. I was speaking to a professor of neuroscience at Sussex University back in the UK, and he said that at the point he started saying he wanted to study what consciousness was (I'll get this wrong, but let's broadly say it's true: fifteen, twenty years ago, something like that), people around him were saying, what are you talking about? Why would you study that? It doesn't make sense. Maybe because, and it would be better if he were explaining this, but I'm going to speculate, you should just be thinking about science, you know, spikes in neurons and stuff like that, and simply looking at that. Why get bogged down in what feels more like an area of philosophy than an area of science? And all I'm really saying is that it's a big, mushy thing. It's really hard to take very clear positions on it. Very, very smart people have positions that contradict each other.

So what is the creative decision then, on your part, when you start thinking about writing something that explores all this? To go into the question of, all right, let's make sure the robot is known as a robot and nonetheless can convince us that it has feelings?

To be reasonable in the presentation of the ideas, that's the position. So, very broadly: here's a bunch of things that I find interesting. That's where it starts for me. I start reading, I start talking about it with people, I think about what I find interesting. Then I find a narrative that attaches to the thing that I find interesting, and I think, okay, maybe there's a film in this. And then, because of the kind of film it is, which is basically an ideas movie, there becomes a requirement to be reasonable in the presentation of the ideas. Otherwise the film itself is kind of valueless. An ideas movie that doesn't represent the ideas well is a waste of time. So then it's a question of sending this stuff to the people that write the books. These are not my ideas; I'm trying to disseminate other people's ideas. So sending it to them, testing it, saying, when you wrote this, is this what you meant? And doing that as best I could.

One of the conceits is that the artificial intelligence gets created basically by the head of a kind of Google-like company, using algorithms that he's derived from search engines, essentially.

You know, the premise being that what we're all tapping away into Google every day is as good a snapshot of a hive mind as any, I guess.

Where does that conceit come from?

It came from an idea that's actually stated in the film in a line, which is that search engine inputs appear to provide clear information about what people are thinking about, in a sort of self-evident way. You know: I am thinking about buying a TV,

so I'm searching about this TV thing. But if you looked at a sequence of inputs, what you might see is the way thought patterns work. You get a sequence of things that follow on very directly from each other, and then suddenly a kind of non-sequitur-like, left-field jump. And if you could figure out why that non-sequitur-like jump happened (you were doing a string of searches about TVs, and then suddenly you do one about French cars or something like that; why did that happen?) and map it, then the more information you got, the less non-sequitur-like those jumps would seem, and you would start to get something that isn't just about what people are thinking, but how they're thinking. But that is not... can I swear? That's probably bullshit.

That's just something I thought of, maybe. Yeah, that's amazing. And it sounds intellectually plausible as well.

It was partly to do with, actually, a sense of what it is we're giving up of ourselves to tech companies. Yeah. I mean, it's not a sort of prescription for how you create an AI; it's more to do with our relationship with tech.

Yeah, I mean, it's got hints of a sort of dystopian future, right?

I mean, you know, how does it relate to Snowden, and how does it relate to the NSA, and how does it relate to all of that?

It relates directly to that. The Snowden story broke while we were filming, while we were making the thing.

And so in some respects, that kind of paranoia might retrospectively look prescient. But it wasn't prescient at all, because everybody either knew this or suspected it and talked about it. Just in an illustrative way: I remember meeting an actress for one of the parts in this film, and we were talking about that scene, actually. And she said, oh, yeah, my manager told me I need to cover over the cameras on all of my laptops and phones and stuff like that. Now, that was a long time before the story broke, and it was also a long time before there was a release of... I mean, that was a different thing; it wasn't plugging into people's cameras, it was stealing images that people stored on the cloud, that type of stuff. But nonetheless, all it really says is that people were scared about that, maybe not on the basis of much evidence, just on a sort of intuitive level: this doesn't quite feel right. Something like that.

What do you think the upshot is going to be of all that?

I mean, on the one hand, you've got a scenario in which, if it becomes the case that small-scale, lone-wolf-inspired jihadist attacks sort of happen in the background, in the same way as Charlie Hebdo, and in the same sense that there was the Copenhagen killing and the Sydney siege, all that, then maybe people will have more of a willingness to accept a lot of what Snowden was opposing and trying to reveal. On the other hand, a future in which you've got big companies like Google in cahoots with a security state, and we're on them all the time, 24 hours a day,

punching in our deepest thoughts, is a bit dystopian, or just disturbing.

I mean, we don't know. Yeah, I mean, the only real answer to that is that it's a balance.

And the trick is about getting the balance right. On a personal level, I feel really quite sure that not enough thought is currently being put into the balance, and we're not getting it right. That would be my position. But I think in this kind of conversation, the key is not to be binary about it. You don't take a totally libertarian view or a totally regulated view. And by the way, almost anybody reasonable would agree with that. So it's a sliding scale. Yes. And it's where you choose to put the pin on the spectrum. I suspect, no, I feel really quite sure, the pin is in the wrong place at the moment. A lot of it, for me, is just to do with checks and balances. I could accept a certain kind of surveillance if there were certain kinds of checks and balances on the people doing the surveillance. But there aren't. And the people doing the surveillance come from two particular sectors. One of them is government; the other one is private. And sometimes, as you said, the government and the private are definitely working together, and sometimes they're working independently, and nobody's looking hard at them. That can't be right, because there are literally no examples, as far as I can tell, in human history where a lot of power without oversight is a good thing. It just doesn't work like that.

So, when you were approaching the character of the self-aware robot, Ava, in the film, were you coming at it with a preconceived conclusion as to whether or not she actually is self-aware?

Yes, I was coming to it from the position that she is, and that it's a meaningful form of awareness, but it's not the same as our awareness. I imagined it as something like, and this is just to be illustrative, something like a human and a dog. That is to say, you can tell a dog is sentient and has consciousness, but you don't actually know what it's like to be a dog, even though you can recognize that sentience. I'm not equating Ava's intelligence with a dog's, because she's got brilliant language skills and in intelligence terms is probably smarter than us; in fact, massively so, in the way she's presented. But the key point would be to say that when you have an AI, it would be a mistake to think that we would be the same as them. An AI could be very sophisticated, could have an internal emotional life and be self-aware, but not know what it was like to be us. And we wouldn't know what it was like to be the AI. And that would be a sort of complex tension, should this ever happen, that we're going to have to figure out how to deal with.

She's not just intellectually smart. We shouldn't give away the ending, but it turns out that she's quite emotionally savvy, too. Yeah, right.

I mean, I don't know where you want to take that; that's not in question. But it's sort of interesting to imagine the possibility of a machine having an emotional capacity to manipulate.

Yeah. Although in itself, I think we can over-mystify something which is obviously very mysterious.

I mean, the mysterious thing is that we don't know what consciousness is. But at the same time, creating new consciousnesses is a routine thing; it happens throughout human history and across the world the whole time, in the form of children. So it's sort of, I guess, another balancing act: not to get too freaked out about it. If you did, and it's a massive if, but if you did have a machine that had human-like qualities in terms of sentience, then part of the question is what rights you attribute to it. How do you treat it? And it seems to me that what we value in each other is basically sentience. We worry much more about killing a human than we do about cutting down a tree, and that's because of the mind that exists in one and doesn't exist in the other. And so I've been interested to see, in some of the anxiety that is stated about AIs... There was a letter released after a conference in Puerto Rico earlier this year, where a group of very, very top people got together and became signatories to a letter. And in the letter there was a statement, something along the lines of: we need to make sure that AIs do what we want them to do. And that is the kind of statement that you can see making sense at this moment in time, when people are worried about runaway superintelligence, people like Stephen Hawking or Elon Musk making very reasonable statements saying we need to think about this and we need to be careful about it. It's also the kind of statement that could really come back and haunt you if you had created a sentient machine with an emotional existence.

Surely that machine should basically be on a similar level of rights to the one you and I are on. And to have begun the thought process by saying we need to make sure the sentient machine does what we want it to do seems problematic to me.

I mean, isn't there a missing component there, in addition to sentience, which is the capacity to feel pain? Part of the difference between cutting down a tree and killing a human is sentience. But another part is that we have a complex neurological system that makes us capable of being hurt.

I don't agree, largely because I think you could, for example, remove a human's capacity to feel pain with certain kinds of drugs, and it would still be wrong to kill them.

Because they wouldn't want to be killed. Which is to say that sentience is still an important component.

I think sentience is the only significant component. You could then get into a complicated discussion about what sentience actually is, and this is where, to go back to where we started talking, things get very mushy. It's very, very hard to pin down this stuff.

But it's a little bit like the way judges talk about pornography: it's hard for me to define it, but I know what it is when I see it. There's something about that which I think stands out.

I mean, think about the way that our concentrated animal feeding operations treat, you know, animals that you could argue aren't really quite sentient. I mean, dumb ones, chickens and stuff.

And again, it depends on your definition of sentience. But what's problematic about that is that they can clearly feel some kind of discomfort and pain.

That is problematic. All I'm saying is that you can't choose pain as your ultimate guide, because you can remove pain and still find an ethical reason not to do it, right? And also, I mean, chickens I genuinely don't know about, because I simply don't understand enough about their neurology. But the fact, for me, that a dog can see its reflection in a mirror and know it's not looking at another dog, know it's looking at itself,

means that I think that dog has sentience and self-awareness and consciousness and stuff. That said, it also comes in graded levels, because I don't put a dog on the same ethical standing as I put you or me. If someone came into this room and said, I'm going to kill you or the dog, I'd say, kill the dog. Right? I wouldn't think about that very hard.

We are delighted this week to have a sponsor, which enables us to bring you this content for free. Casper is the mattress company that is completely revolutionizing the mattress industry. Mattresses have notoriously high markups. You know, mattress stores have to pay for all of this retail space so that you can go in, lie on a mattress for 20 seconds, 30 seconds, a minute, max, and then try to figure out whether or not it's going to be the kind of mattress that you actually want to be lying on for a third of your life. It's a pretty broken system, and Casper is changing all that. I've slept on these beds, and they're amazing. They've got just the right sink and just the right bounce. They're made of latex foam and memory foam. And there's a risk-free trial where you don't just spend 20 seconds or 30 seconds or a minute lying on it: they will send you a Casper mattress for 100 days, with free delivery and free, painless returns. You can try it for up to 100 days and then just send it back if you don't like it. Casper mattresses are made in America. They are obsessively engineered, and they're cheap: it's 500 bucks for a twin, 950 for a king. Compare that to the prevailing rates for other mattresses; it's quite incredible. Now, you can help us out, as well as helping out your back, by trying out a Casper. Go to Casper.com/point and use the promo code point.

That's Casper.com/point, promo code point. You'll be helping us, you'll be helping you, you'll be helping Casper.

So back to your point about our tendency to prejudge the utility of what might end up being a sentient intelligence, an artificial intelligence. Does that mean that as roboticists and computer engineers start to edge towards developing systems that we could think of as being intelligent, they need to be thinking in advance about the rights of those systems rather than just the utility of those systems?

I think yes, because the degree to which we think about the rights of other things is always tied into how we behave more generally in society. So I think that's a perfectly reasonable statement to make, even though to some people it probably sounds strange and a bit unreasonable. I would agree with that. I think there's a secondary thing as well, which is that a lot of this debate, to me, often seems to be about should we, shouldn't we: should we be trying to do this or not? And I think the debate, which ties into exactly what you just asked, is more along the lines of: if it's possible, it's going to happen. It'll be like human cloning. Human cloning is probably possible, so it's probably going to happen, and the question is not should we or shouldn't we do it, it's what are we going to do when it does happen. And it's like the invasion of Iraq, you know: try to figure out what the consequences of this are rather than whether it's a good idea or not. In the end, the big mistake was not having thought about the consequences. Ultimately it looked like should we, shouldn't we, but in fact it was the chaos of what went wrong once it was on an inevitable track. And there are real ethical dangers and problems that can flow from this stuff, and I just think it seems reasonable to engage with them. And, as it were, and I know how daft that sounds, computer rights seem like just part of the discussion that one ought to have.

So at the beginning of this conversation, you made the point that philosophers of mind can legitimately disagree about whether or not self-awareness and consciousness and sentience are possible in an artificial, that is, non-biological, mind. Right? Yeah, sure. Yeah.

So how does that then feed into how we should think about it moving forward? If it's true that you sort of take it as a given that Ava is self-aware, do you also take it as a given that a consciousness that feels as much like itself as I do like me is capable of existing in silicon?

In a computer. 

Are you capable of existing in a computer? You mean, like, something that's an analogous model of yourself?

I mean, not even that.

I just mean the sense that I have, which is more than just the sum of my parts, more than just the data that’s whizzing around in my head. The sense that I have of my experience of being alive. Is that capturable in computer form? 

Well, could a computer feel that way? I think you'd have to assume, again, that it might be possible.

Yes, absolutely. The thing is that what we have existing in the world at the moment as a model to think about these problems is us. And if you're a materialist, if you're a sort of physicalist, if you believe that the brain is a physical object and the mind is a product of physical things, then that leads you, I think, to the answer to that question: we have minds within physical things. We attach a whole bunch of thoughts, rules, ethics, processes, and behavior patterns to that. And then what you do is say, well, that is now our best model to speculate about what this other thing might be and how we ought to treat it or think about it. And again, that's not to say that it's right; it's just that it's reasonable. With this film, and everything about it, it's funny, actually, because it's called science fiction, and it is science fiction, but there's almost no science in it, because some of what we're talking about is, in effect, conflating science and philosophy, or speculative science. We can't do this yet. Forget about the mind; we don't even have the level of robotics to create an Ava, let alone her mind, you know. So from the get-go it's not something you can be accurate about. It's just something you try to be reasonable about.

Right. Right. In clarifying my last question, you sort of alluded to something which I think is the singularity idea. Would it be possible, if you could replicate all of the data that exists in my head in computer form, would that then be me? There are the Ray Kurzweils of the world and the Jason Silvas of the world who think that within the next few decades we're going to have some kind of capacity to essentially upload ourselves into software, and that the very distinction between biological intelligence and artificial intelligence is going to become moot, so that, you know, the future is not going to look like it does in Ex Machina; it's going to look much more like a kind of blending of the two.

Right. Yeah. I think what I can say about that is that I know that for some of the people funding some of the well-funded attempts to sort of unlock strong AI,

their motivation is to do with being able to download themselves. Again, that doesn't mean it's possible. But for some of the people who are most invested in this, in a kind of literal way, that is what they're after.

Mm hmm. So they think it's possible. What do you make of the concerns that I mentioned at the top, from people like Stephen Hawking and so on, about the possibility of artificial intelligence sort of going awry?

I was talking to Bill Nye yesterday on HuffPost Live, and he was saying, this is nonsense. I mean, the artificial intelligences are going to have to be plugged in. What are they going to do, build coal mines to be able to power themselves if we unplug them? Now, I sense you don't agree with that. I definitely don't agree with that.

Because we're sentient and we find means of powering ourselves. Why wouldn't the machine also do that? It might be easier for them to power themselves.

I mean, if the question is, do I attach myself to their anxiety, then it's something like that,

except the balance for me has shifted more towards the benign than the fearful. I want there to be strong AIs. I'd like there to be sentient machines. I find it fascinating and interesting, and I think it would be part of our broad human development if that were to happen. That said, what the film does is draw an analogy between the development of strong AI and the development of nuclear power. There are lots of parallels throughout, in little bits of music or overt conversations about Oppenheimer quoting the Bhagavad Gita, or whatever it happens to be, in terms of making that connection. And I think the issues of one are very similar to the issues of the other, which is that there's immense latent danger. It's unarguably there. But I personally would not choose for nuclear power not to have been developed, despite Nagasaki and Hiroshima. I think it's important that we did it, and I think it will lead to other things, and it leads to a greater understanding of the universe. And I feel basically the same about strong AI.

Why? So if you could undo having nuclear weapons, but not nuclear power, would you? 

No, I wouldn't, because then you're talking about utopia. And I think the key thing humans need to do is actually get to grips with the reality of the world they live in. Creating unrealities is not the way forward.

This is a common theme in your other work as well. I mean, your novel The Beach was basically about that, right? About the attempt to create a sort of paradise on Earth that goes horribly awry.

I guess that's probably over-dignifying it.

That was about something that happens, which is young Westerners who have outgrown Disneyland turning Southeast Asia into a kind of adolescent, early-twenties version of Disneyland, except it's got weed and magic mushrooms instead of Mickey Mouse.

Right. But it then becomes a kind of Lord of the Flies version of that, in a way. Right. Yeah, it does. It does. So I guess my sort of question is, do you think that's an endemic part of the human experience, that we're always going to try to overreach for perfection, whether it's ISIS striving for their long-lost empire and caliphates or whatever else? There are always people who do, people who try to strive for perfection.

To me, it's not where I'm at. I'm looking to exist in the gray area. I'm a sort of centrist, effectively. And, you know, perfection always leads to something totalitarian, doesn't it? Basically, that's where it ends up. So who wants that? No one. No one with any brains.

But one of the other similarities between nuclear weapons and artificial intelligence that struck me as you were speaking is that they're both godlike acts.

I mean, one is destructive and one is creative, but they're both basically reaching beyond the ordinary realm of human power.

Let me query that, because I think that's to do with mankind needing to get to grips with the reality of their situation. And I'm obviously then partly going to say that, as an atheist, I don't think they are godlike; I think they're manlike. The ability to destroy another human being is built very, very deeply into our history, and in the last hundred years in an industrial way, literally an industrial way. So if you make that into a godlike act, you sort of let us off the hook. It seems to me the act of destroying another person is typically, as far as I can tell, done by other humans.

Mainly because I don't think God has the capacity to do it, because he's not there. And as a sort of secondary thing, where the act of creation is concerned, equally, we were not created by God.

We were created, effectively, by evolution, if creation is the right word in that context. But more to the point, as we said earlier, we routinely as humans create other humans. Those are the terms by which literally everybody on the planet is here and always has been. I don't see where God comes into that. So destruction and creation: make it man. By the way, just for what it's worth, and without sounding too pretentious about it, this film is called Ex Machina. It's from the phrase deus ex machina, god from the machine. The deus has dropped out of that phrase, because he doesn't need to have a role in this particular problem.

Interesting. Of course, as an atheist speaking to a listener base that is almost exclusively atheist, I use God as a metaphor.

I mean, I think the point that I was sort of angling at is that prior to the 20th century, the only powers capable of razing an entire city or creating a conscious life were the powers of nature and biology: typhoons and so on. And then all of a sudden, within a century... But anyway, we don't need to dwell on the language of God. To wrap it up, I'd be interested in your thoughts about whether you're sort of broadly optimistic or pessimistic about the future of the planet, given that we live at a time when, although atheism and secularism are on the rise in Western societies, there is a huge resurgence of religious mania elsewhere.

Yeah, sure. I mean, I know it's an odd sort of situation, and what I feel is broadly optimistic, in the grand sweep.

I feel optimistic, but I also feel incredibly dismayed and shocked by our capacity to be so unreasonable in such a relentless way. And I'm not just talking about parts of the world that are far-flung from our perspective in this studio now; I also mean in the same city that we're in now, on the eastern side of America or in other parts of America, or in my own country, Britain, or wherever it happens to be. There is an incredible capacity to be unreasonable in the form of violence and oppression, and also a kind of terrible lack of empathy, I think, for people who are outside of our line of sight. I sometimes think that my country, the UK, congratulates itself on having moved past the Victorian era of having slums and things like that, and workhouses of a certain sort. But really what we've done is just push them into other countries and then believe they're not there. And our willful blindness about that stuff I find incredibly disturbing. But if you step back, I suspect that for the thing I care about most, which, aside from friends and family, is sort of broadly the protection of human rights and being reasonable, the graph is moving in the right direction. I'm forty-five; in the 1970s, I think my country was a less reasonable place in terms of the way we respected each other, in all sorts of different ways. So yes, positive, optimistic, but with a great sense of dismay.

Alex Garland, thanks so much for being on Point of Inquiry. Pleasure.


Josh Zepps

An Australian media personality, political satirist, actor, and TV show host. He lives in Brooklyn, New York. He was a founding host for HuffPost Live.