Comprehending the Incomprehensible: Samuel Arbesman on Rapidly Accelerating Technology

November 07, 2016

We live in a digital era in which science and technology have opened frontiers never before thought possible. In developing the complicated technologies that permeate our lives, is it possible that humans have failed to grasp the magnitude of the complexity they have created? This week’s guest is complexity scientist Samuel Arbesman, author of the new book Overcomplicated: Technology at the Limits of Comprehension.

Arbesman explains that the rate of technological expansion is growing too quickly for our intellects to keep up, and the dangers of not understanding the inner workings of our creations are already revealing themselves, whether it’s the New York Stock Exchange suspending trading without warning or Toyota cars accelerating uncontrollably to the surprise of their drivers. The complexity of the code behind many of society’s fundamental components is so far past the limits of human comprehension that oftentimes no one can even find the cause when these systems go awry. Arbesman lays out why it’s so difficult for even experts to keep up with technological progress and how we can make efforts to prevent our creations from destroying themselves…or us.



This is point of inquiry. 

I’m Josh Zepps, and this is a podcast of the Center for Inquiry, which aims to foster a secular society based on science and reason, secular humanism, and skepticism. You can support the Center by going to centerforinquiry.org/membership, or just listen to this show, of course. You can follow us on Twitter @pointofinquiry, and follow me @joshzepps. You can find my other podcast, We the People Live, which is a witty political roundtable, also based on the principles of reason and of making debate healthy again, @WTP_Live on Twitter. Now, do you ever feel that technology is getting so complicated, so ubiquitous, so interconnected, that it’s not just incomprehensible but may actually be impossible to comprehend? That’s the thesis of a new book called Overcomplicated: Technology at the Limits of Comprehension. Its author, Sam Arbesman, is a complexity scientist at Lux Capital, a venture capital firm that’s focused on big, daring ideas in science and technology. Sam joins me now. Thanks so much for being on Point of Inquiry.

It’s my pleasure. Great to be on it. 

One of the things you say in the book’s introduction is that technological complexity today has reached a tipping point. I think a lot of people would agree that we’ve reached some kind of tipping point. 

Some people think that we’re about to merge our brains with machine brains, or that we’re about to reach some kind of moment where artificial intelligence rules us all. Your concern is much more prosaic. What kind of tipping point are you talking about?

Yes. I mean, my tipping point is a little bit smaller scale than those kinds of things, and maybe more near term. It’s essentially that we are building systems that are so complex that no one can fully understand them anymore. And this is not just the lay people who are interacting with these technologies, like everyday people working with their phones, who don’t necessarily understand how they work. Increasingly, even the people who are experts and work with these systems on a daily basis, or the ones who actually built these things, do not fully understand these systems. And so the point is that we’ve essentially reached this point of incomprehensibility for our own technologies, the things that we ourselves have created.

Why does that matter? As long as there are still teams of coordinated people who are capable of understanding things in the aggregate? 

So I think in many cases it doesn’t matter that much. I think for the most part, we’re still able to use a team to understand a system, at least partly, and kind of muddle through and figure out how things are working.

But I think increasingly, we often come from the perspective that we can understand these systems, and then we realize that we actually are unable to, that these systems are truly incomprehensible, often until it’s too late, until we’re confronted with a glitch or a bug or some sort of failure. And in this case, it’s due to a failure of understanding: there’s a gap between how we thought the system worked and how it actually works. And I think we need to recognize this more explicitly so we can build in a certain awareness, and almost a sort of humility, about how to understand these systems from the outset, so we’re not blindsided when we don’t fully understand them. Increasingly, even though we think we can rely on teams of people who all have specialties and who understand different aspects of a system, as we build more and more complex technologies that rely upon many, many different scientific fields and many different domains, and that are interconnected and have lots of different parts, it’s actually not enough to just say, okay, we have different people who are all specialized and understand different parts, and that will be enough to understand the whole thing. Because these systems are so interconnected and so complex, oftentimes even if you have people who understand the different parts, they don’t know or understand how these things all interact together, and we end up getting emergent behavior, behavior that’s highly counterintuitive and glitchy. And we don’t realize that until we’re confronted by all the bugs and failures.

You give the example of one day in July 2015 when United Airlines’ systems crashed, The Wall Street Journal’s website went offline, and the New York Stock Exchange suspended trading. People thought this was some kind of coordinated attack, but it just turned out to be a bunch of overly complicated systems all screwing up at the same time. Can you elaborate on that?

Yeah, yeah. When this happened, I remember I was on Twitter at the time, watching how people were responding, and I think a number of people thought, is this some kind of coordinated attack, some hack? No one really knew who was doing this. And it became clear that, as far as we can tell, it probably really was just these three independent systems all going down on the same day, around the same time, because of glitches and bugs. There might have been some sort of interconnection: people thought maybe The Wall Street Journal’s website went down because the New York Stock Exchange went down, and everyone went to the Journal’s website, which then crashed. We’re not really sure; that was one idea. But in this case, it was just a whole bunch of systems that did not behave the way we thought they were designed to, and they all went down. And I think oftentimes when systems become so incredibly complicated, the gap between how we think something should work and how it actually does work is only revealed when something goes wrong. And so increasingly, we’re getting to this point where the only way to gain insight into these really complex technologies is by actually looking at the failures, by seeing when something goes wrong. And in this case, it was a whole bunch of systems that all went down, and we said, oh, wow, now we actually realize there’s something we don’t fully understand about these systems yet.

And even after the fact, because it’s not merely a problem of us not being able to predict what’s going to go wrong in advance. It’s also a sort of forensic problem after the fact. In simpler systems, there’s a deductive sequence of reasoning that you’re able to conduct which will lead you back to the truth. You give the example of Richard Feynman’s investigation into the Challenger disaster, which had the beautiful simplicity of him staging this demonstration with the O-ring, the part of the shuttle that failed, dipping it into cold water to show that it would become more vulnerable when cold. And there you have it; it’s almost like a magic trick. Hey, after all of our detective work, we’ve arrived at this one thing. That almost seems quaint in an era where, as you mentioned, Toyota cars just screw up and even Toyota doesn’t seem to know why.

Yeah. With the Toyota example, there was this situation where certain cars had this thing which was kind of euphemistically described as “unintended acceleration,” with cars speeding up and sometimes crashing. In some cases, people actually died. And initially, people thought maybe it was due to sticky gas pedals, but that didn’t seem to explain everything. Or maybe it was due to a floor mat hitting the gas pedal, the pedal getting caught on it. But some of the cars that were actually having this problem did not have detachable floor mats, so it seemed like that also couldn’t be the problem. And so people began examining what’s going on underneath the hood of these cars; they actually began studying the actual software of these cars, because you can think of cars, these big mechanical contraptions, as essentially computers on wheels to a certain degree. They might have tens of millions of lines of computer code within them. And in this case, it looked as if the computer code was probably more complex than it needed to be. But it was so complicated that really, at this point, as far as I can understand, all you can say is that, more likely than not, the massive complexity and poor design of the computer software led to these kinds of failures, even though you really can’t point and say, this is the specific line that caused this failure to happen, this is the specific chunk of code, because these systems are so incredibly complicated. You can just say that these failures are almost the inevitable result of the complexity, which is kind of harrowing to a certain degree, but also humbling, recognizing that we’re building these things that, when they’re so complex, you can say the complexity leads to these failures, but it’s often very difficult to say which specific part, because these things are so incredibly interconnected.

Are there certain kinds of modes of behavior or heuristics or rules of thumb that you would encourage people who are developing technologies to follow?

Because I feel like one of the most important things to stop these systems from failing, to stop the whole thing from coming crashing down next time there’s some bad black-swan interaction between a bunch of different events that we didn’t foresee, is to be cognizant of this catch-22 that we face: we seek greater complexity because complex systems improve our lives, but they carry embedded within them the risk of their own failure. And to consciously try to make things as simple as possible whilst achieving the most complex outcome. Can you think of rules of thumb that you would tell engineers to abide by?

So there are certainly best practices and rules of thumb for people to reduce these kinds of things. For example, if you make a system more modular, where you have chunks of things, like maybe a chunk of computer code or a specific portion of a large machine or system, that are somewhat independent, having a system that’s more modular means it’s going to be a little bit more easily understandable. So I think making things more modular is certainly a good technique. And there are other techniques, in terms of reducing the number of bugs in code, that you can try to adhere to. For example, there are a number of techniques and best practices that, if Toyota had adhered to them, would have, I think, significantly reduced the chance of these kinds of failures. Within software, if you write spaghetti code, where you just allow code to refer back to itself in multiple different ways, in this kind of crazy mess, that is going to increase the complexity and, in this case, the incomprehensibility of the software. So if you can try to remove the conditions for spaghetti code, that will actually make things a little bit better. Sometimes that’s more easily said than done. The problem with this, though, is that even though there are these best practices and techniques to reduce complexity, there are these fairly strong forces that push us ever closer towards incomprehensibility. And each of these forces is fairly reasonable on its face. For example, when you’re building a system, you might want to add functionality; you might want to make the system more sophisticated over time. What that means, though, is you end up building new pieces upon the old pieces. And so you can have extreme situations where you have legacy code or legacy systems, where something might be decades old but still foundational for our newer technology, because you’ve slowly added to it over time, maybe because the initial system was so important that you didn’t want to take it down or mess with it. But then you get to a certain point where suddenly the foundational piece was built by someone who’s long retired or might even be dead, and you’re still using it; it’s still required for the system to operate. So this force of accreting the new upon the old ends up yielding this kind of mess, even though each individual design decision might make sense. The same thing with interconnection. It might make sense, and actually it seems like it makes a lot of sense, to try to interconnect systems together. You might want to pass information from one computer program to another, or between two different technologies. That seems like a great idea; it makes things more powerful and more usable. The downside, though, is that when you have increased interconnectivity, you can also have increased nonlinearity, where there’s weird feedback, where one system might pass information to another system and it might respond in ways that you don’t know or understand. Another force is that of edge cases and exceptions. It’s one thing to write a simple computer program or make a simple technology that does some fairly simple task. It’s another thing to deal with all the messiness of the real world.
So, for example, let’s say I want to make a calendar application. It’s one thing to say, okay, I want to have a calendar that has 365 days; done. But then you realize, oh, I need to add in the leap year, and then I need to add in time zones, and I might need to add in a whole bunch of other complicating features. And suddenly you realize, wait a second, this thing that I initially thought was simple actually needs to have a whole bunch of complex parts in it, because the world itself is complicated. And so even though that makes sense, it ends up making the system far less understandable. So oftentimes there’s this weird tension between trying to make a system as simple and understandable as possible, while at the same time still wanting to make it powerful and sophisticated and usable. You always have to balance these kinds of things. And so even though you might want to make it really simple, you oftentimes can’t, because you also want to make the system actually a really good one. And trying to balance these things is tough to do.
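To make the calendar example concrete, here is a minimal Python sketch, mine rather than anything from the book, of how a “simple” program accretes edge cases:

```python
# A minimal sketch (illustrative, not from the book) of how a "simple"
# calendar program accretes edge cases. Version 1 looks finished;
# the real world keeps intruding.

def days_in_year_v1(year):
    return 365  # first draft: "done"

def days_in_year_v2(year):
    # Edge case: leap years, which carry their own century exceptions.
    if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return 366
    return 365

print(days_in_year_v2(1900))  # 365 -- a century year is not a leap year
print(days_in_year_v2(2000))  # 366 -- unless it is divisible by 400

# Next come time zones, daylight saving transitions, historical calendar
# reforms... each fix is reasonable on its own, and together they make
# the once-simple program hard to fully comprehend.
```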

Yeah. And as you say, no one is starting with a blank sheet. It’s not like people get parachuted down into a tabula rasa universe in which they can start building things from scratch. They’re always working on top of existing things that were pasted on top of other existing things, that were pasted on top of other existing things. The second chapter of your book is about the kluge. I’ve heard that word pronounced a few different ways. How do you pronounce it?

So I personally pronounce it “kluge,” but I’ve certainly heard it pronounced different ways.

Can you give us a thumbnail definition?

Yeah, sure. A kluge is this term from engineering and computer science, where a kluge is something that is built to get the job done. It’s almost like something jury-rigged; it’s not necessarily the cleanest kind of system. Picture a Rube Goldberg sort of contraption: it works, it gets the job done, and it might even be elegant or interesting in its own way, in how the problem is solved, but it’s not as if you’re using those best practices, the kinds of things I mentioned earlier. It works, but it’s often messy. And it might very well be one of these systems where you have something that works pretty well, and then you just glue something on top of it in order to handle some bug or handle some new situation. And when you do a whole bunch of these, you get something that’s very kludgy. It works, but it’s by no means pretty to look at, or sometimes even understandable in the end.
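As a hypothetical illustration of how a kluge grows, here is a short Python sketch, my own example rather than Arbesman’s, in which each patch “gets the job done” while the logic gets harder to follow:

```python
# A hypothetical kluge: the original routine handles the common case,
# and each new problem gets a special case glued on instead of a redesign.

def shipping_cost(weight_kg):
    """Original routine: a flat rate per kilogram."""
    return weight_kg * 4.00

def shipping_cost_v2(weight_kg, country):
    # Patch 1: international orders broke the flat rate, so we
    # special-case them rather than rework the pricing model.
    if country != "US":
        return shipping_cost(weight_kg) * 2.5 + 10.00
    return shipping_cost(weight_kg)

def shipping_cost_v3(weight_kg, country, is_holiday):
    # Patch 2: a holiday surcharge, glued onto the previous patch.
    base = shipping_cost_v2(weight_kg, country)
    if is_holiday and country == "US":  # why only US? nobody remembers
        base += 3.00
    return base

# Each layer works, but the pricing logic is now smeared across three
# functions, and removing any one of them breaks whatever calls it.
```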

Yeah. I mean, I was amazed the other day when I was talking to an aviation specialist who was telling me about some of the complexity of United’s back end. And they were saying, well, if you’re booking a ticket on the Continental system… and I said, hang on, Continental is an airline that hasn’t existed for years.

They said, oh yeah, but a portion of United’s reservations still go through the Continental system. For all practical purposes, Continental still exists; it’s just part of United. They’ve never gotten around to changing that; it’s just too complicated for them to transfer all of that IT from the old Continental airline into the new United system. So they’re still running two parallel systems. I mean, if you ever wanted to create a scenario that had in it a seed of potential difficulties, running two parallel systems, and never having the opportunity to just shut the whole thing down to unite them, would seem to be a good way to do it. How do we deal with this? We can’t ground all of the world’s planes for a month while we try to figure this out.

Yeah, I don’t think we can. I think it’s this interesting situation where, if you want to understand the history of an organization, you can actually look at these weird, almost archaeological layers of the computer systems and technologies that a company or organization uses, because oftentimes these things are built one upon the other. And in some cases, you can try to restart something from scratch, or maybe build something in parallel, and when you’re ready, you just switch over to the newer, cleaner thing. But oftentimes that newer, cleaner thing fairly quickly becomes messier over time, because it has to deal with all the new eventualities. So it might be, okay, we merged Continental with United, and so now we’ve built a new system that’s nice and clean, but maybe it’s going to merge again with another airline, or there’s some other eventuality it has to deal with. And so it’s this almost continuing battle, and I fear a continuing losing battle, of trying to manage all the complexity, while at the same time trying to manage all the new complexity the system has to deal with, while also still trying to make whatever was there before a little bit simpler.

You talk about how in any given household, there’s always one person who you feel can fix things. You know, if I don’t know why my computer is screwing up, I can always go and ask my brother, who knows more about these things, and he’ll be able to fix it. And if no one in the house can fix it, then we think it’s just a matter of the fact that we lack the information, but that there’s someone out there who we could call into the house who would be able to fix it. And you say that in 2016, that’s no longer true. I mean, there might be one person who can fix my MacBook Air, but there’s not one person who can fix an airline system, or who can fix a complicated system inside of an automobile company, for example.

I wonder whether that’s new, because wasn’t there a philosophical thought experiment decades ago pointing out that there isn’t a single person in the world who knows how to make a pencil? You know, there are people who know how to get the graphite, there are people who know how to shave down the wood, there are people who know how to glue it all together, but no individual human being in the world can make something as simple as a pencil. So what’s new about now?

So, yes, certainly, when it comes to the division of labor and complex manufacturing systems, I think we have not been able to understand all the different aspects of how to build something for quite some time.

But I think we’re in this kind of weird new stage, and I want to say it’s in the past couple of years, or maybe the past couple of decades, where we’ve gotten to the point where we can’t even necessarily understand why something goes wrong. And I think one of the reasons is that, even though we’ve had complex machines and complex technologies for a long time, computers have created a new order of magnitude of complexity. There’s a well-known computer scientist, Edsger Dijkstra, who wrote about computer science over the past 50 years or so, and one of the things he wrote about is this idea that computer programming has a certain radical novelty to it. Even though an airplane might have thousands or even millions of components, a computer program might have tens of millions of lines of code or even more, all interconnected in a very complex way that is much, much more complex than many of our other systems. The weird thing, though, going back to everyday people’s experiences, is that we are often shielded from that kind of complexity, and so we often think there should be an expert who can understand all these systems, because we don’t realize how complex these technologies are. I actually remember, I think soon after the Apple Watch came out, there was an article in The Wall Street Journal talking about whether or not people would still buy mechanical watches, really nice, well-crafted mechanical watches. And they quoted someone being interviewed who said that he appreciates the massive complexity of a mechanical watch, as opposed to a thing like the Apple Watch, which is “really just a chip.” When in reality, a good chip is orders of magnitude more complex; we’ve just been shielded from it, so we don’t realize it. And I think that’s one of the weird things: increasingly, when even an expert doesn’t fully understand these systems, it’s incumbent upon all of us to at least be aware of the massive complexity that we’re surrounded by, even if we can’t resolve it and might not be able to understand it. I doubt many of us can troubleshoot an Apple Watch or an iPhone in all of its complexity, and maybe even the Apple Geniuses at the Apple Store can’t. But we at least need to find ways of being aware of that complexity, and recognize that these things are not just chips. They are enormously sophisticated things with complicated software built on top of them that really is very, very powerful, but also hard, if not impossible, to fully understand.

Yeah, but that person’s instinct about the watch is getting at something I think is important and human, which is that I don’t think they’re using the wrong word when they say “complex.” The chip is complex, but once you’ve made a chip, once you’ve figured out what the program is for the Apple Watch, then every individual Apple Watch is just a kind of replica. It’s a Gattaca world in which you’re just printing off bazillions of examples of sameness, right? There’s something about a mechanical watch where the complexity is alive inside the single incarnation of one physical machine, in a way that a microchip doesn’t feel like it works. You know what I mean? And I wonder whether you think that, as things get more and more complex, there could be a growing craving for things that are simpler in their complexity.

So, well, I think when this person was talking about the Apple Watch that way, I think they were also speaking about the craftsmanship, the art of the craftsmanship, which is what you’re talking about, I think. And that’s certainly something. While each individual Apple Watch might feel kind of identical in its complexity, without any sort of craftsmanship or sense of pride in each individual copy, at the same time, the total number of people who are involved in the Apple Watch certainly dwarfs the number of people who are involved in the craftsmanship of a more traditional mechanical watch.

And going back to your question of whether there’s this new trend or instinct towards maybe simpler kinds of craftsmanship: you see a little bit of that maybe in the maker movement, people trying to build their own things, whether they 3-D print something, build something mechanical, or build their own piece of hardware with software on top of it. And for me, maybe some of that maker movement grows out of this desire to have a certain amount of craftsmanship, even for things that are fairly technological. I think, though, for the most part, in the end, when you participate in making these things yourself, instead of feeling that you’re making something simpler, you’ll actually just get a better appreciation for the true staggering complexity that is all around us, as well as how easy it is to make something that’s not fully understandable. And so for me, more people participating in learning how to code, or learning how to make a simple piece of hardware, will actually do a lot of the work of getting people to appreciate the true staggering complexity of the technological world around us, as opposed to necessarily making things that are simpler. Because I think, to a certain degree, as long as you want to make things that are sophisticated and complex, they’re always going to be fairly incomprehensible. And there are ways of handling this, I think, like meeting these technologies partway and trying to gain insight into what’s going on. But I think more and more we’ll have trouble building very simple systems that actually do the kinds of things that we want them to do.

I was talking to a retired pilot who was basically saying that pilots these days don’t get a lot of opportunity to actually fly planes, really, so that if everything goes wrong, they don’t have the expertise to focus on the fundamentals.

And he was alluding to the Air France crash between Rio and Paris a few years ago, where it seems like the pilots just really missed the ball on some simple things that they should have been doing when all of the computers screwed up, because airplanes these days are so complicated that they can basically fly themselves from point to point safely without much human involvement at all. And I wonder whether you think that’s a sort of metaphor for the way the world is going in general. Is there a risk that with this overcomplexity and overautomation, we sort of lose our ability to intervene in human ways? And does that matter? Or should we say, well, the computers are somewhat better at flying planes and doing things than we are anyway, so that’s a small chance to take?

So I think, certainly, as these systems become better than humans, for the most part that’s a fair tradeoff. With self-driving cars, if self-driving cars are even just a little bit better than humans at driving: humans are terrible at driving cars, and if you think about it, it makes no sense why people should be controlling these large metal boxes at high speeds in close proximity; it sounds horribly dangerous. So I think if computers can do this in a better way, even if we might not understand all of it and there could be some failures, I think overall that is a good thing. At the same time, though, I think we still do need a certain amount of comfort in understanding, or at least trying to see, the complexity in some aspects of these systems. One of the things I talk about in my book is that if you look back at the earlier days of personal computing, one of the ways you could get a new computer program onto your Commodore VIC-20, which was the computer my family had, was to actually just type in the code yourself. You would buy these magazines that had type-in programs, with code in the back of the magazine, and you’d just enter the code manually. And even if you didn’t understand the program in all of its details, you would see this very clear relationship between the text you were typing and the resulting output. I think we’ve lost that close connection to our machines, for the most part. In our haste to automate things and have slick graphical interfaces, we’ve lost a certain amount of closeness to what’s going on technologically. And I think if we have ways to peek under the hood, even a little bit, that will do some of the work. Now, it certainly will not allow us to take control of all of our machines in this era of automation, but it might at least allow us to have a sense of what’s going on underneath these systems. And actually, people have been talking about this in AI. A lot of the new artificial intelligence systems, like these neural networks, are very hard to interpret; they might be enormously predictive and powerful, but it’s very hard to understand the way they make their decisions. So a number of people have been arguing that maybe we need to have more explainable or interpretable AI systems, because then we can better understand how they’re actually making their decisions, both when they make decisions correctly and when they fail. And I think that kind of potential change in design, or at least thinking about how to design systems that are at least a little bit more explainable, even in our era of automation, might help us feel more comfortable with these technologies.
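To illustrate the interpretability contrast with a toy example of my own (hypothetical names and made-up numbers, nothing from the book): a hand-written rule can state its reason, while a learned model gives the same kind of answer with no readable “why”:

```python
# A toy contrast between an interpretable rule and an opaque model.
# All names and numbers here are invented for illustration.

def loan_rule(income, debt):
    # Interpretable: the reason for any decision is readable right here.
    if debt / income > 0.4:
        return "deny: debt-to-income ratio above 40%"
    return "approve"

# A trained model makes the same kind of decision, but the "why" is
# smeared across learned weights -- often predictive, yet opaque.
WEIGHTS = [0.002, -0.005, 1.3]  # pretend these were learned from data

def loan_model(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return "approve" if score > 0 else "deny"  # no readable reason

print(loan_rule(income=50000, debt=30000))  # deny: ratio is 60%
print(loan_model([50000, 30000, 1]))        # deny -- but why, exactly?
```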

The book is Overcomplicated: Technology at the Limits of Comprehension. Fascinating stuff. Thanks so much for being with us.

Thank you. 


Josh Zepps


An Australian media personality, political satirist, actor, and TV show host. He lives in Brooklyn, New York. He was a founding host for HuffPost Live.