MINDWORKS

Mini: Why now? Ethics and AI (William Casebeer and Chad Weiss)

Daniel Serfaty

Well before we studied the formal principles of engineering and design, we studied ethics. Given that artificial intelligence is becoming more and more advanced each year, is it time that we study the ethics of artificial intelligence? MINDWORKS host Daniel Serfaty sits down with Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center, and Mr. Chad Weiss, Senior Research Engineer at Aptima, as they make the case for the study of the ethics of artificial intelligence.

Listen to the entire interview in The Ethics of Artificial Intelligence with William Casebeer and Chad Weiss

Daniel Serfaty: So what we're talking about, generally speaking, is really the future of work, of war, of transportation, of medicine, of manufacturing, in which we are blending, basically, different kinds of intelligences, artificial and human. And we are at a moment of a perfect intersection between technology and philosophy. Let's call it by its name: ethics. The ancient Greek philosophers studied ethics way before formal principles of engineering and design. So why now? Why is it important now, at this juncture, to understand and study the ethics of artificial intelligence? Why now? Bill?

Bill Casebeer: I think there are really three reasons why it's so important now that we look at the ethics of technology development. One is that our technologies have advanced to the point that they are having an outsized effect on the world we live in. So if you think over the span of evolutionary timescales for human beings, we are now transforming ourselves and our planet in a way that has never been seen before in the history of the earth. And so given the outsized effect that our tools and ourselves are having on our world, now more than ever it's important that we examine the ethical dimensions of technology.

Second, I think that while we have always used tools, and that's the defining hallmark of what it means to be a human, at least in part, that we are really good tool users, we're now reaching a point where our tools can stare back. So the object stares back, as the infamous saying goes. So in the past, I might've been able to use a hammer to drive a nail into the roof, but now, because of advances in artificial intelligence and machine learning, I can actually get some advice from that hammer about how I can better drive the nail in. And that is something that is both qualitatively and quantitatively different about our technologies than ever before.

Third, and finally, given that we are having dramatic impact and that our technologies can talk back, if you will, they're becoming cognitive, there's the possibility of emergent effects. And that's the third reason why I think that we need to think about the ethics of technology development. That is, we may design systems that, because of the way humans and cognitive tools interact, do things that were unintended, that are potentially adverse, or that are potentially helpful in an unexpected way. And that means we can be surprised by our systems, and given their impact and their cognition, that makes it all the more important that we think about those unanticipated consequences of these systems that we're developing. So those are at least three reasons why, but I know Chad probably has some more or can amplify on those.

Chad Weiss: Yeah. So I think it's an interesting question; perhaps the best answer is, if not now, when? But I also don't see this as a new phenomenon. I think that we have a long history of applying ethics to technology development and to engineering specifically. When I was in grad school, I joined an organization called the Order of the Engineer, which I believe some of my grad school mates found a little bit nerdy at the time, but it fit very well with my sort of worldview. And it's basically taking on the obligation as an engineer to operate with integrity and in fair dealing. And this dates back to, I believe, the 1920s, after a bridge collapse in Canada, when it became readily apparent that engineers have an impact on society.

And that as such, we owe a moral responsibility to the lives that we touch. In the case of AI, I think that the raw power of artificial intelligence, or of these computational methods, presents some moral hazards that we need to take very seriously. And when we talk about ethics in AI, one thing I've noticed recently is that you have to be very deliberate and clear about what we're talking about. When we say artificial intelligence, if you read between the lines of many conversations, it becomes readily apparent that people are talking about vastly different things. The AI of today, or what you might call narrow AI, is much different from the way that we hypothesize something like an artificial general intelligence, which has intelligence closer to what a human has. These are very different ethical areas, I think. And they both deserve significant consideration.

Daniel Serfaty: Thank you for doing a 360 on this notion, because I think the definitions are important, and those categories, Bill, that you mentioned are very relevant. I think what most people worry about today is your third point. Which is: I can design that hammer, and it may give me advice on how to hit a nail, but can the hammer suddenly take initiatives that are not part of my design specification? The notion of emergent, surprising behavior.

I mean, Hollywood made a lot of movies and a lot of money just based on that very phenomenon of suddenly the robot or the AI refusing to comply with what the human thought should be done. Let's start with an example, perhaps. Pick one example that you're familiar with, from the military or from medicine, it can be robotic surgery, or from education, or any domain that you are familiar with, and describe how the use of it can represent an ethical dilemma.

I'm not yet talking about the design principles. We're going to get into that, but more of an ethical dilemma, either for the designers who designed those systems or for the operators who use those systems. Could you share one example? I know you have tons of them, but pick one for the audience so that we can situate at least the kind of ethical dilemmas that are represented here. Who wants to start?

Bill Casebeer: I can dive in there, Daniel, and let me point out that Chad and I have a lot of agreement about how the history of technology development has always been shot through with ethical dimensions. And some of my favorite philosophers are the ancient virtue theorists out of Greece, who were even then concerned to think about social and physical technologies and how they impacted the shape of the polis, of the political body.

It's interesting that Chad mentioned the bridge collapse. He might've been referring, correct me if I'm wrong, Chad, to the Tacoma Narrows bridge collapse, where a change in the design of the bridge, the elimination of trusses, was what actually caused the aeroelastic flutter that led to the bridge oscillating and eventually collapsing. There's dramatic footage that you can see on YouTube of the collapse of Galloping Gertie.

And so that just highlights that these seemingly mundane engineering decisions we make, such as "I'm going to build a bridge that doesn't have as many trusses," can actually have a direct impact on whether or not the bridge collapses and takes some cars with it. So in a similar fashion, I'll highlight one technology that demonstrates an ethical dilemma, but I do want to note that I don't know that confronting ethical dilemmas is actually the best way to think about the ethics of AI or the ethics of technology. It's a little bit like the saying from, I think it was Justice Holmes, that hard cases make bad law. And so when you lead in with a dilemma, people can immediately kind of throw up their arms and say, "Oh, why are we even talking about the ethics of this? Because there are no clear answers and there's nothing to be done."

When in fact, for the bulk of the decisions we make, there is a relatively straightforward way to design and execute the technology in such a fashion that it accommodates the demands of morality. So let me throw that caveat in there. I don't know that leading with talk of dilemmas is the best way to talk about ethics and AI, just because it immediately gets you into Terminator and Skynet territory. Which is only partially helpful.

Having said that, think about something like the use of semi-autonomous or autonomous unmanned aerial vehicles to prosecute a conflict. So in the last 20 years, we've seen incredible developments in technology that allow us to project power around the globe in a matter of minutes to hours, and where we have radically decreased the amount of risk that the men and women who use those systems have to face as they deliver that force.

So on the one hand, that's ethically praiseworthy because we're putting fewer people at risk as we do what warriors do: try to prevail in conflict. It's also ethically praiseworthy because if those technologies are constructed well, then they may allow us to be yet more discriminate as we prosecute a war. That is, to reliably tell the difference between somebody who's trying to do us harm, and hence is a combatant, and someone who isn't and is simply a person on the battlefield.

And so those are two ethically praiseworthy dimensions of being able to drop a bomb from afar: you put fewer lives at risk, you put fewer warriors at risk, and you potentially become more discriminate, better able to tell the difference between combatants and non-combatants, which morality demands of us if we are going to be just warriors.

However, the flip side of that is that being far removed from the battlefield has a couple of negative effects. One is that it makes you less sensitive as a human being, potentially, to the damage that you're doing when you wage war. So when you are thousands of miles away from the battlefield, it's a little bit harder for you to see and internalize the suffering that's almost always caused whenever you use force to resolve a conflict. And that can cause a deadening of moral sensibilities, in such a way that some would say we perhaps become more likely to use some of these weapons than we otherwise would if we were allowed to internalize firsthand the harm that can be done to people when you drop fire from above on them.

Secondly, if we delegate too much authority to these systems, if they're made up of autonomous, semi-autonomous, and non-autonomous components, then there's the likelihood that we might miss certain dimensions of decision-making that are spring-loading us to use force when we don't necessarily have to.

So what I mean by that is that there are all kinds of subtle influences on things like the deadly force judgments and decisions that we make as warriors. And let me use a homely example to drive that home. When I was teaching at the Air Force Academy, we had an honor code. The cadets all swear that they will not lie, steal, or cheat, nor tolerate among the cadet body anyone who does. And you might think that it is a matter of individual judgment to do something that you or I might later regret when it comes to, say, preparing for a test. You might make that fateful decision to cheat on an exam in a way that ultimately serves no one's interests, neither those who want people who know the subject matter, nor those who want individuals to be people of integrity, who don't cheat or lie.

But it turns out that when you look at the data about what leads cadets, or really any human being, to make a bad decision, a decision they later regret, there are lots of other forces that we need to take into account. And in the case of those students who cheated, oftentimes there were precipitating conditions like a failure to plan, so that they had spent several sleepless nights before the fateful morning when they made a bad decision to cheat on an exam. And so the way to build a system that encourages people to be their best selves is not necessarily to hector or lecture them about the importance of making a decision in the moment about whether or not you're going to cheat on the exam. It is also to kit them out with the skills that they need to be able to plan their time well, so they're not sleepless for several days in a row.

And it also consists in letting them know how the environment might exert influences on them that could cause them to make decisions they would later regret. So we should also be thinking about those kinds of things as we engineer these complicated systems that deal with the use of force at a distance. So I consider that to be a kind of dilemma: technologies that involve autonomous and semi-autonomous components that have upsides, because they put fewer warriors at risk and allow us to be more discriminate, but that also may deaden us to the consequences of a use of force, and might also unintentionally cause us to use force when we would otherwise decide not to, if the system took into account all of the social and psychological dimensions that support that decision.