EPISODE #36: Why Baking Ethics into an AI Project isn’t Just Good Practice, it’s Good Business
As artificial intelligence opens up new possibilities that make the unimaginable routine, it also raises previously unasked ethical questions whose answers will determine whether humanity reaps the benefits of AI equitably. Increasingly, however, we find ourselves in uncharted territory, grappling with unanticipated ethical quandaries stemming from the continuing entwinement of machine learning and mankind. Perhaps now more than ever we should recall Albert Einstein’s prescient assertion that “Relativity applies to physics, not ethics.”
To help us navigate between the Scylla of moral dilemmas and the Charybdis of virtuous justifications AI provokes, we call upon Dr. Michael Quinn. As Dean of the College of Science and Engineering at Seattle University, Dr. Quinn and his colleagues recently launched a free public course entitled “AI Ethics for Business”, an offering whose timeliness can’t be overstated. We speak with Dr. Quinn about the journey students of his course will take, as well as some of the universal terrain all AI practitioners inevitably encounter. En route we’ll learn about getting software developers to think more ethically, the nine common rationalizations people use as moral excuses to avoid ethical thinking, and the dark side of Facebook founder Mark Zuckerberg’s famous motto “Move fast and break things.”
Guy Nadivi: Welcome everyone. My name is Guy Nadivi and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Dr. Michael Quinn, Dean of the College of Science and Engineering at Seattle University. Dr. Quinn and Seattle University were in the news recently for launching an online course entitled “AI Ethics for Business”, which was made possible with a gift from Microsoft.
I think the ethics of AI and the ethics of technology in general is something IT executives are beginning to look at and consider more carefully in this age of digital transformation. As new tools and capabilities are rolled out to an often largely unaware public, new and oftentimes unexpected ethical situations are emerging at the intersection between high tech and humanity. One example is online resume submissions, which I think many people still don’t realize are frequently reviewed and judged by software robots employing AI, or some other kind of decision-making capability.
So, the ethics of AI is a broad based, multifaceted subject that affects everyone, and in fact has too many implications to cover in the short timeframe this podcast affords. But we’re going to do our best to unpack what we can. Dr. Quinn, welcome to Intelligent Automation Radio.
Dr. Michael Quinn: Thank you. I’m glad to be here.
Guy Nadivi: Dr. Quinn first off, congratulations! I understand that under your stewardship, Seattle University’s College of Science and Engineering has grown about 70% in the last decade, making it the university’s fastest growing college or school. That’s an impressive growth spurt.
Dr. Michael Quinn: Thank you. Yes, it has been impressive. It shows that the STEM disciplines are very popular right now. People are moving into these disciplines because they understand that if you get a degree in a STEM field, it really sets you up for professional success. So we are the beneficiaries of that trend, and of course we’re providing a good educational environment, which is why I think we keep bringing in more students year by year.
Guy Nadivi: So Dr. Quinn, tell us a little bit about the genesis of Seattle University’s AI ethics initiative.
Dr. Michael Quinn: Sure, I’d be happy to. It all started in August 2018, when Brad Smith, the president of Microsoft, visited Seattle University. The official purpose of the meeting was to discuss Microsoft’s contribution to our Center for Science and Innovation. But Brad had something more in mind. It only took him about two minutes to commit two and a half million dollars for our construction project, and then he changed gears and shifted the conversation to ethics.
Dr. Michael Quinn: And he asked me, “Do you teach a computer ethics class at Seattle University?” And I said, “Yes we do, and we use my book.” Well, he liked that answer, and he went on to talk about the six principles Microsoft believes should guide the development of AI. He said that in his view, Seattle University stands out for grounding technology in values, and he thought we were a logical place to study the ethical implications of digital technology. And so he went on and asked, “Would you be interested in starting an ethics and technology lab at Seattle University?” And we said, “We’d definitely be interested in doing that.”
Dr. Michael Quinn: So he committed a half million dollars and the expertise of Microsoft personnel to help get that effort started.
Guy Nadivi: That’s fantastic. Can you share with us some example scenarios a student would be asked to examine in your new course?
Dr. Michael Quinn: Mm-hmm (affirmative). There are some great scenarios. In the very first module there’s an extensive case study related to facial recognition and it leads up to the question “What uses of facial recognition, if any, should Congress decide are permissible?” So the idea is to give the students the opportunity to find out the history of facial recognition or just the various ideas around trying to infer things from looking at people’s faces, both going back more than a century, but also modern efforts that the Chinese are using, for example. And then trying to help students understand that there are discussions going on about regulation and to think about what would be the appropriate amount of regulation, if any, that Congress should enact in this area.
Dr. Michael Quinn: And then toward the end of the short course, students can select a case study based on their job title. So, for example, software engineers are asked to consider the questions, “What are the unintended uses of the product or service you’re working on?” and “How can you code around them?” People at a higher level, for example product managers, are asked to address the question, “How do you balance the customer’s best interests, stated or unstated, with your company’s, and what responsibility do you have to ensure fair market practices?”
Dr. Michael Quinn: So, what we’re doing is helping students understand that they’re coming up with ethical issues in their own work and giving them the opportunity to reflect on what they’re working on and how they might approach some of these questions in a new way after taking this course.
Guy Nadivi: Hmm. You know, when I was in college and in my professional life, the best classes I took, not only taught me something new, but prompted me to look at the world in a new way and incorporate that refreshed outlook into my thinking going forward. For the people taking your AI Ethics in Business course, what new way do you want them to look at their world after the conclusion of the course?
Dr. Michael Quinn: We want them to be more tuned in, I guess I’d say, more tuned in to the ethical issues that are constantly arising. It’s so easy for people in technical fields to become so narrowly focused on problem solving, just getting a system to work, that they can ignore the big picture. But of course the big picture is important. So how is this new device going to impact society? What are going to be the consequences of using this new technology?
Dr. Michael Quinn: We want them to be opening their eyes and taking a broader view. And so, the real question is how do you get software developers, for example, to pay attention to these issues? And we have found that one way is to help them become aware of rationalizations that they employ that keep them from engaging in ethical thinking.
Dr. Michael Quinn: So, what we’re doing in our online short course is including a segment focusing on nine common rationalizations, also called moral excuses. And so, we run through these various excuses. What that really does is make the scales fall from the eyes of people who go through the course, because they can see in their own experience how they or other people have acted accordingly, or have used these rationalizations.
Dr. Michael Quinn: So for example, I’m just going to run through them very quickly. Here’s nine common rationalizations that people use to avoid ethical thinking. I’m just doing my job. Everyone does it. If I refuse to do it, someone else will. I don’t have time to think about this. I have a very important goal to achieve. It’s not hurting anyone very much. The long-term benefits will be greater than the short-term harms. I don’t want to be disloyal to my… fill in the blanks, my company, my coworkers. Who am I to say what is right or wrong?
Dr. Michael Quinn: And what we’ve found is that once people get exposed to these ideas, then as you start looking at various case studies and how the different people in a situation are responding, it’s very easy to say, “Oh, that person is focusing on the very important goal they’re trying to achieve.” Or, “This person feels like they don’t have time to consider this, they have a deadline to meet.” Or, “This person is taking an ends-justify-the-means approach.”
Dr. Michael Quinn: And so, by making students aware of these rationalizations early on, that really sets them up to be tuned in to some of these issues as they show up later in the course, and then of course in their real lives as well.
Guy Nadivi: Hmm. Those rationalizations are actually something worth teaching students about even before they get to college. Now, Dr. Quinn, given the growing ubiquity of AI and other technologies, should a course in AI ethics be mandatory for computer science students, perhaps even students in any STEM major?
Dr. Michael Quinn: Well, of course, as the author of an ethics book, I think the answer to that question is “yes”. This is extremely important material. Some universities have been doing it for a long time. I started teaching computer ethics at Oregon State University back in 1994, I mean, more than a generation ago, right? 25, 26 years ago. So some universities have been doing it an awfully long time and others are newer to the game.
Dr. Michael Quinn: I was a little bit surprised to read in the New York Times a couple of years ago about new courses being developed at Stanford, MIT, Harvard, the University of Texas, these universities realizing the importance of computer science students understanding the ethical implications of AI. Now, I haven’t done the research, but I’m skeptical that those students haven’t had any ethics education before. What I’m thinking is that these universities are introducing new courses because of the impact that AI systems are having on the world.
Dr. Michael Quinn: But at any rate, whether universities were early to the game or are later to the game, I do think it’s something that every computer science student should take. And really, if you think about it, why shouldn’t every university student be exposed to some of these ideas? Because we want everybody coming out being informed citizens, being able to have a perspective on these issues that they can communicate to their legislatures and people representing them.
Dr. Michael Quinn: So, I would say that everyone should have some familiarity, not just with how AI works, but also some of the implications of the misuse of AI.
Guy Nadivi: Now, what about in the working world? Should enterprises make some type of ethics course mandatory for staff involved in developing AI projects?
Dr. Michael Quinn: I think so. And that relates to just this idea that people should be thinking about these issues early on in the product development process. Ethics shouldn’t be some sort of checklist at the end. By that time there’s too much momentum for a project to be stopped if all this work has been done. I’m surprised actually sometimes by what I hear from industry. A group of us from Seattle University were visiting a tech company last year and we were talking about doing a short course for their software engineers and they told us, “You have to start at zero. It’s safe to assume our engineers have had zero formal education in ethics.”
Dr. Michael Quinn: And we were surprised to hear that, but it just made us feel that, wow, there really is a need out there for continuing education for people in industry so that they can come up to speed on some of these issues. Laura Norén at New York University says, “We need to at least teach people that there’s a dark side to the idea that you should move fast and break things. You can patch the software, but you can’t patch a person if you damage someone’s reputation.”
Dr. Michael Quinn: And I think that’s a really important realization for people in industry: “move fast and break things” is a dangerous motto, or way of operating, if you’re developing products and services that can impact human lives the way AI can.
Guy Nadivi: Now, you mentioned Harvard. And their business school published an article not too long ago calling for the auditing of algorithms the same way companies are required to issue audited financial statements. Given that AI developers can incorporate their own biases into algorithms, even unintentionally or unconsciously, what do you think about the need for algorithm auditing?
Dr. Michael Quinn: We need to be able to trust the decisions made by AI systems. And in order for that to happen, we must get good at auditing these systems. Machine learning is currently the dominant technology for creating modern AI systems, and machine learning uses datasets from the past to create a model of reality that allows decisions to be made in the future. So it doesn’t matter whether or not the developers of the system are prejudiced: if the dataset contains biases, those biases can affect the model of reality and lead to biased decisions in the future.
Dr. Michael Quinn: Also, it’s really important to understand what the AI system is trying to optimize. There can be different definitions of fairness and a system can be fair in one regard and unfair in another, and it can be impossible for the system to be fair in both dimensions at the same time. So for these reasons, there must be a way for the public to learn about the underlying design of AI systems making important decisions. What are the functions they’re trying to optimize? What are the examples they’re being fed?
Dr. Michael Quinn: Sometimes people have made decisions to reduce bias that actually make things worse. For example, you might argue that you don’t want information about people’s race included in a dataset because you don’t want the decision of the system to depend upon a person’s race. But that’s often a mistake for two reasons. First, there are usually other pieces of data highly correlated with race, so simply removing the race field doesn’t guarantee that the trained algorithm won’t make racially biased decisions.
Dr. Michael Quinn: And then secondly, if you remove the racial information from the dataset, you take away the ability to do a simple audit of how the system treats members of different racial groups. So, this is a big area and it’s very important.
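The simple audit Dr. Quinn describes, comparing how a system treats members of different groups, can be sketched in a few lines of Python. The group labels and decision data below are purely hypothetical, but they illustrate the point: the audit is only possible if group membership is retained in the data.

```python
# Hypothetical sketch of a group-level audit: compare a model's
# approval rates across demographic groups. The groups and decisions
# here are illustrative, not from any real system.
from collections import defaultdict

# Each record: (group label, model's decision: True means approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

# Approval rate per group; a large gap is a signal worth investigating.
rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
```

Note that if the group label were stripped from the dataset, this check could not be run at all, which is exactly Dr. Quinn’s second point about removing the race field.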
Guy Nadivi: It’s perhaps a strange thing to ask with regard to ethics, but it’s a question that will unquestionably be asked in corporations and other for-profit organizations: is there a single metric, like ROI, that best captures the impact of incorporating ethics into an AI project?
Dr. Michael Quinn: ROI, wow. I don’t know of a single metric like ROI that captures the impact of incorporating ethics into an AI project, but people do talk about bottom line benefits of ethics programs. We all know about Enron, of course. So, it’s certainly true that companies that act unethically can collapse. And even when a company doesn’t go under, unethical actions can lead to lawsuits, fines, regulations, harm to the brand, and any number of other negative outcomes that either increase expenses or decrease revenues. But I also want to say that I don’t think everything that is important can be quantified. I like Anthony J. D’Angelo’s quote, “The most important things in life aren’t things.”
Dr. Michael Quinn: And I tell my students, ethics is ultimately about two questions, “What kind of person do I want to be?” and “What kind of world do I want to live in?” And how do you put a metric on being able to look at yourself in the mirror in the morning? But shouldn’t we all aspire to living lives of integrity? I think we should.
Guy Nadivi: Well put. Overall, Dr. Quinn, given everything you know and have seen on the ethics front with AI, are you more optimistic or pessimistic about the future?
Dr. Michael Quinn: I’m definitely optimistic about the future. I’m excited about the many ways AI will be able to contribute to human flourishing in so many different parts of society really. In healthcare for example, AI can help us stay well and when we do get a disease, AI can help us detect diseases sooner and diagnose them more accurately and treat them more effectively. Or in transportation, there’s good reason to believe we’ll see a world with fewer accidents, a higher utilization of existing roadways, less traffic congestion. We’ll have educational experiences more tailored to the learning abilities of individual students. And of course these are just a few examples.
Dr. Michael Quinn: We’re going to see amazing breakthroughs through the next few years, we just need to keep in mind as we’re innovating that we’re not developing new technologies for their own sake. The goal of technological change should be human flourishing.
Guy Nadivi: Dr. Quinn, for the CIOs, CTOs and other IT executives listening in, what is the one big must have piece of advice you’d like them to take away from our discussion with regards to ethics in AI?
Dr. Michael Quinn: Okay, just one, huh? I think company culture starts at the top, and employees know when a company is walking the talk and when it’s just checking a box. So my advice for C-suite executives would be: you need to ask yourselves whether you’re really committed. Are we going to use AI not just to enhance our bottom line, but to increase human flourishing? That’s the question they need to ask.
Guy Nadivi: All right. Looks like that’s all the time we have on this episode of Intelligent Automation Radio. Dr. Quinn, what a treat it’s been having you with us to shed some light on an important but often overlooked aspect of digital transformation, the ethics of AI. It’s been great speaking with you. Thank you so much for joining us today.
Dr. Michael Quinn: You’re welcome. It was a pleasure.
Guy Nadivi: Dr. Michael Quinn, Dean of the College of Science and Engineering at Seattle University. Thank you for listening everyone. And remember, don’t hesitate, automate.
DR. MICHAEL QUINN
Dean of the College of Science and Engineering at Seattle University.
Michael J. Quinn is Dean of the College of Science and Engineering and Director of the Initiative in Ethics and Transformative Technologies at Seattle University. He earned a B.S. in mathematics from Gonzaga University and an M.S. in computer sciences from the University of Wisconsin-Madison. He worked for two years as a software engineer at Tektronix before returning to graduate school to complete a Ph.D. in computer science from Washington State University.
Before joining Seattle University, Dr. Quinn was a computer science professor for more than two decades, first at the University of New Hampshire, and then at Oregon State University. He did pioneering research in the field of parallel computing, and his textbooks on that subject have been used by hundreds of universities worldwide. In the early 2000s his focus shifted to computer ethics, and in 2004 he published a textbook, Ethics for the Information Age, that explores moral problems related to modern uses of information technology, such as privacy, intellectual property rights, computer security, software reliability, and the relationship between automation and unemployment. The book, now in its eighth edition, has been adopted by more than 125 colleges and universities in the United States and many more internationally.
Dr. Quinn can be reached at:
Personal Website: www.michaeljquinn.net
Ethics Initiative: www.seattleu.edu/ethics-and-technology