EPISODE #69: Why AI & ML Engineers Should Incorporate Value Sensitive Design Into Their Models

Guy Nadivi
13 min read · Jul 16, 2021

(Click here to listen to this podcast episode)

Had the great modernist writer D. H. Lawrence lived in our time, perhaps he would have updated his famous observation that “Ethics and equity and the principles of justice do not change with the calendar” to read: “Ethics and equity and the principles of justice do not change with shifting technological paradigms.” Today, many people realize that the shifting paradigms of AI, automation, and digital transformation will disrupt numerous human-involved processes, but few ponder how those disruptions will affect ethics, equity, and the principles of justice. Fewer still contemplate how to address these technoethical challenges, or what framework should be applied in doing so.

Enter Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. Steven emphatically advocates for Value Sensitive Design (VSD) as a systemic approach to ensuring human values are accounted for throughout a design process. Steven joins us to discuss his research, how to incorporate VSD into AI systems, and the negative ramifications of excluding VSD from the innovation process.

Guy Nadivi: Welcome everyone. My name is Guy Nadivi, and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. Steven is also Co-Editor-in-Chief of the International Journal of Technoethics, and Associate Editor of the quarterly peer-reviewed scientific journal Science and Engineering Ethics. As the pace of technology accelerates in automation, artificial intelligence, and other areas, it can be easy to overlook the accompanying ethical predicaments we face due to these advances. Nevertheless, digital transformation’s broad social impact demands that these issues be examined carefully and addressed properly. The high-tech industry has an amazing track record when it comes to problem solving, and there’s no reason it can’t rise to this challenge. With that in mind, we’ve invited Steven onto the podcast to discuss some of the work he’s doing on these ethical challenges, and to gain new insights into the intersection between high tech and humanity. Steven, welcome to Intelligent Automation Radio.

Steven Umbrello: Thank you for inviting me, Guy.

Guy Nadivi: Steven, what drew you to the field of technoethics?

Steven Umbrello: Well, I guess you could say that as a millennial I grew up on the cusp of technology becoming an integral part of everyday life. I can still remember my first flip phone. We’re pretty far removed from those days now, but we can still see how technologies incrementally build on their predecessor technologies, like a scaffold on a high-rise. I actually studied history and philosophy simultaneously at university, and I think having that historical perspective is quite useful in the philosophical realm, because it helps to give perspective. You’ve probably heard the saying that nobody sleepwalks into war, and something similar can be said of technology: no technology just stumbles into existence either. They don’t come out of nothing. Each technology we see today came about because of the foundations laid by the technologies that came before it. Like the ancient philosophers asking themselves the why of the world, I’m interested in doing the same. In our case, our world is markedly technological in comparison to other historical periods, or, better still, as we philosophers of technology like to put it, our world is sociotechnical, since we can’t really separate the technological from the social anymore; they’re one and the same thing. So as someone who studies ethics, I like to explore questions like: how can we design technologies to support rather than hinder human values? All of these types of questions fall into the purview of a subfield in which I specialize, called engineering ethics.

Guy Nadivi: Okay. What is engineering ethics?

Steven Umbrello: Well, there are a bunch of subfields in philosophy, with ethics being a relatively large umbrella of sub-disciplines that come underneath it. When most people think about ethics, they think about morality and what people ought to do, meaning what they should do in certain circumstances, and that’s generally a pretty good way of understanding ethics more broadly. So, as I said about our sociotechnical world, we can see how technologies have an inextricable impact on our daily lives, on what we find important in our lives, and on how we relate to each other. And all of this is based on how technology is designed. Engineering ethics, then, looks at the practices, the day-to-day nitty-gritty work of engineers, and shows how these activities shape our technologies and therefore our lives. What engineering ethics fundamentally explores, I guess, is how we can be responsible for our innovations: how to design technologies for the things we find important in our lives, and how to mitigate the potential bad things that may emerge from their design and therefore from their use in the world.

Guy Nadivi: That’s a good segue into my next question, which is about your research and its primary focus, something called Value Sensitive Design. What is VSD?

Steven Umbrello: Sure. I’m glad you used the acronym VSD; it’s shorthand, instead of saying Value Sensitive Design every time. Value Sensitive Design is an approach within engineering ethics, and there are many approaches, but VSD, which was developed in the early 90s, is often described as a principled approach to technology design because at its core it centers human values, things like sustainability, human autonomy, safety, usability, and accessibility, rather than relegating them to afterthoughts. I won’t go into the nitty-gritty of what VSD is methodologically, but I will say that it is itself designed in a way that rejects value trade-offs, and these value trade-offs are quite popular in mainstream culture today. Think about the popular debate between security and privacy; that’s something a lot of people get passionate about, and I think they’re right to do so. The same goes for technologies. Executives may be tempted to scoff at important values like safety, sustainability, and privacy, and the list goes on, because these values are often construed as coming into conflict with, or, as they would probably say, at the opportunity cost of, economic values like profit. VSD rejects this binary thinking and instead frames values as in tension with one another, or at least potentially in tension, not as mutually exclusive. This means that through creativity and salient design, many of these tensions can not only be resolved, but resolved in ways where the values reinforce one another. Long gone are the days when environmental sustainability had to come at the cost of profit; designing for the former may actually augment the latter, while neglecting it may decrease profits in the long run, risking the firm’s overall long-term sustainability. So VSD is attractive because it comes at no cost to firms. There are many different methods within the VSD toolkit that can be easily adopted, ranging from more time-consuming stakeholder interviews to simple three-minute brainstorming sessions between designers using a tool called Envisioning Cards. Design teams can pick up VSD tools today and begin using them immediately, with immediate results and at no cost.
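
To make this concrete for engineers, here is a minimal sketch, in Python, of one way a design team might operationalize VSD’s core move: refining abstract values into norms and then into testable design requirements, with tensions between values surfaced explicitly rather than traded away. Every value, norm, and requirement below is a hypothetical illustration, not something prescribed in the interview.

```python
# A minimal sketch of a values hierarchy: abstract values are refined
# into norms, and norms into concrete, testable design requirements.
# Every entry here is a hypothetical illustration.
from dataclasses import dataclass, field

@dataclass
class Norm:
    statement: str                 # how a value constrains the design
    requirements: list = field(default_factory=list)  # testable requirements

@dataclass
class Value:
    name: str
    norms: list = field(default_factory=list)

privacy = Value("privacy", norms=[
    Norm("minimize data collection",
         requirements=["store only the fields the model's task needs",
                       "retain raw inputs for at most 30 days"]),
])

security = Value("security", norms=[
    Norm("support incident investigation",
         requirements=["log every access to the training data"]),
])

# Listing requirements side by side surfaces a tension early: detailed
# access logs (security) pull against data minimization (privacy). On
# the VSD view this is a design problem to resolve creatively, not a
# forced trade-off where one value must lose.
for value in (privacy, security):
    for norm in value.norms:
        for req in norm.requirements:
            print(f"{value.name} -> {norm.statement} -> {req}")
```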

Guy Nadivi: What does it mean for technologies to embody values?

Steven Umbrello: Well, it means that technologies, by the very nature of the question, are never neutral; they’re always value-laden, meaning they’re carriers of values. Many scholars have observed that technologies inherit the values of their makers, their designers. One of the most famous examples was brought to light by the scholar Langdon Winner in the early 80s. He argued that the low-hanging overpasses across Long Island, New York, built in the early 20th century, had been designed intentionally low. The urban planner Robert Moses designed these low overpasses over the parkways of Long Island so that buses from New York City couldn’t reach the beaches, a place he loved dearly. Because of these design decisions, the urban poor, who were primarily African-American and depended on buses for transportation, couldn’t reach the shore. The beaches were therefore accessible only to the white upper and middle classes. So this is a clear, albeit low-tech, example of how values can be deliberately designed into technological artifacts; in this case, Moses’s racist values were the ones imbued into those bridges.

Guy Nadivi: What about artificial intelligence? Can VSD be used to design AI for human values?

Steven Umbrello: That’s a tricky question, because AI systems are very different from other technologies. First, there’s a lot of haphazard use of the term AI that doesn’t actually refer to the AI we imagine; we see this all the time in news headlines, and then you read the content and it’s not exactly what was promised. A good way to think about AI is as a class of technologies that are autonomous, interactive, adaptive, and capable of carrying out human-like tasks. In particular, AI technologies based on machine learning, another buzzword, which allows systems to learn on the basis of interaction with and feedback from their environment, are what most often comes to mind when we think about AI. The nature of these learning capabilities poses some pretty hard challenges for AI design, because AI systems are more likely than not to acquire features that were neither foreseen nor even intended by their designers, and that may even be unforeseeable. These features, as well as the ways AI technologies learn and evolve, may be opaque to humans, which means we may not be able to peer into the system and actually understand why it does what it does. But VSD is still very self-reflective and engages directly with stakeholders and their values, so I would argue that VSD, with some modifications, is uniquely suited among existing design traditions to meet the challenges posed by artificial intelligence.
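
To make the learning point concrete, here is a toy sketch (an illustration for this article, not something discussed in the interview) of how a machine-learning system acquires behavior no programmer wrote down: a tiny perceptron-style filter updates its weights purely from environmental feedback, and what it ends up “deciding” is legible only as numbers.

```python
# A toy illustration of learning from feedback: no rule for "spam" is
# ever written down; the weights absorb it from feedback alone.
# The features and feedback data are entirely hypothetical.

# Hypothetical features per message: [length, num_links, sender_is_new]
training_feedback = [
    ([0.9, 1.0, 1.0], 1),   # user marked this one as spam
    ([0.2, 0.0, 0.0], 0),   # user kept this one in the inbox
    ([0.8, 1.0, 0.0], 1),
    ([0.3, 0.0, 1.0], 0),
]

weights = [0.0, 0.0, 0.0]
learning_rate = 0.1

for features, label in training_feedback * 20:  # repeated feedback rounds
    score = sum(w * x for w, x in zip(weights, features))
    prediction = 1 if score > 0.5 else 0
    # Perceptron-style update: the environment's feedback, not a
    # programmer, determines what the system treats as "spam".
    error = label - prediction
    weights = [w + learning_rate * error * x
               for w, x in zip(weights, features)]

print(weights)  # the learned "policy": visible as numbers, opaque as reasons
```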

Guy Nadivi: Many AI projects, as they enter operational status, begin producing unexpected results, which can be positive or negative depending on stakeholder perspectives. But I’m curious, Steven: how can a Value Sensitive Design approach to AI be reconciled with an AI that generates results conflicting with desired values?

Steven Umbrello: One thing that’s fundamental to VSD is a philosophy of progress, not perfection. Trying to ensure absolutely perfect functionality after a system enters operational status is a non-starter. We have to allow for the potential for recalcitrance, for things to go wrong. That, of course, doesn’t mean we shouldn’t do our best to minimize it on the back end; we want to do that, and we should. But we also have to be prepared for unwanted emergent behavior. One way to make VSD work is not only to ensure a system’s design is aligned with the important human values we hold dear, personally and at societal and global levels, but also to manage the emergent behaviors we judge to be deleterious. So I would argue that we need to include full life cycle monitoring of AI systems as a foundational starting point in design. We need an internal mechanism by which the designers of a system, given that they are obviously the ones most familiar with its inner workings, can pull the system out of its context of use and begin another iteration of design, which I would call redesign. This is actually a familiar process to many people in the business world, particularly those who use agile approaches to design, given that they’re used to short iterative sprints in their work. Think of it as short-term sprints with long-term envisioning, where designers are always ready and prepared to extract the system and fix errors as they appear. But this means we can’t treat AI systems like normal products that we design, throw out into the world, and then wash our hands of like Pontius Pilate when things go wrong. Meaningful human control of these systems can only be attained when we take responsibility for these dynamic, changing systems over the course of their entire lifespans.
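
As a rough sketch of what full life cycle monitoring could look like in code, here is a minimal example that compares a deployed model’s recent scores to a baseline captured at release and flags the system for a redesign sprint when it drifts. The three-sigma rule and the score data are assumptions made for illustration, not Umbrello’s specification.

```python
# A minimal sketch of full life cycle monitoring: compare a deployed
# model's recent behavior to a baseline captured at release, and flag
# the system for redesign when it drifts. The 3-sigma rule is an
# assumption for illustration.
import statistics

def drifted(baseline_scores, recent_scores, n_sigmas=3.0):
    """True if the recent mean score sits more than n_sigmas standard
    errors from the baseline mean: a crude signal of emergent behavior."""
    mu = statistics.mean(baseline_scores)
    se = statistics.stdev(baseline_scores) / len(baseline_scores) ** 0.5
    return abs(statistics.mean(recent_scores) - mu) > n_sigmas * se

def lifecycle_gate(baseline, recent):
    """The 'pull it out of its context of use' step: designers stay on
    call and open a redesign sprint whenever monitoring trips."""
    if drifted(baseline, recent):
        return "pull system and start a redesign sprint"
    return "keep operating, keep monitoring"

# Hypothetical scores logged at release vs. scores observed this week
print(lifecycle_gate([0.70, 0.72, 0.69, 0.71, 0.70, 0.73],
                     [0.55, 0.58, 0.54, 0.57]))
```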

Guy Nadivi: Harvard Business School published an article not long ago calling for the auditing of algorithms the same way companies are required to issue audited financial statements. Given that AI developers can incorporate their own biases into algorithms, even unintentionally or unconsciously, what do you think, Steven, about the need for algorithm auditing?

Steven Umbrello: Well, I think it’s a promising first step, firstly because it acknowledges something we talked about earlier: the value-ladenness of technology, the fact that technologies inherit the values of their designers. Here we’re talking about the negative value of unwanted bias. There’s a real difficulty in the logistics of carrying out algorithm auditing at scale. But part of the appeal of approaches like Value Sensitive Design is that many of these biases can be weeded out early on and throughout the design program of these systems, which would reduce the overall operational costs for firms that would otherwise need to engage in bias auditing, whether on the back end or on the front end. Remember, what we’re trying to do here is marry moral values and economic values, to make them complementary with one another rather than each coming at the opportunity cost of the other.
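
For a sense of what even a simple algorithm audit might check, here is a hedged sketch of a demographic-parity test using the “four-fifths” rule of thumb borrowed from US employment-selection guidance. The groups, decisions, and threshold are hypothetical; nothing in the interview mandates this particular metric.

```python
# A hedged sketch of one simple audit: a demographic parity check on a
# model's approve/deny decisions, using the "four-fifths" heuristic.
# All data below is hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs; returns the approval
    rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Pass if every group's approval rate is at least `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    return min(rates.values()) >= threshold * max(rates.values())

audit_data = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit_data))     # {'A': ~0.67, 'B': ~0.33}
print(passes_four_fifths(audit_data))  # False: B's rate is below 0.8 * A's
```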

Guy Nadivi: It’s perhaps a strange thing to ask with regard to ethics, but is there a single metric, like ROI, that best captures the impact of incorporating Value Sensitive Design into a technology project?

Steven Umbrello: Well, if you remember, we’re not really looking for perfection in our designs, but progress. So VSD doesn’t have quantitative benchmarks or standards that must be met for success to be declared. VSD is meant to be integrated into the day-to-day practices of technology designers regardless of the domain, and therefore it’s not meant to overhaul or replace what they already do. If it did, that would be a barrier to entry, given the cost of retraining employees in a new design method. So VSD can easily assimilate existing ROI metrics to determine the success of a design. For example, I created a 15-minute field manual for designers who use the agile methodology, showing how to incorporate VSD tools into their day-to-day work, and I offer some lessons-learned questions that can help them gauge their own success in their own context of use. There’s no one-size-fits-all. There are various flexible ways that VSD can be used to achieve progress, and that flexibility is a primary driving factor behind VSD’s decades of success and its continued adoption.

Guy Nadivi: Overall, Steven, given everything you know and have seen on the ethics front with AI, are you more optimistic or more pessimistic about the future?

Steven Umbrello: I would say I’m more white-pilled, so optimistic, I guess. Although we’ve seen a lot of pushback and ethics-washing by companies, that’s mostly because they’re still thinking in binary terms and framing important values as coming at the opportunity cost of profit. But the veil is being stripped away, and people can see that the emperor has no clothes. By acting this way, these larger firms, which have shown their true colors despite their opportunistic waving of the rainbow flag once a year, may ultimately prove to be unsustainable, undermining the very thing they sacrificed those important moral values for: in this case, profit. In my experience, designers and executives actually are aligned with these human values; they’re hungry for ways to make them work, to change the world. Unfortunately, the language of values is abstract and lacks concrete forms that engineers and designers can pick up and actualize in their day-to-day practices. So part of my work is providing them with a Rosetta Stone for translating values, the language of philosophers, into design requirements, the language of engineers.

Guy Nadivi: Steven, for the CIOs, CTOs, and other IT executives listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regard to implementing Value Sensitive Design?

Steven Umbrello: Moral values are not at odds with economic values. In fact, sacrificing the former for the latter is a good way to ensure long-term unsustainability. There are existing cost-free and easy-to-adopt approaches to technology design, like Value Sensitive Design, that allow us to marry and complement these often-opposed sets of values. So I would encourage executives, if they’re concerned with the long-term viability of their firms, particularly in the wake of the growing push toward conscious capitalism, to take moral values seriously and to design for them, rather than waiting for something to go wrong first.

Guy Nadivi: All right. Well, it looks like that’s all the time we have for this episode of Intelligent Automation Radio. Steven, we cover a lot of advanced technology issues on this podcast, but I never want our audience to lose sight of the ethical issues emerging as a result of digital transformation. As automation and AI expand the boundaries of what machines can do, I think it’s important we continue listening to people like yourself who remind us what machines should do. Thank you for coming onto the show today.

Steven Umbrello: Thank you for having me.

Guy Nadivi: Steven Umbrello, Managing Director at the Institute for Ethics and Emerging Technologies. Thank you for listening everyone, and remember, don’t hesitate, automate.

STEVEN UMBRELLO

Managing Director at the Institute for Ethics and Emerging Technologies

Steven Umbrello currently serves as the Managing Director at the Institute for Ethics and Emerging Technologies. His main area of research revolves around Value Sensitive Design (VSD): its philosophical foundations and its potential application to emerging technologies such as artificial intelligence and Industry 4.0.

Steven can be reached at:

Website: https://stevenumbrello.com/

Twitter: (@stevenumbro) https://twitter.com/StevenUmbro

LinkedIn: https://www.linkedin.com/in/stevenumbrello/
