EPISODE #17: Back to the Future of AI & Machine Learning
--
What do you think of when you hear the phrase “artificial intelligence”? Do your thoughts veer more towards the Terminator or the Robot (Will Robinson’s guardian from “Lost in Space”)? Do you envision future machines being more like HAL from “2001: A Space Odyssey” or Samantha from the 2013 movie “Her”? Nobody knows for sure how real-world AI will eventually evolve, but one person has a better idea than most, and that’s led him to predict where AI, machine learning, & automation will have the biggest impacts.
Manish Kothari, President of SRI International, leads an organization that’s world-renowned for its innovations across a broad spectrum of focus areas. He joins our show to share his insights on the areas AI, machine learning, & automation are disrupting the most. Along the way we’ll learn whether AI will augment more people or replace more people, whether AI algorithms should be audited like financial statements, and which industry stands to benefit the most from automation, AI, and machine learning.
Guy Nadivi: Welcome, everyone. My name is Guy Nadivi and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Manish Kothari, President of SRI International. SRI, formerly the Stanford Research Institute, was established in 1946 and has become one of the world’s leading R&D labs for government agencies, commercial businesses, and private foundations.
SRI has a staggering number of focus areas they’ve raised the bar in, including biomedical sciences, chemistry, computing, earth and space systems, and much, much more. And they’ve got the results to prove it. To date, they’ve received more than 4,000 patents and patent applications worldwide, including my favorite one, patent number 3541541 filed in 1967 for the first mouse prototype. So everybody listening today has benefited from SRI’s R&D. And with credentials like that, we thought it only fitting that we bring their President on the show to talk about some of the really exciting stuff they’re doing with automation, AI, and machine learning. By the way, in the interest of disclosure for our audience, SRI is a design partner with Ayehu, the sponsor of this podcast.
Manish, welcome to Intelligent Automation Radio.
Manish Kothari: Thanks, Guy.
Guy Nadivi: Manish, I imagine that you’re involved with so many amazing new technologies that it must be impossible to pick just one that you’re most excited about. So instead, I want to ask you to tell us what you think are going to be some of the biggest disruptions we’ll see 3, 5, or 10 years from now, with respect to automation, AI, machine learning, et cetera.
Manish Kothari: Yeah, that’s great, Guy. First of all, thanks for bringing me on the podcast. Pleased to be here. And thanks for that great introduction about SRI.
Regarding disruptions, they’re happening in many different areas. I’ll focus on the ones around automation, AI, and machine learning, as you said. There are tremendous advances happening in other areas too, such as biology and synthetic biology, which also involve AI, but let’s focus on a few in the near future around AI and automation.
With automation, you can really think of it as a convergence happening between the physical world and the cyber world. That convergence is enabling us to do more and more things through the use of software, versus purely through the use of hardware.
AI in particular has been around for a while. It started back in the ’60s and ’70s as rules-based systems. It then lost popularity, as many people are aware, and regained popularity over the last 5 to 10 years as neural nets came about as an alternative way to think through these problems. AI has now been moving and maturing from the application of neural nets to pretty much everything, primarily classification, to much more sophisticated ways of using it.
One of the real areas of disruption and innovation we’re seeing is where AI can become much more explainable. Today neural nets are largely a black box. You put the data in and you build a model, and then you apply it to get an answer. You don’t know exactly why the answer is what it is, or how it was reached. This has traditionally been challenging for specialists to accept, because they like to understand the logic or the underlying model behind it.
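To make the black-box point concrete, here is a minimal sketch of one common explainability technique, permutation importance, applied to a small neural net. The dataset, model, and scikit-learn tooling are illustrative assumptions, not a description of SRI's methods.

```python
# A minimal sketch of peeking into a black-box neural net with permutation
# importance. Dataset, model, and parameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

baseline = model.score(scaler.transform(X_test), y_test)

# Shuffle one feature at a time; the accuracy drop hints at how much the
# black-box model relies on that feature -- a first step toward explanation.
rng = np.random.default_rng(0)
for i, name in enumerate(load_breast_cancer().feature_names):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, i])
    drop = baseline - model.score(scaler.transform(X_perm), y_test)
    print(f"{name:25s} importance ~ {drop:+.3f}")
```

Techniques like this don't fully explain a model, but they give specialists a foothold on which inputs drive its answers.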
The next generation of AI systems is starting, just starting, to go into the areas of explainability, and even beyond, into the areas of common sense reasoning. There are DARPA programs happening in all of those areas. We should anticipate that within three to five years these techniques will have a profound impact on AI, and just as importantly on the dispersal of AI within the common working ecosystem, because once you have those techniques it really does transform things.
I’ll make one more comment here, which is, one of the challenges with artificial intelligence has been: do you have an appropriate twin of the environment you’re working in so you can model AI appropriately? Do you understand the objective functions? The math around how to handle multiple objective functions, how to deal with those elements, is all coming to fruition right now as well.
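As a simple illustration of the math around multiple objective functions, here is a minimal sketch that collapses two competing objectives into one via weighted scalarization. The objectives, the weights, and the use of SciPy are assumptions made for the example, not SRI's formulation.

```python
# A minimal sketch of handling multiple objective functions via weighted
# scalarization. The objectives and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def cost(x):   # objective 1: e.g., operating cost
    return (x[0] - 3.0) ** 2 + x[1] ** 2

def risk(x):   # objective 2: e.g., safety risk
    return x[0] ** 2 + (x[1] - 2.0) ** 2

def scalarized(x, w_cost=0.7, w_risk=0.3):
    # Collapse two competing objectives into one; changing the weights
    # traces out different trade-offs (points on the Pareto front).
    return w_cost * cost(x) + w_risk * risk(x)

result = minimize(scalarized, x0=np.zeros(2))
print("chosen trade-off:", result.x,
      "cost:", cost(result.x), "risk:", risk(result.x))
```

Re-running the optimization with different weights is one simple way to explore how much of one objective you trade for another.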
And maybe I’ll end by saying there’s a Back to the Future moment taking place as well. We used to do everything on the edge. The last 15 to 20 years have been all about moving to the cloud, and as AI is maturing and becoming faster, lighter, quicker, we’re talking about moving back to the edge. Enabling decisions to be made using neural nets, AI methodologies, and machine learning methodologies, but doing it back at the edge. That is something that is happening quite rapidly right now, and three or five years from now we won’t be talking about just the cloud, we’ll be talking about hybrid edge-cloud approaches to AI.
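As a minimal sketch of the hybrid edge-cloud idea under stated assumptions, the example below trains a model with full tooling (the "cloud" step) and then runs inference from a small exported parameter file using plain NumPy (the "edge" step). The model, dataset, and file format are illustrative choices, not a prescribed architecture.

```python
# A minimal sketch of hybrid edge-cloud AI: heavy training in the cloud,
# lightweight inference at the edge. Model and data are assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# --- Cloud side: train and export only the learned parameters -------------
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
np.savez("edge_model.npz", coef=clf.coef_, intercept=clf.intercept_)

# --- Edge side: load the tiny parameter file and predict locally ----------
params = np.load("edge_model.npz")

def edge_predict(x):
    # Same linear decision function the trained model uses, no sklearn needed.
    logits = x @ params["coef"].T + params["intercept"]
    return int(np.argmax(logits))

sample = X[0]
print("edge prediction:", edge_predict(sample),
      "cloud prediction:", int(clf.predict([sample])[0]))
```

The design choice is simply to keep the heavyweight tooling where resources are plentiful and ship only what the edge device needs to make a decision.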
Guy Nadivi: Given the advanced level of automation, AI, and machine learning that you work with at SRI, do you think that these technologies will augment more people, or replace more people?
Manish Kothari: That’s a great question, Guy, and the answer is really both, with augmentation being, I think, the greater share of the cases compared to complete automation. And there are really two reasons. One, AI is not yet very strong in elements like common sense reasoning, and common sense reasoning is a fundamental part of how we live our day-to-day lives and make business decisions. So there will be an element of humans-in-the-loop required to develop these areas, and there you’re going to end up with a lot of augmentation. Now augmentation is terrific, because it relies on the AI to do some of the very complex tasks that human beings typically have trouble doing. When you have multiple independent inputs into making a decision, beyond a certain number humans have trouble processing that dataset. AI can handle it. Making decisions out of that will be an augmentation methodology, and that will be very important.
I will make one more comment here though. When you think about augmentation and automation, it’s important to think about it early. When you’re automating, human factors are not that relevant, so you can design the AI systems to do many other things. When you augment, you have to think about the human-in-the-loop and how that interaction is going to take place. If you do that correctly, your productivity boost from AI can be tremendous. Maybe I’ll go into this in a little more detail.
Let’s take the example of a physician. A physician is standing in front of a patient, let’s say she’s 85 years old, and the physician prescribes an ultrasound for an exam. You go ahead and use your AI system, and the AI system says, “This person should get an MRI.” The physician looks back at the system and says, “I don’t just think, I know that the right answer is an MRI. However, this person is frail, they’re old, the MRI is six weeks away, and there’s a 20% chance that the ultrasound will give me an answer. I’m going to go that route first, knowing it’s not the better route.” This is a perfect example where, if the AI system were intended to automate this process, it might actually come up with the wrong decision for the patient. But if it is intended to augment, it may indeed come up with the correct decision for this patient.
Another example of this is the idea of context. If you have an X-ray and you’re analyzing it with a neural net or machine learning model to evaluate whether there’s lung cancer, or a nodule or not, you may find a nodule. A physician is typically going to ask, “Where have you visited over the last five years?” And if you had visited, let’s say, Southeast Asia, they will do a further exam to evaluate whether it’s tuberculosis or not. If you don’t ask that question you will probably conclude it is a cancer symptom and move forward. This is an example of an area where context becomes important, and augmentation is a really handy way to add that context to the system. The math around AI today is not very strong at context evaluation. It is a work in progress, and as I said in my earlier answer, 5 or 10 years from now the answer may be different, but as of now, augmentation is really where it’s at.
Even in IT, I’ll end with an example. With the proper use of machine learning-based systems, instead of needing a person with 10 years of experience to handle every critical problem, you could have somebody with one year of experience, augmented by the AI system, which takes the expert knowledge and makes that person with one year of experience as valuable, or as effective, as somebody with 10 years of experience. So if you think about how you implement this, AI is actually a way to really democratize the ability of many, many different groups to achieve things that were previously possible only by hiring hundreds and hundreds of people. That doesn’t mean it’s replacing those people; it’s really democratizing and making those techniques available to groups that couldn’t have them previously. So it’s actually very strong on the augmentation front.
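As an illustrative sketch of that kind of IT augmentation, the example below surfaces the most relevant entries from a senior engineer's incident runbook for a newer engineer. The runbook contents, the ticket text, and the TF-IDF similarity approach are all assumptions for illustration, not a description of any specific product.

```python
# A minimal sketch of augmenting a first-year engineer with expert knowledge:
# rank past incident resolutions by similarity to a new ticket.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

runbook = [
    "Disk usage above 90% on database host: rotate logs and extend volume",
    "API latency spike after deploy: roll back release and warm caches",
    "Authentication failures from LDAP: restart directory sync service",
]

vectorizer = TfidfVectorizer().fit(runbook)
runbook_vectors = vectorizer.transform(runbook)

def suggest(ticket_text, top_k=2):
    # Rank past resolutions by textual similarity to the new ticket.
    scores = cosine_similarity(vectorizer.transform([ticket_text]),
                               runbook_vectors)[0]
    return sorted(zip(scores, runbook), reverse=True)[:top_k]

for score, entry in suggest("users report login errors and LDAP timeouts"):
    print(f"{score:.2f}  {entry}")
```

The human still decides what to do; the system simply puts the accumulated expert knowledge within reach, which is the augmentation argument in miniature.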
Guy Nadivi: In 2017, Vladimir Putin said, “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” Do you agree with that?
Manish Kothari: I think there’s no doubt that artificial intelligence is going to have a major impact on the future of what humans do, how productive they can be, and what they are able to achieve. By democratizing things that were previously available only to a select few extremely high-caliber people, we have now enabled a large portion of humanity to think and achieve things that previously weren’t possible. So think of it this way: instead of 10 inventions, we are now going to have 1,000 great inventions. There is no question that artificial intelligence is enabling a sea change already, and will enable stronger, more colossal opportunities. And it’s important to remember, this is not just going to be on the IT side, it’s going to be in medicine, it’s going to be in healthcare, it’s going to be in biology. That’s absolutely true.
It is also true that, if you treat neural nets as the black box they can be, then it becomes difficult to predict what outcomes will come out of them, and if you give them a degree of control without managing it, then there is a risk of threats that are difficult to predict. If somebody looking at a radar signal has to decide whether it’s an airplane or a missile, and you’re relying completely on an AI system, that could be quite threatening for humanity. That, I think, is true. Then there’s the last part, that whoever becomes the leader in this field will be the ruler of the world.
I think the good news is, the AI community is a very robust community that is providing a lot of its algorithms and developments in an open source environment, so I actually think very positively about how this technology is permeating the world. I think the people who are strong at it will have significant advantages, but there will be a lot of room for a lot of people to become strong at it. So as a result, yes, it’s really important, and yes, there are some threats that are difficult to predict, but I feel really confident and positive about the long-term impact of AI on society.
Guy Nadivi: We hear a lot about AI and machine learning, but we hear much less about the underlying models they’re based on, whether they’re decision trees, neural networks, random forests, or something else. And these models are built upon algorithms. Late last year, Harvard Business School published an article calling for the auditing of algorithms, the same way companies are required to issue audited financial statements. Now given that SRI develops many proprietary and even classified AI and machine learning algorithms, what do you think about the need for algorithm auditing?
Manish Kothari: That’s a really interesting question. First of all, let’s start by saying that a lot of the algorithms being developed are put into open source, and the open source community is very active in building them. So there is a form of openness towards evaluating algorithms. That said, all companies are using their own mixes of those algorithms, developing them in proprietary ways, and we don’t know exactly how they are being used.
I would make two comments. One, auditing algorithms is actually going to be very challenging. Even asking a few computer scientists to evaluate code written by another computer scientist is not as trivial as it sounds. You would have to ask for a very high degree of standardization across multiple industries to enable that form of auditing. It’s non-trivial.
The second comment is probably as important, if not more important: to some extent, the underlying models that are built are really built on the training data that is put into these systems. The training data, while large, is a finite set of data that could be examined, or audited if you will, to ensure, for example, that there’s no bias, that some inherent racism or sexism hasn’t been introduced. So there are certain sorts of biases that can be audited in a relatively manageable fashion.
So it would not surprise me if AI systems suddenly become responsible, for example, for setting the sentences of criminal defendants. Then you will need to audit the data that builds the underlying models, because you may find that, by using historical data, the system ends up recommending longer sentences for people of color than for others. So an auditing mechanism for the data itself would be something interesting to consider.
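As a minimal sketch of what auditing training data for bias could look like in practice, the example below compares average sentencing outcomes across groups in a toy historical dataset. The column names, group labels, and the 25% disparity tolerance are illustrative assumptions, not a proposed standard.

```python
# A minimal sketch of auditing training data for group-level bias before a
# model is ever trained. The data and threshold are illustrative assumptions.
import pandas as pd

# Historical sentencing records (toy data standing in for a real dataset).
records = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "offense": ["theft", "theft", "fraud", "theft",
                "theft", "fraud", "theft", "fraud"],
    "sentence_months": [6, 8, 12, 14, 16, 20, 12, 10],
})

# Compare average outcomes per group, split by offense type.
audit = records.groupby(["offense", "group"])["sentence_months"].mean().unstack()
audit["disparity_ratio"] = audit.max(axis=1) / audit.min(axis=1)
print(audit)

# Flag offense types where one group's average sentence exceeds another's
# by more than an (assumed) 25% tolerance -- a signal the data needs review.
flagged = audit[audit["disparity_ratio"] > 1.25]
print("Needs review:\n", flagged)
```

A real audit would control for many more variables, but the point stands: the training data is finite and inspectable in a way the trained model often is not.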
I think that, as the field evolves, it’ll be interesting to see where those decisions land. A lot of whether such auditing is needed will depend on just how much autonomy we’re giving these systems. Are they systems that enhance the intellect of an individual to make a decision, or systems that start making decisions on their own, either because they’re programmed to do that or because individuals start relying on them so much that they allow them to do that? Depending on that circumstance, one may need more formality in these areas.
Guy Nadivi: What kinds of skills does SRI covet the most when hiring engineers for automation, AI, & machine learning?
Manish Kothari: It’s a good question. The current race in the world is for engineers with a background in computer science and machine learning, and today an undergrad coming out of college is getting both of those basic skillsets. But it depends on what you’re talking about. AI and machine learning have many different steps. There is the very creative step of new algorithm creation, which is where SRI traditionally focuses, so we concentrate on hiring engineers with a high degree of creativity, often with Ph.D.s or Master’s degrees in these areas, which enable them to conceive of and create new algorithms.
For the bulk of AI hiring today, there is a real migration taking place. Over the last few years every industry was looking for AI engineers and data scientists, because you have to put those two hand-in-hand, with that very creative skillset. That is evolving. As AI itself matures, it is moving from needing extremely creative AI engineers to needing execution-driven AI engineers, and those are a different set. That’s actually a very nice evolution taking place in the marketplace as well.
Maybe I’ll elaborate on this with a little anecdote. Back when my great uncle was involved, almost 50 to 75 years ago, CNC machines, or computer numerically controlled machines, were starting to replace mills and lathes in manufacturing. These are machines you could program to do high-throughput manufacturing with extreme precision, without needing somebody with 20 years of lathe experience to be there. In the beginning, everybody who ran these machines needed a Ph.D. in that area of manufacturing, mechanical engineering for CNC manufacturing. My uncle was one of them. Fast-forward 20 years from then, and it went down to an undergrad level. Fast-forward to today, and somebody with a high school diploma and an appropriate one- to two-year apprenticeship can run a CNC machine very, very well. In fact, better than somebody with a Ph.D.
So I think we’re seeing a similar evolution in what’s required for AI. The first sign of this is what’s happening in colleges. It used to be that the Stanfords and MITs were the places where all the AI work was happening. It has migrated now to every college offering it, and it has migrated to undergrads being able to do this work in college. And finally, for things like the data processing and data entry needed to build the underlying models, I would not be surprised if at some point we even see technical degrees in those areas.
Guy Nadivi: Which industries, Manish, do you think stand to be disrupted the most from automation, AI, and machine learning?
Manish Kothari: Look, all industries are going to face the impact of it. All of them are in the process of being disrupted, if they haven’t been disrupted already. Take something like shopping and Amazon: today Amazon is using artificial intelligence-based and machine learning-based techniques in everything they do, from giving you suggestions of what to buy in the marketplace to thinking through supply and delivery. And when you think about automation and AI, it’s important to broaden the thought to include IoT and distributed sensing.
Once you have distributed sensing taking place, and distributed computing, and machine learning thrown in, pretty much every industry is affected: warehousing, logistics, manufacturing, just-in-time manufacturing, education, all of them. But the place that stands to see the most disruption from these areas is really healthcare, because if you look at areas that have not adopted automation significantly, healthcare is one where there’s still a lot of room for automation. Take something in contrast, let’s say farming corn. Farming corn has been automated for a long time; one person can handle thousands of acres. I don’t think AI or automation is going to disrupt that in a tremendous way. Healthcare, however, is maybe a few percent automated today. We should expect to see that change.
In the immediate future, areas like cybersecurity, network operations, and security operations are all being transformed today. And like I said, the democratization of capabilities, so that more and more companies can adopt better and better security policies and network operation policies by taking advantage of it, those are all great opportunities today.
Guy Nadivi: Do you see any economic, legal, or political headwinds that could slow adoption of these advanced technologies, or are automation, AI, machine learning, and others basically runaway trains that can’t be stopped at this point?
Manish Kothari: They are moving very quickly, but I do think we have a strong ethical responsibility to think about how these techniques can be used. Many groups do: Harvard has a responsible AI program, and a couple of other universities do too. SRI has a design program that’s thinking about how to adopt these techniques. Because we’re going to be working symbiotically with these methods, we’re going to have to put these frameworks in place, and some of them are already being established. I think there’s a lot of risk here too, in the sense of how you apply these techniques. Even the same solution, when applied to a country like India or sub-Saharan Africa versus Japan or Germany, looks very different.
And maybe I’ll give an example here. Let’s say you have an automated robotic welding system that uses artificial intelligence to think through how to do an optimal weld. If you’re in a place like Germany or Japan, you’re looking to automate these processes, because you have a shortage of manpower. If you’re in India or sub-Saharan Africa, you’re actually looking to use these techniques to train new workers, because you have a surplus of manpower and the economic argument probably favors that approach, democratizing the quality of work that’s available to people.
So there really are different objective functions in different locations, and one needs to think about that. Just for historical context, I spent half my childhood in India and half my childhood in Australia. Australia is a country that was, is, and almost certainly always will be facing a labor shortage. India is a country that had, has, and almost certainly in the near future will have, a labor surplus. So you have to apply these productivity techniques differently, and you have to apply the economic, legal, and political approaches appropriately.
That said, the productivity benefits from AI, if done right, are so strong that while there may be some headwinds to adoption, businesses will be quick to adopt these techniques.
Guy Nadivi: Manish, for the IT executives out there listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regards to implementing automation, AI, and machine learning?
Manish Kothari: Yeah, that’s a great question. At SRI, through our own research and through partnerships with other companies, we have seen in many places rapid initial adoption of AI systems, but poor stickiness of these techniques afterward. Partly that’s because AI technology itself is still evolving and getting better, but part of it is a lack of understanding that design plays a very, very big role. If you go back to the early days of IDEO, IDEO walked in and established the de facto approach to user-centered design that has permeated the entire ecosystem, and today no good piece of conventional software is designed without the user-centered approach in mind. AI is bringing about a new set of challenges that cannot be addressed simply with the original user-centered design approach.
If you’re an executive out there, you should be asking yourself not just how do I adopt these techniques, but how do I really get stickiness of these techniques in my workforce? For that, the one thing you want to think about is getting a design team together, not just an engineering team, focusing in on it, and understanding the unique design requirements that enable successful implementation of AI systems. SRI has always been at the forefront of design, from the mouse example you gave to Doug Engelbart’s Mother of All Demos with user interfaces, over 50 years ago now. This is something we feel very passionate about, and the data really supports that if you don’t think about the unique aspects of design related to AI, you are likely to have some challenges, not necessarily in adoption, but in stickiness. Get that right, and I think you’ve got a chance of both democratizing the abilities you have within your organization and enhancing productivity significantly.
Guy Nadivi: All right. Well, it looks like that’s all the time we have for this episode of Intelligent Automation Radio. Manish, it’s really been a treat having you on the show so we could learn from a genuine expert with deep insight into the state of AI and machine learning today.
Manish Kothari: Thanks a lot Guy, it was a pleasure to be here.
Guy Nadivi: Manish Kothari, President of SRI International. Thank you for listening everyone, and remember, don’t hesitate, automate.
MANISH KOTHARI
President of SRI International
As president of SRI International, Manish Kothari, Ph.D., jointly develops SRI strategy with the CEO and ensures that corporate resources and talent are aligned with supporting SRI’s research needs. In this role, he oversees SRI Ventures, Global Partnerships, Marketing and Communications, and SRI’s Japan office. He holds multiple patents and is the author or co-author of several peer-reviewed publications and book chapters.
Kothari joined SRI in 2013 as a business development consultant and entrepreneur-in-residence. He became a program director in the Robotics Program in 2013. In 2014, he moved to SRI Ventures as director of commercial ventures and licensing, with an emphasis on healthcare, engineering, and physical sciences. He was named president of SRI Ventures in 2015 and has been leading the venture creation process from concept through market development, team building, technology transition, seed funding, partner and investor identification, and launch.
Manish Kothari can be found at:
E-Mail: manish.kothari@sri.com
LinkedIn: https://www.linkedin.com/in/manish-kothari-5458453/