Episode #52: Chatbots Aren’t Human, So Don’t Expect People To Pretend They Are

Guy Nadivi
15 min read · Nov 2, 2020

( Click here to listen to this podcast episode )

Is conversational AI all it’s cracked up to be, or is hype eclipsing hope when it comes to deliverables? Has the gap between expectations and reality grown so wide that disappointment is inevitable, both for end-users and enterprise decision-makers? Or have the majority of chatbot vendors simply been targeting the wrong use cases, inadvertently leading their customers to insurmountable dead ends?

One man with a clear-eyed vision of the market opportunity, uncluttered by misconceptions about the technology’s potential, is Kevin Collins, Founder & CEO of Charli.ai. Following GE Digital’s acquisition of his IoT company Bit Stew, Kevin set out to build a personal AI Chief-of-Staff front-ended by a chatbot. With Charli.ai recently emerging from stealth mode, Kevin joins us on the podcast to explain why, despite expert predictions about conversational AI’s advances falling short, he’s still enthusiastic about the technology; why front-end conversational interactions must never exceed back-end automation capabilities; and how CIOs should approach conversational AI implementations.

Guy Nadivi: Welcome, everyone. My name is Guy Nadivi, and I’m the host of Intelligent Automation Radio. Our guest on today’s episode is Kevin Collins, CEO of Charli.ai, which recently emerged from stealth mode to provide intelligent automation for everyday tasks in the digital workplace. Kevin and his partner, Alex Clark, previously co-founded and built a software company that was eventually acquired by GE Digital. Charli.ai then became their next project, which they began for the very personal reason that they were tired of doing their own administrative tasks. What they plan to deliver to the marketplace is a tool that leverages automation, AI, machine learning, and chat to handle that administrivia, and as a bonus, requires zero coding or any technical skills to use or configure. That’s intriguing. So, we’ve asked Kevin Collins to come out of stealth mode himself and join us on the podcast to discuss Charli.ai and the overall state of the conversational AI market. Kevin, welcome to Intelligent Automation Radio.

Kevin Collins: Oh, hi, Guy. Thank you for having me on the radio.

Guy Nadivi: Kevin, please tell us a bit about your background and what led you to launch Charli.ai, which I’ve read has been touted as a conversational AI chief of staff.

Kevin Collins: Yes. That’s a good description of where we want Charli to go. My background has been 30 years in the high-tech sector, and as you mentioned, Alex and I had started a company called Bit Stew that eventually got sold to GE Digital back in 2016. Bit Stew had a lot of AI in how we were doing integration. That AI capability is what I fell in love with, and it’s something I wanted to bring into the world of Charli to really get rid of a lot of the pain that I go through on a day-to-day basis just doing the administration work.

Guy Nadivi: Kevin, in 2019, Business Insider estimated the worldwide chatbot market was worth a bit more than $2.5 billion, but they forecast that, by 2024, it would approach $10 billion. That’s a compound annual growth rate of over 29%. How do you think the COVID-19 crisis will affect this growth rate?

Kevin Collins: That’s a good question, and COVID-19 has impacted quite a bit from what we have seen just in the past few months alone. It’s created a lot of uncertainty in the market. It’s an area where we believe that uncertainty will continue at least for the near-term future, and something that we at Charli are trying to navigate. But I do believe that COVID-19 will have a positive impact on that growth rate. If you look at people’s attitudes toward working from home and remote work, that’s putting more pressure on software companies to automate. Automation is going to come with heavy AI involvement. It’s also going to come with the easier interfaces that people are looking for to work with their software. Instead of being in the office, where they’ve got more tolerance for the enterprise way of doing work, they’re going to want simpler ways of working with their software, and this is where I believe that chatbot innovation is going to come from and where that investment into chatbot technologies is going to go, especially when you’re looking at the enterprise and the corporate world.
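For readers who want to sanity-check the growth rate Guy quoted, compound annual growth rate is just the annualized ratio of the ending value to the starting value. Here is a minimal back-of-the-envelope sketch in Python; the exact start and end figures are assumptions consistent with “a bit more than $2.5 billion” in 2019 and “approach $10 billion” by 2024.

```python
# Back-of-the-envelope CAGR check for the chatbot market figures quoted above.
# The start/end values are assumptions consistent with the rounded numbers
# mentioned in the question, not Business Insider's exact figures.
start_value = 2.6e9   # assumed 2019 market size, USD
end_value = 9.4e9     # assumed 2024 market size, USD
years = 5             # 2019 -> 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints roughly 29%
```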

Guy Nadivi: Okay. Now, speaking of chatbots, according to Gartner, there are 1,500 chatbot providers currently. Inevitably, there is going to be a shakeout that will cause that population to decrease precipitously, I believe. At the TechExit.io Conference earlier this year, you and Hans Knapp of Yaletown Partners had a discussion about creating barriers to entry for competitors as one strategy to preserve your market position. What barriers to entry can Charli.ai erect to defend itself against 1,500 others competing in the same market?

Kevin Collins: I completely agree. I believe the entire market for chatbots is ripe for consolidation. There are far too many of these technologies out there. There’s also a massive amount of hype that’s gone into the chatbot area, and people are now realizing that the reality of chatbots will only take you so far. They’re really, for us, a channel into the intelligence within the systems we have. If I look at Charli’s approach to chatbots, we want the chatbot to be an input into Charli, but we need Charli to automate all of those administration functions that need to happen, and the automation needs far more intelligence than just a chatbot. We use the chatbot for that natural language processing, but we don’t use the chatbot for natural language understanding and for translating that understanding into action, and that’s where the real intelligence in Charli is. So, we’re building defensible technology into how we automate those tasks for that chief-of-staff function you mentioned earlier, and that requires heavy lifting from an AI perspective. Far more than what we would see just on the chatbot, which only needs to do natural language processing. We really have to do natural language understanding and translate that into action. So, two questions there. Will the chatbots consolidate? Definitely. I believe there’s far too much out there, and it’s a natural consolidation. For us, on the defensible side of it, it’s more than just a chatbot. It’s now understanding and translating that into action.
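To make the distinction Kevin draws a little more concrete, here is a minimal, hypothetical sketch of that separation: a thin chatbot layer that only extracts an intent from the user’s text, and a separate automation layer that maps intents onto back-end workflows and falls back to a clarification prompt when it isn’t confident. The intent names, keyword matching, and actions below are purely illustrative assumptions, not Charli.ai’s implementation.

```python
# Hypothetical illustration of "chatbot as channel, intelligence behind it":
# the NLP front end only extracts an intent; a separate automation layer
# decides what to do with it. Intent names, keywords, and actions are all
# invented for illustration -- this is not Charli.ai's implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Intent:
    name: str
    confidence: float
    slots: Dict[str, str] = field(default_factory=dict)

def parse_intent(utterance: str) -> Intent:
    """Stand-in for the chatbot's natural language processing (keyword matching here)."""
    text = utterance.lower()
    if "expense" in text:
        return Intent("file_expense", 0.9, {"raw_text": utterance})
    if "schedule" in text or "meeting" in text:
        return Intent("schedule_meeting", 0.8, {"raw_text": utterance})
    return Intent("unknown", 0.0, {"raw_text": utterance})

# The automation layer: each intent maps to a back-end workflow. Anything the
# back end cannot automate falls through to a clarification prompt instead of
# pretending the conversation can go anywhere.
ACTIONS: Dict[str, Callable[[Intent], str]] = {
    "file_expense": lambda i: "Started the expense-filing workflow.",
    "schedule_meeting": lambda i: "Looking for open calendar slots.",
}

def handle(utterance: str) -> str:
    intent = parse_intent(utterance)
    action = ACTIONS.get(intent.name)
    if action is None or intent.confidence < 0.5:
        return "I'm not sure what you need yet. Could you clarify?"
    return action(intent)

print(handle("Can you file my expense report?"))   # -> expense-filing workflow
print(handle("What's the weather like?"))          # -> clarification prompt
```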

Guy Nadivi: Sticking with Gartner, in 2017 they predicted that “By 2020, 40% of all mobile interactions will be via virtual assistance.” Virtual assistance, of course, being a type of smart chatbot. Clearly, we’ve fallen short of that forecast. So, let me ask you: when do you predict we’ll reach peak app, as it were, and start transitioning to conversational AI as the predominant user interface going forward?

Kevin Collins: We definitely have fallen short. I think the reason for that is a reality check. This is hard. It’s really hard. There’s a combination of not just that conversational ability; there’s also translating that into action, as I mentioned previously. And then you also have to have a conversation, which is a behavior change for a lot of people. I think the virtual assistants we see today are simple, one-task tools. You ask it to do something. It’s very simple. You spoon-feed it. You get the task done. But for a human to have a full conversation with their computer is a lot different and a lot harder. It’s easy to tackle low-lying fruit opportunities around customer support: walking a user through a particular scenario and then manually following up with it at the end. It’s a completely different challenge to have a human converse with a computer, and for that computer to completely automate what the human wants to do and have a full-on interaction. I believe we’re going to take some pretty key baby steps on really having the human converse with the computer in order to get an action done, rather than getting to full conversational AI, which is going to take many years to happen.

Guy Nadivi: You mentioned low-lying fruit. So, let me ask you in broad terms, what are some of the lowest hanging fruit best suited for conversational AI applications within an organization?

Kevin Collins: Some of the ones that we’ve seen today that I think are perfect for it are customer service and customer support. Even if I look at it for me personally, I much prefer to chat through my support issue, really to understand the frequently asked questions and answers, or to walk me through getting a refund on a purchase I had made. The flow for that is fairly predictable, and I would prefer to chat with a computer over chatting with a person to get that done. That is a low-lying fruit opportunity for people to address, and it does take a big burden off of corporates and enterprises that have to invest heavily in their customer support.

Other areas that we see as prime opportunities for this are areas that Charli is targeting as well, and that’s the administration side. We’re finding people spending 20% to 50% of their day just on the minutiae of administration, and that’s distracting them from a lot of the work they have to get done that’s of real value. We can spend a lot of time on automating that and putting a chatbot in front of it. So, the conversational ability to really instruct the computer to get administration done is another low-lying fruit opportunity. One that we certainly want to jump on.

Guy Nadivi: Now, there are some concerns about biases, intentional or otherwise, creeping into the AI that powers things like chatbots. Harvard Business School published an article not too long ago calling for the auditing of algorithms in order to root out bias, the same way companies are required to issue audited financial statements. Kevin, do you think AI algorithms should be audited in the same way financial statements are for publicly traded firms?

Kevin Collins: Very interesting question. When I hear audit and I hear regulations, the hair stands up on the back of my neck right away. I think that’s just a natural reaction. As a CEO, I understand the need for regulation. I definitely understand the need for audit. It’s a bit of a balance between the innovation you want to see and getting into that regulatory red tape. Biases are very real, and biases get introduced by the data scientists that put the models together. They get introduced by the training data that’s going into these models, and those biases are very real, similar to the biases you might have in an organization just dealing with people, and you have to ensure that diversity gets introduced into your data science and into your AI. It’s one of the key areas that we love about the AI we’re innovating at Charli, because we have to test, and we have to test for the biases, and testing becomes an automated routine for how the AI gets deployed, because we want to always act in the best interest of our users and our audience. That means we need diversity in the training sets. We need diversity in the models. When you’re getting into auditing of the algorithms, I feel it’s far more important for the auditors to look at how these models are tested, continuously tested, and implemented in order to avoid the introduction of bias, rather than just auditing the algorithms themselves. I don’t think that’s a fair approach. I believe it’s far better to continuously test these models as they’re operating and being trained.

Guy Nadivi: Given the current state-of-the-art, what do you think are some of the most unrealistic expectations currently plaguing the field of conversational AI?

Kevin Collins: I believe the biggest missed expectation around AI that we’re seeing is that users, from a behavior perspective, don’t want to converse with their computer. That’s a big one, and a big highlight for me recently has been watching how users want to interact with their computer, either on their mobile device or through their laptop and desktop. Conversational interaction with a computer or a piece of software just isn’t natural. Humans expect the computer to behave like a human, and we’re nowhere close to that today. There are a lot of nuances in how human beings interact and how they ask for things. There’s a lot of expectation that the computer is completely automated behind the scenes. So, natural language processing, that conversational piece, is really just the front end. Then there’s a missed expectation about translating that into intents and having it fully automated by the computer. It’s just not there. That automation can’t match what the user expectation is, so when you get into conversing, there are a lot of edge cases. There are a lot of failure scenarios, and we end up spending a lot of time addressing the failure scenarios, the error conditions. We also spend a lot of time putting up guard rails to guide the user conversation. What we’ve had to do is take a step back from full conversational AI and recognize that maybe the user just wants to make a request and then get some clarification on it, rather than have a conversation. That’s how we avoid the unrealistic expectations we’re seeing. We don’t have the cognitive ability in AI to match what the consumers and the users are looking for today.

Guy Nadivi: Speaking of expectations, in technology, to paraphrase the economist, Rüdiger Dornbusch, things take longer to happen than you think they will, but then they happen faster than you thought they could. Kevin, what are some of your predictions for conversational AI over the next three to five years?

Kevin Collins: Things certainly do take longer to happen than you think they will. So, looking forward, I get impatient, and it certainly can take a longer time than what I’m anticipating. But in hindsight it’s, “Oh, that actually went pretty quick.” And that’s just, I believe, human nature. It is going to take longer to get conversational AI where we need it to be. If I look at where we have to invest time, I think it’s stepping back from a full-on conversation with an AI to these request, response, and clarification elements that can happen just with language understanding.

I believe the other area where innovation needs to happen, beyond conversational AI itself, is the automation to support the conversations or the interaction, and that automation is where I believe the innovation is going to be in the next three to five years. It’s going to be in no-code solutions. It’s going to be in the ability to have these models trained such that the user gets more out of the conversation through the automation, rather than getting frustrated. We need to get the automation matching what the natural language processing can do, and that requires more than just the scripting and the coding that happens today. So, a lot more around this no-code capability, a lot more around the continuous training of the models and tweaking of the models to match the expectation.

Guy Nadivi: Interesting. Last year, there was an article in MIT Technology Review about Artificial General Intelligence or AGI. In that piece, the author, Karen Hao, who’s been on our podcast, wrote, “There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist. It’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm. Deep learning, the current dominant technique in AI, won’t be enough.” Kevin, what do you and your team at Charli.ai think it will take to achieve AGI?

Kevin Collins: Really pertinent question today. This one has come up a number of times, especially now that Charli is very focused on the AI technology that we’re doing. Our belief is that AGI is a long way off. You’ll see in the various studies that it can be anywhere from 10 to 40 or 50 years, depending on who you ask. I do believe it’s a 20- to 30-year journey before we see the massive innovation that’s needed for AGI. But the other side of us goes, “Who cares?” There’s a lot of brilliant technology in the AI world today, and that’s what needs to be leveraged. If we’re looking at it from a corporate and an enterprise perspective, AGI is coming at some point with new algorithms, but the reality of what we have today is brilliant. We’ve got deep learning, we’ve got machine learning, and I believe that the paradigm shift we do need is more around the scale and the assembly. We’re completely missing that in the world of AI. You have to be able to scale your AI, not just to perform, but in the sense that models have to work for the individual, as well as the corporation, as well as an industry. You have to scale that because you have to apply context, and context awareness is one of the keys that we needed within Charli. How do we achieve context awareness at scale? The other part of it becomes assembly, and assembly of the models is a critical challenge. This is why I believe the paradigm shift is scale and assembly: you need to bring context, and you need to bring continuous learning and continuous testing of those models. You also have to assemble those models, because the decision-making isn’t just a machine learning algorithm. The decision-making becomes a collaboration of various models that take in various inputs and resolve conflicts in order to take action. This is why I believe that paradigm shift is all about scale and assembly. That’s what we need. Regardless of new models and methods that may come, scale and assembly is still a massive problem that needs to be solved today.
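To give one concrete, deliberately simplified reading of “assembly,” here is a hypothetical sketch in which several narrow models each propose an action for the same context and a simple conflict-resolution rule decides what to do. The model names, the voting rule, and the example context are all illustrative assumptions; this is not how Charli.ai actually assembles its models.

```python
# Hypothetical sketch of assembling several narrow models into one decision.
# Each "model" proposes an action for the same context; a simple majority vote
# resolves conflicts, with ties falling back to human review. Purely
# illustrative -- not Charli.ai's actual architecture.
from collections import Counter
from typing import Callable, Dict, List

Model = Callable[[Dict[str, str]], str]   # maps a context to a proposed action

def by_sender(ctx: Dict[str, str]) -> str:
    return "archive" if ctx.get("sender") == "newsletter" else "review"

def by_keywords(ctx: Dict[str, str]) -> str:
    return "review" if "invoice" in ctx.get("subject", "").lower() else "archive"

def by_history(ctx: Dict[str, str]) -> str:
    return ctx.get("usual_action", "review")

def assemble(models: List[Model], ctx: Dict[str, str]) -> str:
    """Collect each model's proposal and resolve conflicts by majority vote."""
    votes = Counter(model(ctx) for model in models)
    action, count = votes.most_common(1)[0]
    return action if count > len(models) / 2 else "review"

ctx = {"sender": "newsletter", "subject": "Your invoice is ready", "usual_action": "archive"}
print(assemble([by_sender, by_keywords, by_history], ctx))   # -> "archive" (2 of 3 votes)
```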

Guy Nadivi: Kevin, for the CEOs, CTOs, and other IT executives listening in, what is the one big must-have piece of advice you’d like them to take away from our discussion with regards to introducing new technology that leverages conversational AI and, in particular, what change management considerations do you think they should keep in mind?

Kevin Collins: Fantastic question. We’ve actually had the benefit of comparing and contrasting various enterprise organizations that we work with, to see how they introduce this and what the success or failure rate has been. I would say there’s a high failure rate if you’re expecting too much from the conversational UX, and expecting that you can go and converse on any number of things, simply because your automation on the back end cannot match the user’s expectations of the conversation. I think the biggest advice that CIOs need to take away from this is that you need to go into this with eyes wide open, tailor the chatbot experience or that conversational input to what you can automate on the back end, and make sure that you’re keeping the user’s interaction with your software guided. But the other big thing is that I do believe this conversational AI element is where the future is going. We want software to work with the user far better than it does today. We don’t want the user to have to learn the software. We want the software to learn about the user. So, there has to be a big innovation and a big investment in conversational AI, but take these baby steps. Make sure that you are allowing the user to interact with the computer in a way that you can automate, so the user doesn’t get frustrated. You don’t want to just say, “I’m going to do conversational AI and put in a chatbot.” There’s a significant investment in scripting how the user flow needs to go. Similar to how you had to build up the UX, or the user experience, with your web-based interface, you’re going to have to invest in how you guide the user through their conversational flows, and that is an area of innovation that CIOs really need to look at.
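As a rough illustration of what “scripting the user flow” and keeping the interaction guided can look like in practice, here is a minimal, hypothetical sketch of a guard-railed refund flow expressed as a small state machine: the user is walked through steps the back end can actually automate instead of being left in an open-ended conversation. The flow, prompts, and answers are invented for illustration only.

```python
# Hypothetical guard-railed conversation: the dialog is scripted as a tiny
# state machine so the user is guided through steps the back end can automate.
# The refund flow, prompts, and canned answers below are invented examples.
from typing import Dict, List

REFUND_FLOW: Dict[str, Dict[str, str]] = {
    "start":   {"prompt": "What is your order number?",                  "next": "reason"},
    "reason":  {"prompt": "Why are you returning the item?",             "next": "confirm"},
    "confirm": {"prompt": "Should I submit the refund request? (yes/no)", "next": "done"},
}

def run_flow(flow: Dict[str, Dict[str, str]], answers: List[str]) -> Dict[str, str]:
    """Walk the scripted flow, pairing each prompt with the user's answer."""
    state, collected = "start", {}
    for answer in answers:
        step = flow[state]
        print(f"{step['prompt']}  ->  {answer}")
        collected[state] = answer
        state = step["next"]
        if state == "done":
            break
    return collected   # everything the back end needs to file the refund

print(run_flow(REFUND_FLOW, ["#12345", "Arrived damaged", "yes"]))
```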

Guy Nadivi: All right. Well, it looks like that’s all the time we have for this episode of Intelligent Automation Radio. Kevin, thank you very much for joining us today and sharing your thoughts about the current state of conversational AI. We’ve really enjoyed having you as a guest.

Kevin Collins: Well, thanks, Guy. I really appreciate it. These have been fantastic questions. Obviously, it’s a topic we’re pretty excited about, but I’d love to follow up if there are any follow-on questions from folks.

Guy Nadivi: All right. Kevin Collins, CEO of Charli.ai. Be on the lookout for them as they get closer to general release. Thank you for listening, everyone. And remember, don’t hesitate, automate.

KEVIN COLLINS

CEO & Founder of Charli.ai.

Kevin is founder and CEO of Charli AI, a startup focused on helping busy workers get more life back in their work-life balance. As a serial entrepreneur and technology company founder, Kevin has experienced the explosion of productivity software and the tools we use to work, yet productivity is declining and people are more stressed than ever. Enter Charli, a novel conversational AI that eliminates productivity killers from your workday. Part workflow automation, organization wizard, and search engine — Charli simplifies some of your most time-consuming tasks.

Kevin brings more than 30 years of experience in architecting and designing software, both as a start-up entrepreneur and as a corporate executive. Kevin has extensive knowledge of artificial intelligence and machine learning. Before founding Charli, Kevin was CEO and Co-Founder at Bit Stew Systems, a data intelligence platform, which was acquired by GE Digital for its AI and ML capabilities in 2016. Prior to his time in Silicon Valley, Kevin worked in the high-tech networking and security field, and led technology firms specializing in cryptography, public key infrastructures, and high-performance, scalable networks. As a second-time founder, Kevin is passionate about sharing his expertise in building successful startups.

Kevin can be reached at:

Website: www.charli.ai

Twitter: @charliai

Newsletter: https://charliai.substack.com/p/hey-were-new-here

LinkedIn: https://www.linkedin.com/company/charliai/

Facebook: https://www.facebook.com/charliai
