
Episode 125

Nikola Mrkšić - Winning back trust in the time of AI

Posted on: 29 Feb 2024

About

Nikola Mrkšić is one of the masterminds behind Apple's digital assistant Siri, as well as the CEO and co-founder of PolyAI.

In this episode, we speak about winning (back) trust in the age of artificial intelligence. We discuss the importance of trust for effective customer relationships, and we look at the bigger picture regarding the overabundance and hyper-refinement of AI-generated content in a space where standards and regulations are not yet well established.


Transcript

"There's a bit of a discontinuity between the public discourse and what people are actually willing to try and how they think about the whole thing, right? What I can tell you is, like, among our customers, you know, when we surveyed people around, like, generative AI, like, all we found is, like, a huge amount of optimism and people embracing and wanting to try things out to see how they can further improve their customer service."

Intro: Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, Content and Community Manager at Agiledrop.

Tim Butara: Hello everyone. Thanks for tuning in. Our guest today is Nikola Mrkšić, one of the masterminds behind Apple's digital assistant Siri, as well as the CEO and co-founder of PolyAI.

In today's episode, we'll be discussing the importance of trust in the age of AI, and we'll be focusing on how companies and organizations that are pursuing AI can establish and build back trust.

Nikola, welcome to the show. We're really happy to have you with us today. Anything to add before we dive into our discussion?

Nikola Mrkšić: No, no, no. Thank you for having me.

Tim Butara: Awesome. So, as we were just talking about before we started recording, I think that this is a great topic, both from the business perspective and from the cultural, global perspective. And I want to start our discussion by asking you why we're talking about winning back trust, rather than just, you know, establishing trust or something like that.

Have people lost trust in organizations and companies that are pursuing AI? And if so, why?

Nikola Mrkšić: Yeah. I think that there's like many lenses to this, right? Like when we think about AI in general, like it means a lot of things to a lot of people in different product categories. And, you know, there are places where we've had AI powered solutions for a long time that have just like not worked well.

And then we have places where it's new and people just don't know what to expect. And then there's the whole overlay of data security and what you're really getting when you, you know, use someone's data and how they feel about interacting with something automated.

And then there's like a whole anthropological argument around like interacting with AI, like, do people know it's AI? How do they feel about the whole thing?

So in terms of winning back trust, I think, you know, insofar as what we do, which is build voice assistants for enterprise, for customer service, right. I think there, especially in America, where people have deployed voice IVR to a much greater extent than in Europe, that's where we, you know, have people that have been frustrated with this for a long time, right?

Like, you pick up the phone, it's something automated, and it's like, what's this call about? And you say something, you're not understood, you get frustrated. And then, you know, you try to get through, you try to either circumvent the automated thing and get to a human, or you try to do something with the AI, but it doesn't work. Your frustration builds, you feel like you're a second-tier citizen for having to go through that interaction. And at the end of it, you know, you get a whole nation very frustrated with voice automation and very unwilling to give it a second chance.

Tim Butara: That makes sense. Yeah. So maybe if we focus first on those companies and organizations that are developing AI solutions, how can they ensure that, you know, trust is a priority during this process of innovation?

Nikola Mrkšić: Yeah. I mean, look, I think it's like the greater, like, philosophical aspect of how you even go about these things, where a key aspect of trust for a voice assistant is: do you know it's a voice assistant?

And that's like slippery, right? Because if you kick up too much of a fuss around the fact that it's automated, then, you know, something's wrong. You're going into a call and you're like, oh my God, I've just heard so much about it that I can just tell that this is going to be a really, really bad experience. Right. In which case, you know, you probably don't want to do that, because, like, you're not going to get a good performance out of it. You might as well not provide an automated experience.

So I think that's like a big piece of it, right? Like, what we try to do is get to the point where we have systems that will escalate if they don't work, right. They can detect frustration, or they can tell that they're simply not making progress through a conversation.

And if that's the case, well, then they'll just hand off. And then, you know, if as a user, as an end user, you've interacted with it and it didn't really work, like, if you didn't spend 40 seconds trying to get through, like, you're not going to hate that experience that much. You might just give it a go next time. Right.

Whereas if you spend like 30 seconds screaming "agent!" until you're, like, handed off, then you've lost additional trust and you've made it much more unlikely that in the future people give you a serious shot.
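(As a concrete illustration of that escalate-early behavior, here is a minimal sketch of the kind of hand-off policy being described; the names, cues, and thresholds are hypothetical, not PolyAI's actual implementation.)

```python
from dataclasses import dataclass, field

@dataclass
class CallState:
    turns: list = field(default_factory=list)  # transcribed caller turns so far
    failed_parses: int = 0                     # consecutive turns the system couldn't understand
    seconds_elapsed: float = 0.0               # time spent in the automated flow

# Phrases that signal the caller wants out of the automated flow.
FRUSTRATION_CUES = {"agent", "representative", "human", "operator"}

def should_escalate(state: CallState) -> bool:
    """Hand off to a human rather than make the caller fight the system."""
    last_turn = state.turns[-1].lower() if state.turns else ""
    # 1. Explicit frustration: the caller is asking for a person.
    if any(cue in last_turn for cue in FRUSTRATION_CUES):
        return True
    # 2. No progress: repeated misunderstandings in a row.
    if state.failed_parses >= 2:
        return True
    # 3. Time budget: nobody should spend 40 seconds getting nowhere.
    if state.seconds_elapsed > 30 and state.failed_parses > 0:
        return True
    return False

# A caller who has been misunderstood twice gets a human immediately.
print(should_escalate(CallState(turns=["no, I said my BOOKING number"],
                                failed_parses=2, seconds_elapsed=25)))  # True
```

(The point of the failure and time budgets is exactly the one made above: the faster the hand-off when things go wrong, the more willing the caller is to give the assistant another shot next time.)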

Tim Butara: So it's, like, one aspect is transparency, obviously, and kind of just, you know, developing all of this responsibly, but also like the experience itself, the functionality, the value that it brings.

Nikola Mrkšić: Yep. Yeah, absolutely.

Tim Butara: Okay. So now if we take a look at the other aspect, so now you're focusing more on AI innovation, but what about companies that want to, you know, adopt existing AI tools, implement AI, incorporate AI into their existing processes, into their existing customer experiences and stuff like that? How can they establish trust with their customers?

Nikola Mrkšić: What are you thinking of? What kind of tools?

Tim Butara: Well, tools like, I don't know, ChatGPT, you know, generative AI, the buzzwords, so... or just, you know, any kind of, you know, just one example, if we take a look at maybe... but we'll return to that later.

But if you just allude to, like, the bigger picture of using any kind of algorithm or any kind of thing like that: in the past five years or so, ever since the public became more aware of all this, there's been kind of a, not a loss of trust, but trust has become more important, because there's been greater awareness and more discussion regarding all of it. So, just, like, seemingly benign AI implementations.

Nikola Mrkšić: Yeah. I mean, look, I think there's a bit of a discontinuity between the public discourse and what people are actually willing to try and how they think about the whole thing. Right.

What I can tell you is, like, among our customers, you know, when we surveyed people around, like, generative AI, like, all we found is, like, a huge amount of optimism and people embracing and wanting to try things out to see how they can further improve their customer service. Now, granted, these are people that have already rolled out, like, very sophisticated AI solutions with us and are just, like, saying, hey, give us more. So sample bias there is definitely a thing, right?

But even in conversations with a wide selection of prospects, and you know, we have pretty good stats around this, people are keen, right? So I would say that trust as a whole, it's not the first thing they ask about, right? Because for a lot of these companies, they've already, like, built processes around, like, you know, are we trustworthy, are we, like, doing things the right way, are we handling your data with care, are we recording calls if we have to, are we keeping them in the right place?

So that, like, when it gets to the point of, like, using AI, I wouldn't say that people need to win back trust in, like, ChatGPT, right? Like, it's a new category, right? So the enthusiasm is unbelievable from, like, everyone: buyers, vendors, I think, like, the world at large. And I think, like, it's ours to lose by doing things badly.

We've seen some examples. So recently, I don't know if you saw the DPD announcement, where basically they reached a point where the system started writing, like, haiku about how it's a bad company and stuff. And that was just because, like, the whole thing wasn't configured right.

So I think, you know, we'll see more examples of that. It's a very sharp tool. It's not hard to hurt yourself with LLMs if you don't do it carefully.

But I think if people put in effort and test the experience the way they would test any other UI that they're releasing for their customers, like, I think they actually have a chance to surprise and positively, you know, delight their customers rather than lose trust, right? As long as they're transparent about what they're putting in front of them.

'Cause, you know, if you're interacting with a brand and you need a complex answer to a question at 2 a.m., you're probably not expecting them to have a contact center. So if you are able to use an LLM almost as a search engine, and it works well and gives you a precise answer, and then you go and validate it, you know, as long as they're clear about that, I think, like, you know, we're just increasing the number of interfaces that they have to their customers, and that can't be a bad thing, right?
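(To make the "LLM almost as a search engine" pattern concrete, here is a rough sketch: answer only from retrieved brand content, label the answer as AI-generated, and link the sources so the customer can validate it. `search_kb` and `llm_complete` are hypothetical stubs, not any specific vendor's API.)

```python
def search_kb(question: str, top_k: int = 3) -> list:
    """Hypothetical stub: a real system would query the brand's help-centre index."""
    return [{"url": "https://example.com/help/returns",
             "text": "Items can be returned within 30 days of delivery."}]

def llm_complete(prompt: str) -> str:
    """Hypothetical stub: a real system would call a hosted LLM here."""
    return "You can return items within 30 days of delivery."

def answer_customer_question(question: str) -> str:
    docs = search_kb(question)
    context = "\n\n".join(d["text"] for d in docs)
    # Constrain the model to the retrieved content so answers stay grounded.
    prompt = ("Answer the customer's question using ONLY the context below. "
              "If the context doesn't contain the answer, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    answer = llm_complete(prompt)
    sources = ", ".join(d["url"] for d in docs)
    # Transparency: label the answer and give the customer a way to validate it.
    return f"{answer}\n\n(AI-generated answer. Sources: {sources})"

print(answer_customer_question("What's your return policy?"))
```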

Tim Butara: Yeah. It depends on the situation, I'd say, I guess.

Nikola Mrkšić: What do you think when you imagine like the bad scenarios?

Tim Butara: When I'm imagining the bad scenarios, like of there not being enough trust or like a bad scenario of a company...

Nikola Mrkšić: Like, what are the risk factors around trust to kind of, like, have in mind, so that maybe I can, like, double click on how they relate to our industry?

Tim Butara: Well, one thing that really comes to mind, if we look at, like, the bigger picture of AI and generative AI and all that, is stuff like AI-generated content and how trustworthy that can be. We've highlighted multiple times during the conversation that transparency is definitely a priority, right?

But for somebody who wants to, I don't know, gain traction or spread a certain idea or a certain thought through AI-generated content, I'd say that trust would not be a priority in that case. Because, you know, it would actually be detrimental to them to tell people beforehand that, hey, what you're reading, or the video that you're going to watch, is actually AI generated, a deepfake or something like that.

So what does trust look like in a world where basically any kind of digitally enabled experience can be produced by AI, can be generated by AI, and you don't really know? You know, maybe someone who's not as experienced, who's not as adept at all that, at distinguishing between those. What does that look like for someone like that?

Nikola Mrkšić: That's a really good point. So yeah, I think, like, you know, we've worked on making our voice assistants as human as possible. We found that that always just increased engagement and gave us a shot at, you know, a real conversation, where, you know, a more robotic-sounding voice assistant would just be something that people immediately assume won't work.

So they're like, you know, screaming for a human representative, even though the whole thing could be identical to ours in terms of understanding and capabilities and flow. So, yeah, I mean, obviously, then we have a lot of people who sometimes aren't sure or don't even care to ask if it's automated or not.

But then, yeah, the potential for abuse is huge, especially, you know, if you think about like, I mean, yeah, AI generated content online, propaganda, bots, etc, right? If it's outbound calls, I think you already see this happening at large. It didn't need... actually, I've always found it almost perversely impressive how good some of these things that do scam calling are, right?

Because they would record a human, and they don't even have dynamic behavior, right? Where they're basically just like, hey, is this Nikola? Cool. Oh, okay. Okay. I can hear you. Okay. Hey, whatever, like, we're calling 'cause we got a report you were in an accident.

And then, like, they just record the flow where, you know, people's blood pressure goes up, they start talking. And: okay, okay, let's clear it up, right? Like, so what's your date of birth? And, like, people just start saying things, and on the other side there's someone who probably reviews all calls that lasted for more than, like, 10 seconds and looks at whether they've been able to, like, scam someone out of a credit card.

So I think around 95% of what we do is inbound. And if it's inbound and you call a number that is verified, let's say it's, you know, 1-800, you know, company name for a customer service line. Well, then, you know, like if you built a good thing there, that's not really... you shouldn't lose trust there, right?

'Cause you, you presumably built something. You've been clear about what it is. You give them a notice of, you know, this call may be recorded for training purposes, and you'll in the first instance be answered by our AI-powered voice assistant, right? Like, that's not a big deal. That's just, like, a technological decision.
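(A tiny sketch of that up-front disclosure, assuming generic telephony hooks rather than any real API; the wording and function names are illustrative.)

```python
DISCLOSURE = ("This call may be recorded for training purposes. "
              "You'll first be answered by our AI-powered voice assistant. "
              "You can ask for an agent at any time.")

def handle_inbound_call(play, route_to_assistant):
    play(DISCLOSURE)        # be clear up front about what the caller is getting
    route_to_assistant()    # only then start the automated conversation

# Trivial stand-ins for real telephony hooks:
handle_inbound_call(play=print, route_to_assistant=lambda: print("<assistant answers>"))
```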

But I think all the other risks you're highlighting are there, right? And we're going to see a lot of it. I mean, when you look at online comments, it's getting pretty hard to know whether something's AI generated or not. Equally, you know, you look at, like, deepfakes and cloned voices.

You know, I... at one point we cloned my voice, and, you know, I've got, like, a Serbian accent in English, and, you know, if you know Serbian accents, you can tell that it's a Serbian accent, but no one's ever built, like, a text-to-speech model for it. Right. And, like, why would they?

So we cloned my voice, and they gave me 10 samples of me versus, like, the AI-cloned voice. I got 8 out of 10 wrong. And, like, that's pretty striking, and, like, I'm an edge case, right? I don't have, like, your typical accent. In the end, as I listened and listened again, 'cause it was shocking: 50-50, okay, amazing, right? But 80-20, getting it wrong more often than chance, is almost like, yo, what's going on? Right.

And then I realized that actually the base model, which was, like, an American English speaker, sounded better than I sound in English. That's why I thought it was real. But then, as I listened more carefully, like, you know, the Slavic bits, whatever, "d"s instead of "th"s and stuff, were a bit harder, and I could be like, no, that's me. Right. But, like, it's almost like an accent coach.

And, like, it took me a long time until I was like, okay, that has to be me. And, like, with how fast it's going, it won't be long before even that becomes, like, unnoticeable to the human ear. So yeah, trusting any content is about to get really hard.

Tim Butara: A lot of really great points here. And I think that everything that you mentioned here just highlights one of the main problems, and also answers the previous question of why we're talking about winning back trust. Because of all this hyper-innovation, and because of the speed of innovation and the abundance of AI-generated content, people are inherently less willing to trust in this era where there's so much of everything.

There's just an inherent connection there: like, okay, because we know the risks, we're aware of every risk that you've broken down here, that's why we need to be extra careful, like, in every situation, even when interacting with companies that we've been interacting with before.

And in those cases, you know, if we look at it from this kind of perspective, it would make sense why, even with established brands and established customer connections, they would still need to kind of reestablish trust, because everything is so volatile and kind of uncertain.

And as you just said at the end, right, things are moving so fast that, even if you do pinpoint some ways and approaches, like, okay, I can see that AI content tends to be really refined in ways that human content isn't. So, for example, the Slavic accent gets kind of polished up, kind of removed, and that's what you have to focus on. But, as you just said, as it develops further, even those distinctions will be harder and harder to make.

Nikola Mrkšić: Yeah. Yeah. I think, like, you know, the other thing basically is, can we get to the point where, you know, we have just, like, trusted mediums of accessing things? So, you know, like your blue tick on Twitter or, like, verified accounts on LinkedIn and stuff. Like, you know, maybe we'll always just have to get a bit better at, like, you know, encrypted communication and expecting a certain, like, level... you know, most of us are used to locking our houses. Right.

I was speaking to an Uber driver last night, and he was telling me how, you know, he wants to move to the UAE from London, because there you can leave your house unlocked, and, like, you know, maybe that's a nice thing. I remember, like, you know, my parents' generation's stories of communist Yugoslavia, where you could safely sleep on a bench.

I'm not really sure that you can no longer do that, but, you know, people are nostalgic about old times. But, you know, it's not so bad. Like, locking your house is not really, like, such a tall order. So I think, like, just in terms of, like, validating the counterparty, like, that's okay, right? Like, that's just a bit of, like, technical education. And you always have cybersecurity attacks, but I think we can, like, commoditize those things to the point where we know what we're using and what to expect in most situations.

Tim Butara: So with all this in mind and to kind of bring the conversation to a close, what are some notable recent wins regarding AI and trust?

Nikola Mrkšić: AI and trust. Well, I mean, I don't know. It's a difficult question, right? What I can tell you is that there are a lot of wins for AI, right? And its capabilities. And in terms of the general disbelief around AI being useful or good, I think those assumptions are being, like, shattered left and right. People now see how fast the whole thing is advancing and, you know, how much it can do to improve our quality of life, the quality of services we receive, everything. So I think that trust in its capabilities is growing really, really rapidly.

Trust in AI-powered products, well, as we discussed, I think that's, like, just a whole new field where we have a lot to discover. And I think there, you know, I don't know what the recent wins are. I think you have to look at it kind of product by product. I think that, again, it's people's willingness to use these things that gives you, like, a general vote on, like, do people think it's a good thing or not. So, like, all the tools that people are using to generate text, collateral, images, right? It's being used for a lot of creative work.

I think that, like, that's a lot of confidence, right? I think in terms of trust and the general, like, overall attitude towards these things... I mean, people are taking very different approaches, right? You have, like, the EU, which is, as always, rushing ahead to regulate just so they can say they regulated first.

I think that's, like, one way of trying to set up standards. You know, I think, like, at the end of the day, as much as we all moaned about GDPR, and, you know, that we had to click accept cookies on every website, which definitely, like, made the user experience worse, I think very few would say that it hasn't probably made the consumer safer.

I don't think we have these standards for AI yet. I'm not sure that the things that those guys are proposing are the right ones, but, you know, I think that, like... it led at least to other jurisdictions stating their approach, right? With America and the UK being a lot less heavy-handed. So we'll see how the whole thing develops, right? I don't think we've seen catastrophic mishaps from, like, AI being used in malicious ways just yet. So, you know, fingers crossed there isn't anything that really profoundly changes our way of life just yet.

Tim Butara: Well, I think that we're definitely in that period where all of this is still being fleshed out and developed. So I'm guessing that these are the years where we'll see a lot more wins with regards to these two subjects.

Well, Nikola, thank you so much for joining us today, for this awesome discussion. Just before we jump off the call, if people listening right now would like to connect with you or learn more about you, where would you send them to?

Nikola Mrkšić: Our website, or [email protected], Nikola with a K. So yeah, looking forward to hearing from everyone.

Tim Butara: Okay. Awesome. Thanks again. This has been great.

Nikola Mrkšić: Thank you for having me.

Tim Butara: And well, to our listeners, that's all for this episode. Have a great day, everyone, and stay safe.

Outro: Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes. And don't forget to share the podcast with your friends and colleagues.