Episode 158

Radhika Dutt - Responsible product development

Posted on: 07 Nov 2024

About

Radhika Dutt is an accomplished entrepreneur and product executive, as well as the author of the book Radical Product Thinking.

In this second conversation with Radhika, we revisit some of the points & issues raised in the original episode by talking about responsible development of both physical and digital products. We discuss the opioid epidemic in the U.S., expand upon enshittification and talk about the importance of & approaches to radically rethinking OKRs.

 

Links & mentions:

Transcript

"We often think about building ethical products as, you know, ourselves as heroes in waiting, that one day you're going to be asked to cross a line in the sand and you're going to very bravely say, Nope, I'm not going to do it. But what happens is there's never a blatant point when you're asked to cross this line in the sand. It's all really quite fuzzy. It's all quite gray."

Intro: Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, Content and Community Manager at Agiledrop.

Tim Butara: Hello everyone. Thank you for tuning in. I'm joined today once again by accomplished entrepreneur and product executive Radhika Dutt.

She's the author of Radical Product Thinking. And in our first conversation, we dove deeper into this concept of radical product thinking. And today we return for a second conversation in which we will expand upon this original discussion by talking about responsible product development. Radhika, welcome back to the show.

I'm really, really excited about diving deeper into everything that we discussed last time. And as we said, we'll dive even deeper into more today.

Radhika Dutt: Thanks for having me back, Tim. I'm excited to be here again.

Tim Butara: Awesome. As I said, I'm also excited to have you back and yeah, we finished our last conversation by talking about this responsibility and how we build products.

Right. We talked about enshittification, and we talked about the Hippocratic oath of product. We tied that to the Hippocratic oath in general, and how one example of, you know, how this has not been kept up is the opioid epidemic in the US. And I want to maybe examine how responsibility in product development ties into this.

Radhika Dutt: Yeah, let's just do a quick recap of what that responsibility really means for us in product development. I think one of the key realizations that I've had as a product person over the years is if you're building a product, you are creating change in the world through your product. You know, we very often just think about our little bubble that we're in.

Our everyday work, and we forget that we're actually having a massive impact on people, on society. We're affecting their way of life, right? Uh, and it may be in small ways, but you do affect people's lives. And when you affect people's lives, then you have to start to think of yourself as a doctor, because you're creating change in their lives.

You can't then say, Oh, well, you know, whether it's good or bad or how you use it, it's, it's on you. And so this is where we have to be responsible and think about, are we making responsible choices in how we are building the product?

Tim Butara: So how does this apply to something like the opioid epidemic? I guess it obviously wasn't caused with the intention of causing it, right? It was, you know, a medicine, medicines, that were intended for a purpose. But how does responsibility in product development relate to how the whole thing evolved into an epidemic?

Radhika Dutt: Yeah, the main thing that I'll say is, it's not just any one decision that was made.

And this is what you discover always about products that end up being unethical. There was never one single point where you say, Nope, I'm not going to do this. It's a series of points, right? It's lots of decisions that are made. And at each step, if you just took a step back and said, hang on, how would this affect people?

You know, if you could think differently at that point. It's the cumulative effect of all of these decisions that causes the bigger problem. So let's look at that opioid example, right? The way that went down was, there was the company behind it, Purdue Pharma, and the Sacklers, the family behind this company.

So what happened was they were focusing a lot on numbers and driving sales, and then driving all these incentives for their sales reps, high pressure that, you know, you have to get doctors to buy more, prescribe more. And when doctors pushed back a little saying, wait, but isn't it causing addiction, or how does that work?

Uh, they gave a lot of false information, saying, Oh, it's not that every patient gets addicted to opioids, it's only the addictive type of people. Things that were completely unfounded in science, right? So there were all these different steps. There was, of course, the pressure from the Sackler family.

But then afterwards, it was, you know, the decisions made by the sales reps. It's the person writing up the marketing brochure. There are so many points where you can step back and say, hang on. Oh, and of course the doctors prescribing it, right, because they would get all these incentives for prescribing it.

So, to put the concept a little more broadly, one of the things that I've realized is, we often think about building ethical products as, you know, ourselves as heroes in waiting. That one day you're going to be asked to cross a line in the sand and you're going to very bravely say, nope, I'm not going to do it.

But what happens is there's never a blatant point when you're asked to cross this line in the sand. It's all really quite fuzzy. It's all quite gray. And it's only a matter of, you know, you're doing incrementally worse things ethically. It's like boiling the frog where you never quite realize how far you've fallen.

So, you know, in our last episode, I talked about vision versus survival, where you start to take on vision debt. Like, if you draw up an X and Y axis, where your Y axis is what is a good vision fit and your X axis is what is good for survival, vision debt is when you're doing things that may not be good for the vision, but hey, they help you survive.

They bring in money, for example, right? So as you start to take on vision debt, you start to lose track of the amount of vision debt, because each time you take on vision debt, it becomes normalized in your mind. Uh, and the same thing happens with ethics, right? And so you never realize how far you've fallen.

And the biggest example of this that I saw more recently in the news was the Russian state TV anchor who quit after the Ukraine invasion. And what she said was, you know, at each point, as she was delivering the news, she was often pressured by the Kremlin to spread some propaganda about Ukraine.

And, you know, she kept doing that because it felt harmless at the time, but it was all of that series of what seemed like inconsequential decisions that accumulated over time. And she didn't realize how far she had fallen until that point when Russia invaded Ukraine, and that was when she decided to quit the national news.

Tim Butara: Yeah, and boiling the frog is a perfect example of this, right? We've all heard the metaphor, and you've just highlighted and explained it really, really perfectly. And I'm guessing, in the context of number crunching and incentives for doctors for prescribing these medications in the concrete example of the opioid epidemic, this is the right time to discuss the concept that you introduced to me before we started, before I hit record, which is the radical rethinking of OKRs, right?

Radhika Dutt: Exactly. So, one of the things that I've been thinking about a lot is, you know, what drives this sort of short-term behavior, and why do we end up taking on so much vision debt? How does that work? I realized that it's our approach to this number crunching, and especially how we've formalized it now with the popularity of OKRs, Objectives and Key Results. It was popularized by Google, but of course had been used before. The point about OKRs is that they often sound very good.

It sounds like, you know, we're doing the right things. We're focused on the long term. We're going to align our teams. And so let's talk about why we even write OKRs. One of the main reasons we create OKRs is because we want to align teams on the impact that we want to have. And, you know, OKRs sound good.

And I'm going to give you a concrete example, an OKR that my husband's company actually used. The objective was improving the quality of our code. Now, that sounds long term, right? Fantastic. What could go wrong with that? And so the key result, the way they were going to measure this, was getting 90 percent code test coverage.

And, you know, hey, that sounds good too. You're going to start doing more testing, yay, right? Who could argue with that? But now let's look at how this behavior actually manifests to be able to achieve that goal. The problem is that you're setting a goal which you're going to measure in the short term.

So the way this affects people is that they want to show you that, hey, I'm meeting this goal that you set for me. It's like the end-of-year final exam. I don't want to fail that exam. Of course I want to show you that I'm going to pass. So what do they do in reality? Well, it was supposed to be investing in the vision, where you're writing good test cases to be able to get to 90 percent coverage.

What they did instead was write a bunch of bogus tests just to get the numbers up to 90. So in reality, what they were actually doing was just adding more vision debt to their code.
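The bogus-test pattern Radhika describes is easy to picture in code. Here is a minimal, hypothetical Python sketch (the function and test names are my illustration, not from the episode): both tests execute every line of the function, so both push line coverage to 100 percent, but only the second can actually fail when the code is wrong.

```python
def apply_discount(price: float, rate: float) -> float:
    """Return price reduced by a fractional discount rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_apply_discount_bogus():
    """Executes every line (100% coverage) but asserts nothing,
    so it passes no matter what the function returns."""
    apply_discount(100, 0.25)
    try:
        apply_discount(100, 2)
    except ValueError:
        pass

def test_apply_discount_real():
    """Same coverage, but it actually checks the behavior."""
    assert apply_discount(100, 0.25) == 75.0
    try:
        apply_discount(100, 2)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for rate > 1")
```

A coverage tool cannot distinguish the two: the metric hits its target either way, which is exactly how a coverage goal gets gamed.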

Tim Butara: Wow.

Radhika Dutt: So this is just one example, right? But in my experience over my career, I keep seeing this: when we set goals and targets for people, the behavior it incentivizes is that it gets people to just game the system and show you short-term results, because it feels like an end-of-year exam. I've seen other examples where the objective was increasing customer delight. Sounds long term, sounds great. The way that was actually implemented, in terms of the key results being measured, was getting, you know, a 4.6 out of 5 rating.

Well, how did they get this rating? Very often they would tell customers, listen, you're going to get a call to evaluate us; anything less than a five is a fail for us. And, you know, of course, when I'm told that, I'm a good person, I want to make you happy, I'll give you a five out of five. But does that tell you anything about whether your product is actually doing well? It doesn't, right?

Tim Butara: Yeah, that's a bit shady. And also, in both examples that you gave, the goals that were set were very vague, right? It was better code coverage, and even when they got specific, it was just numbers. So how can we rethink these OKRs? How can we actually make sure that they align with what the vision wants, what we want, and what the users want too?

Radhika Dutt: One thing that I've realized is exactly the words you used just now: you said, well, maybe we can be more specific, etc. You know, this is what every person I've heard says when they find that there is a problem with OKRs. They say, maybe I'm just not using these OKRs right. Maybe if I just found the right way of using these OKRs, everything is going to be better, right? And in reality, it turns out it's not about how you're using OKRs. So let's talk about how you can radically rethink this. I've realized that, you know, back in 1976, someone came up with a description of why OKRs don't work.

And the quote goes as follows. It's Campbell's Law: the more a given metric is used to evaluate performance, the more likely it is to be gamed, and the less reliable it becomes as a measure of success. Isn't that fascinating? Someone came up with this in 1976, and it's exactly what we see.

That the more a metric is used to evaluate performance, the more likely it is to be gamed. There's another one, Goodhart's Law, from 1975, a year before that: when a measure becomes a target, it ceases to be a good measure. It absolutely blows my mind that we've known this for so long, and yet this is what we do.

So, okay, now let's talk about how you radically rethink OKRs. What we need to do is not use metrics for evaluating performance. So this is where I'm going to come up with Dutt's Law. You know, if two men can name laws after themselves, I'm going to do it too. So here's Dutt's Law: metrics are only effective if they're used towards collaborative learning, right?

And I'll say that one more time: metrics are only effective if we're learning from them. And so the point is not to set a target or a goal for what you're supposed to hit. The point is to measure metrics to be able to learn what is working and, equally, what is not working. So I'll give you an example of how you use this, right?

What we want to create is not just an objective of improve code quality, and then set a target that we want to get to 90 percent test coverage. What we want to do instead is write a hypothesis. And the way we frame a hypothesis is we say: if we do this experiment, then this is the outcome we expect, because this is the connection.

And so what we then say is: if we write better test cases and improve test coverage in our code, then we're going to see less fragility in our code, because our tests will make sure that things are going to work before we merge them into the main branch, right? And so now we can start to actually measure this outcome based on, is it improving fragility?

Like, how long is it taking us to do these merges, or measuring our fragility in other ways, right? And so you might have leading indicators and lagging indicators. And what you're not going to do is set a target, right? Because that's not necessarily what is useful. What you want to do instead is, as a team, measure different things that you agree on.

You will measure: okay, how's our code test coverage looking? How many of these pull requests or merge requests are we actually approving or not approving? What are we seeing? That's the qualitative discussion that we want to have. And then we'll say, are things improving? That collaborative learning is the most valuable part.

And that's what I mean by collaborative learning: both creating hypotheses and asking very good questions, and then measuring things to answer those questions. And that will actually improve our test coverage and get us better code quality over time.
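As a sketch of what this hypothesis framing might look like written down, here is a small, hypothetical Python structure (the field names and example indicators are my illustration, not something Radhika prescribes): instead of a target, the team records the experiment, the expected outcome, the reasoning connecting them, and the leading and lagging indicators it will review together.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    experiment: str                  # "If we do this..."
    expected_outcome: str            # "...then we expect this outcome..."
    rationale: str                   # "...because of this connection."
    leading_indicators: list[str] = field(default_factory=list)
    lagging_indicators: list[str] = field(default_factory=list)

# The code-quality example from the conversation, restated as a hypothesis.
code_quality = Hypothesis(
    experiment="Write meaningful test cases and improve test coverage",
    expected_outcome="Less fragility in the code",
    rationale="Tests catch regressions before changes are merged into main",
    leading_indicators=[
        "test coverage trend",
        "merge requests blocked by failing tests",
    ],
    lagging_indicators=[
        "production incidents per release",
        "time spent on hotfixes",
    ],
)
```

Note that there is no target field anywhere: the team discusses the indicators together and asks "are things improving?" rather than "did we hit 90 percent?".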

Tim Butara: And are there any important differences, distinctions between digital and physical products when we're talking about this?

Radhika Dutt: I think there is an important distinction because I think in digital products it's easier to measure things in an agile way because you have leading indicators that you can measure right away.

With digital products, you put them out there in the market, and as soon as something is out, you're able to get a leading indicator of whether this experiment is working or not. When you have a physical product, it often takes a really long time for that product to get out into the market, and so more often than not, you'll have lagging indicators. But I think the important angle, when you have such a hypothesis, is to think through what the leading indicators are and what the lagging indicators are. And so the time frame for physical and digital products might be different, but by thinking about leading and lagging indicators, you can start to think about, okay, in what way can I start to measure things early?

You know, if I look at a physical product that is built, like if you have manufacturing, et cetera, you might actually have a leading indicator, which is: how often are we running into issues in the process of manufacturing? You could do physical prototypes. And you want to get qualitative results in terms of, is this working or not?

You might not have quantitative results, for example, but that could be your leading indicator when you try out prototypes for your physical product. Is this working? When you observe people, are they actually using it well, et cetera, right? So think through leading and lagging indicators, regardless of what kind of a product it is.

Tim Butara: One other thing that we discussed last time, which was really interesting and which we didn't really dive that deep into, and we're already touching on it: basically, you can't blame the users for using the product the way that the people who built the product intended it to be used, right? And we mentioned this in the context of social media and how it has, you know, evolved, or rotted, from something that was connecting people to something that's now sometimes very isolating and divisive.

How has everything that we've discussed today caused or contributed to this devolution or devaluation of social media? And how do you think things will progress? Is there a potential solution or a positive outcome out of this?

Radhika Dutt: I think this is the kind of stuff that often keeps me up at night.

I have been thinking a lot about it. When it comes to social media specifically, I think there was, until now, a true lack of a vision. Like, yes, the vision was connecting people, but if you really look at that, what does connecting people mean? What does a connected world look like? You know, with social media and its founders, like Mark Zuckerberg with Meta, it's not like he ever defined an exact end state that he envisioned for the world.

What does a connected and open world look like, right? And in the absence of this clarity, what we have optimized for is metrics and numbers. We've optimized for user engagement. Again, this is going back to the OKR-type model: let's increase user engagement and hit this target. So, you know, everything that was built, the like button, for example. Originally, Mark Zuckerberg was actually against the like button, because he thought it was going to make interactions more shallow and therefore reduce engagement.

But when they tested it and found that it increases engagement, they decided to implement it. It didn't matter at that point anymore whether the connections were going to be shallow or not. And so this is the enshittification effect that I mentioned: when we focus on the short-term approach of thinking, okay, is this increasing user engagement and therefore my revenues?

You know, then we think short term. And so what happens is the product continues to get worse and worse for the users, because we continue to add vision debt. Again, vision debt is what is worse for the vision, for the end state, and usually that's wellbeing for users, but it's great for survival, right?

And so as you keep adding vision debt over time, first of all, there's obsessive sales disorder. That's the product disease that comes up, the short-term version of it. And in the long term, when you have so much vision debt that you're long past obsessive sales disorder, that's really where you see enshittification.

Where the product gets so bad that people start to quit it, right? And we're starting to see that with Facebook: there have been more and more people swearing off social media, saying, oh, this is just too much, it's not good for me. And then, you know, as more people quit it, advertisers start to flee it too.

And I think Meta is at the start of that, whereas with X, you can really see it happen: so many people, including myself. I have not been on X at all since it changed names. And even Twitter I had a hard time with, with the 140 characters, but with X, that's it, I was done with it, right? And that's an example of enshittification.

I do think that enshittification over time means the network effects we saw when it was growing, where more people on it made it grow faster because there was more reason for everyone else to join, are going to work in the opposite direction for social media. And this is what Cory Doctorow actually talks about.

You know, I want to expand Cory Doctorow's idea to other things that I've seen. Let's look at airlines and travel as an example. If you think about air travel, has that experience improved since it was invented, right? Like, if you think about the 1970s, air travel was an experience. People enjoyed traveling.

I remember flying when I was a kid, and it was so exciting, a fantastic experience. And when I think about it now, you know, it isn't the same, right? We all dread traveling, we dread going through security checks. But even the airline experience very specifically in itself, you know, the little leg room that we have, et cetera, that's enshittification over time, right?

And so this is what happens when we consistently keep measuring in the short term. And we do need to think about how to change that and be a little bit more long-term focused.

Tim Butara: Yeah, I think also, for me, one big aspect of enshittification that you haven't mentioned yet is that on the one hand, you have the worsening of the tools that you're using, but at the same time, there is a proliferation of options.

So in the case of the airline example, the whole experience is much worse than before, but you can get anywhere much cheaper than, you know, maybe before. If you wanted to go to a specific destination, there would only be one flight per day, but now you have several flights going, you can choose this airline or a different airline, but all of them provide an experience that's not on par with what the single "lacking", in quotation marks, provider used to provide back then.

Radhika Dutt: But, you know, actually, Tim, you bring up a really good example and a good point, in that what you described is true of Europe. I think that is the reality in Europe: you have many more choices, and it is easier to get from one place to another. If you look at the state of air travel in the U.S., it is actually much harder if you're traveling, you know, not just coast to coast or to the major hubs, et cetera, because there's been so much consolidation over time that there is now less competition. Flights are more expensive, and if you pick some obscure place in the U.S., it is much harder to get between places.

Mm-hmm. I think what you describe in Europe is partially the result of regulation, and this is where the role of government comes in, right? How do you prevent enshittification? I think some of this is really the balance that is needed between three pillars: one is the government as a regulator, two is the private sector, and three is communities.

You need this three-legged stool to be stable. If any one of these legs is not stable, you start to see the whole stool collapse, right? So let's look at what that means. The private sector is always going to think short term. What you need to counterbalance that is the government thinking about the long term, and forcing the private sector to think more long term, about what is good for communities.

And in terms of communities, this is where we do need to be engaged as communities, and have more of a feeling of community, and representation in government, et cetera, so that we have governments that really represent what communities need.

And that prevents the breakdown of communities, right? I think these are the three things that we're sort of seeing disintegrate in society. Our communities are becoming more fragmented as left and right wings split further apart, right? As there are all these fractures in our communities, we see that in government as well.

Um, and so this whole balance of long term versus short term, the yin and yang, starts to not look like such a clean balance anymore.

Tim Butara: Oh, okay, now we're opening up some heavy stuff again, right here at the end of the conversation. Maybe we'll have to do another one, but this might already be kind of out of reach of the topics that we cover here, so...

Radhika Dutt: I agree. Maybe we should bring it back to transformation: how do you apply some of these things that we talked about to transformation? Because transformation really is a long-term journey, right? Maybe I'll mention just a couple of things that I think are really important for transformation leaders.

The agile approach to transformation keeps you focused on just thinking about the short term. And if you're trying to use goals and metrics to align teams on what you want in transformation, it's a really dangerous path, because it might really burn people out. As you focus on short-term things, you get more gaming in how people are applying those ideas, and therefore transformation isn't working that well. It becomes a feedback loop and creates more resistance to transformation. So, rethinking this, what can you do differently? You do want to think more long term for transformation. So as you work on initiatives, think about the hypotheses for those initiatives, and you can write them in this format: if we do this initiative... Think about it like an experiment.

So basically: if we run this experiment, then this is the outcome we're expecting, because this is the connection between the experiment and the outcome. Like, why do we think this outcome is going to come true? And then you can think about what the leading and lagging indicators are to measure whether this outcome is materializing.

And as you define the leading and lagging indicators, have regular sessions with the people in the team that are working through this transformation, and talk about what you are learning. Engage in this collaborative learning together, and then decide: okay, we tried this experiment, what tweaks do we need to make?

Engage in this collaborative learning, and this is what will give you momentum and transformation.

Tim Butara: Some perfect practical tips for listeners right here at the end. Thank you so much, Radhika, for another great conversation, another round of great insights. We'll make sure to interlink both episodes for easy access. But if anyone listening wants to connect with you, learn more about you in any other way, or learn more about your book, where can they do all that?

Radhika Dutt: You can find the book anywhere books are sold. The title is Radical Product Thinking: The New Mindset for Innovating Smarter. You can also look at the Radical Product Thinking website for digital transformation courses, and for the training and workshops that I do with organizations, to help you transform, apply this product thinking mindset, bring everyone with you on the journey, and build better products.

And then lastly, you're very welcome to connect with me on LinkedIn. I always love to hear how people are using radical product thinking for transformation. So you can find me on LinkedIn, Radhika Dutt.

Tim Butara: Perfect. We'll, we'll include everything in the show notes. Radhika, thanks so much. It was great having you on again.

Radhika Dutt: Thank you again for all the insightful questions, Tim. This was so much fun.

Tim Butara: Well, thank you for all the insightful answers and I agree it was super fun. Thanks.

Radhika Dutt: Thank you.

Tim Butara: And to our listeners, that's all for this episode. Have a great day, everyone, and stay safe.

Outro: Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes. And don't forget to share the podcast with your friends and colleagues.