Episode 135

Eric Siegel & Greg Kihlstrom - Leveraging agile to unlock the full potential of AI

Posted on: 23 May 2024

About

Eric Siegel is a former Columbia professor, leading ML consultant & author of The AI Playbook, and Greg Kihlstrom is a CX and marketing technology transformation consultant, author & keynote speaker, as well as the host of The Agile Brand.

In this episode, Eric and Greg return to the podcast to discuss how tapping into agile principles would enable businesses and organizations to get more value out of AI and machine learning initiatives.


Links & mentions:

Transcript

"The powers that be haven't really fully wrapped their head around the fact that this isn't just just because you're using the best, coolest, most advanced, most potent technology doesn't mean you're actually establishing value. You may be creating value, but capturing it requires change to business operations. And that's what the project needs to revolve around."

Intro: Welcome to the Agile Digital Transformation podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, Content and Community Manager at Agiledrop.

Tim Butara: Hello everyone. Thanks for tuning in. I'm joined today by not one, but actually two returning guests that we had on the show just recently. Eric Siegel and Greg Kihlstrom, both are accomplished speakers and authors with Eric specializing more in machine learning and Greg in customer experience, marketing, and digital transformation more broadly.

And today we'll be talking about how to leverage agile principles to unlock and tap into the full potential that artificial intelligence brings to the table. Eric, Greg, welcome back to the show. It's a real pleasure having you both here with us today. 

Eric Siegel: Yeah. Thanks for having me, Tim. 

Tim Butara: As I said, thank you for joining me, thank you for joining us again. Both of the conversations that we had initially with you, Greg, and you, Eric, were really great. So I thought it would be really neat to have you both on to discuss a topic that connects both of your areas of expertise. So I want to start with this, and maybe Greg, you can lead here and Eric, you can add your thoughts, but I want Greg to start off. Why is being agile so important and so valuable in today's digital economy?

Greg Kihlstrom: Yeah, sure. So, you know, I think it's definitely a cliché that the pace of change continues to increase, but that doesn't make it untrue just because it's a cliché, right? So first, when we talk about agile, there are a lot of ways of looking at this.

I am not dogmatic about it, you know, "you've got to use Scrum and do this, this, and this on these certain days." There are places that do that, and there are people that believe strongly in it. It can become a religion, in my opinion, if you adhere to it too strongly.

But I think what anyone and everyone should do is adapt and adopt the principles of being more agile. So I'm going to speak mainly to that, and certainly, Eric, I'm curious about your thoughts as well. But the principles of agility are things that can be applied to any company, and they help us listen to our customers.

They help us work together more collaboratively, and, you know, there are 12 official principles. But really, when I think about it, it comes down to: how do we do those things, and how do we continuously improve and adapt over time? And if you happen to implement according to a very specific framework or methodology, however you apply it, that's great.

But being agile is important because things are not going to stop changing, and organizations don't stop collecting data. They don't stop making new decisions, and they need a feedback loop to be able to make better and better decisions and learn from the past.

Tim Butara: Eric, any thoughts here?

Eric Siegel: Well, I think agility is absolutely key for any machine learning project, and for AI in general, depending on how you define that, but certainly with generative AI. In my new book, The AI Playbook, I present a practice called bizML. It's a six-step iterative practice that requires a lot of backtracking and iteration. It's experimental; when you're doing data analysis of any kind, it's an experimental venture, and you need to try certain things out and then go back to the business people and say, well, we can't quite pursue this predictive modeling, we don't have quite enough of the right data, we're not getting the results that we need.

So that iteration, as much as it needs to be led by the business objective and the deployment goal, also needs to incorporate that kind of agile response as you iterate, as you try things out. But there's a slippery slope here too, because most enterprise machine learning projects fail to reach deployment.

The main syndrome, the problem, the error that's repeated, the pitfall everyone falls into, is to jump right into the number crunching very rapidly. Let's just make a model and then we're going to deploy it to help retain customers or mitigate fraud or whatever it is, instead of having a proper business practice.

So on the one hand you want things to move quickly, but you also need to make sure it's happening within an established business structure, a paradigm, a playbook, where you're ensuring that you're pursuing business value by paving a well-planned path towards deployment from the get-go, even as you also allow for that agile iteration along the way.

Tim Butara: So basically if you want Agile principles to help you alleviate the biggest downsides and pitfalls of AI technologies, you need to implement Agile in the right way, I guess. 

Eric Siegel: Yeah, exactly. Be agile in the right way. 

Tim Butara: Yes. So do you have any thoughts on what that might look like? Maybe both of you, Greg and Eric.

Greg Kihlstrom: Yeah, I mean, just to mirror what Eric was saying: the concept of business value. I think one misconception about agile approaches is that they're reactive rather than methodical and, you know, scientific, if you will, in how they're approached.

And so, yeah, Eric, to what you were saying: absolutely, it's got to start with, what does the business need? What does value mean to the business? And then optimize according to that iteratively. Not just, hey, we're going to build the plane as we fly it, or whatever cliché gets said in those failed projects. Eric, you probably know more about that than me on the ML side, but I do think it comes back to this: there's got to be a foundation for it, or else how do you even know what success looks like?

Eric Siegel: Mmhmm. Well, I'm more of a machine learning person, I've been in the field more than 30 years, and you two are certainly more agile than I am. And I don't mean that necessarily on the football field. I just mean, yeah.

Well, let me just real briefly outline this six-step practice the way I define it, because I would love for the agile community to look at it and say, okay, the agile principles apply within that framework. But from the machine learning perspective, this is where we see the failure, and it's more than just "you need the structure to plan the business value." It's more specific than that: it's the value proposition, which in the case of deploying a machine learning model is a pair of things, what's predicted and what's done about it. Which customer is going to cancel, in order to target retention; which transaction is most likely to be fraudulent, in order to decide which transactions to block or audit.

So you're driving millions of decisions, and that's at the heart of all these large-scale operations: with probabilities, with predictions, with the scores that are output by predictive models that have been generated over data. So it's a very particular endeavor that really requires a specialized business practice, one that plans from the get-go for that pair: what's predicted and what's done about it. That's the deployment goal.

And that's literally only the first step. The second step is to define more specifically what you're predicting, because it's got to be defined in detail. Not just who's going to cancel, which customer's going to quit, but, you know, which customers who have been around at least two months, and who have been spending at least at this level, will decrease their spend by 80% within the specific time frame of three months, and not increase their spend elsewhere, in a different channel, because that doesn't really count as losing the customer, et cetera.

So with all those caveats and qualifiers, it becomes a semi-technical yes/no prediction question in many cases. And it's got to be informed by business-side considerations.
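As a rough illustration of how a prediction goal like this gets pinned down, here is a minimal Python sketch of a churn label along the lines Eric describes. The table, column names, and thresholds are invented for the example, and the "spending at least at this level" floor is omitted for brevity; nothing here comes from the episode beyond the caveats he lists.

```python
import pandas as pd

def label_churn(df: pd.DataFrame) -> pd.Series:
    """Yes/no churn label following the kinds of caveats Eric describes.

    Expects a hypothetical per-customer table with columns:
      tenure_months       - months since first purchase
      baseline_monthly    - average monthly spend over the lookback window
      spend_next_3mo      - spend in this channel over the following 3 months
      other_channel_delta - change in spend in other channels over that window
    """
    eligible = df["tenure_months"] >= 2                                   # around at least two months
    dropped = df["spend_next_3mo"] <= 0.2 * (3 * df["baseline_monthly"])  # spend fell by 80% or more
    not_shifted = df["other_channel_delta"] <= 0                          # didn't just move channels
    return (eligible & dropped & not_shifted).astype(int)

# Tiny made-up example:
customers = pd.DataFrame({
    "tenure_months": [1, 12, 24],
    "baseline_monthly": [40.0, 100.0, 80.0],
    "spend_next_3mo": [10.0, 30.0, 250.0],
    "other_channel_delta": [0.0, -5.0, 20.0],
})
print(label_churn(customers))  # -> 0, 1, 0
```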

Step three is defining exactly the metrics that are going to apply: how well it predicts and how much value it will generate. And then the last three steps are the technical steps that have defined any machine learning initiative since the '60s, for credit scoring and targeted marketing: prep the data; apply the machine learning, that's the rocket science part, learn from the data; and then deploy that model, actually integrate the predictions into operations so that they change, so that they potentially, hopefully, improve.

And that's the culminating step. That's the deployment. That's what you've been planning from step one, from the get-go. So that framework, I call it bizML, those six steps. It's very specialized for what it means to integrate predictions into operations and plan for that, and the only way you get predictions is from machine learning, and the machine learning depends on data.
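To make the "what's predicted and what's done about it" pair concrete, here is a minimal sketch of that final deployment step: score customers with a model and route the riskiest into a retention campaign. The data, the features, and the 10% cutoff are all invented for illustration; in the practice Eric describes, the real cutoff would fall out of the step-three metrics and value calculation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data (steps 4-5: prep the data, learn from it).
rng = np.random.default_rng(0)
history = pd.DataFrame({
    "tenure_months": rng.integers(2, 48, 1000),
    "support_tickets": rng.poisson(1.5, 1000),
    "monthly_spend": rng.gamma(2.0, 50.0, 1000),
})
history["churned"] = (rng.random(1000) < 0.2).astype(int)  # placeholder label for the sketch

features = ["tenure_months", "support_tickets", "monthly_spend"]
model = LogisticRegression(max_iter=1000).fit(history[features], history["churned"])

# Step 6, deployment: score current customers and act on the scores.
current = history.sample(200, random_state=1).drop(columns="churned")
current["churn_risk"] = model.predict_proba(current[features])[:, 1]

# "What's done about it": the riskiest 10% go into the retention campaign queue.
retention_queue = current.nlargest(len(current) // 10, "churn_risk")
print(retention_queue.head())
```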

So it's all the particulars around that use of the technology from a business standpoint, not just the rocket science but the actual launch of the rocket. So within that framework, tell me how agile best practices fit. What does it mean to you within that framework? Because there is a lot of backtracking across those six steps. Oh, this doesn't quite work, we've got to revisit. And the whole thing has got to be very deeply collaborative between the tech and the business, between the data scientists and the stakeholders.

Greg Kihlstrom: Yeah, I mean, I can take a stab. Again, we already talked about business value driving it, so I won't repeat that. I look at it in terms of sprints: we're going to accomplish one set of work in a certain period of time, and that could be brand new work, or it could be revisiting something that needs to be improved and iterated upon.

But at the beginning of that, I look at it as: what's the hypothesis that we're proving or disproving? That doesn't sound dissimilar to what you're talking about, which is, again, how do we know whether we're achieving it or not? And then if we do achieve it, sure, move to the next step.

If we're not, we redo, we revisit, we retest and all that. That's a gross simplification of what you just laid out, of course, but if you think about it in small enough increments, that's essentially what it is. It's just a series of: okay, we're going to do one thing or multiple things, test, see if it works based on our hypothesis, and then either move forward or move laterally.

Eric Siegel: And that framework around agile, where you establish the hypothesis, applies to any kind of project, even outside AI, is that right?

Greg Kihlstrom: I mean, yeah, I've done some very simplistic ones, you know, like, are more people going to open emails, in a marketing context or whatever, but there's still a hypothesis. I think you're talking about much more complex hypotheses in a lot of these cases, but it still stands.

Eric Siegel: Well, the hypothesis, I guess, if you want to put it in those terms, is: do we have enough with the data we already have? Because normally you're operating on found data, the data that was already naturally accrued in the course of conducting transactions or business as usual. No experimental design, no collection of data for this project. That's typical, though not always the case.

So, with the data we have now, can we generate a model that predicts well enough, significantly better than guessing, because we don't have a magic crystal ball, but better than guessing is usually significant, to improve this operation? Such as how auditors spend their time investigating transactions for fraud, or which satellite we should investigate as potentially running out of battery, or which train might fail. There are a million different operations. The point is to make a dent, a meaningful dent, in terms of KPIs, right?
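As a small sketch of that "better than guessing" check, continuing the hypothetical churn example: compare how concentrated the actual churners are in the model's top-scored decile versus the overall base rate. The data and numbers here are made up purely for illustration.

```python
import numpy as np

def lift_at_top_decile(y_true: np.ndarray, scores: np.ndarray) -> float:
    """Lift of the top-scored 10% versus the overall base rate.

    A lift of 1.0 means the model is no better than guessing;
    3.0 means the top decile holds three times the base rate of churners.
    """
    n_top = max(1, len(scores) // 10)
    top_idx = np.argsort(scores)[::-1][:n_top]      # indices of the highest-scored 10%
    top_rate = y_true[top_idx].mean()
    base_rate = y_true.mean()
    return float(top_rate / base_rate) if base_rate > 0 else float("nan")

# Hypothetical holdout set: 10% churn base rate, toy scores loosely correlated with truth.
rng = np.random.default_rng(42)
y_true = (rng.random(5000) < 0.1).astype(int)
scores = np.clip(0.1 + 0.5 * y_true + rng.normal(0, 0.2, 5000), 0, 1)
print(f"Lift at top decile: {lift_at_top_decile(y_true, scores):.2f}")
```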

So that's sort of the hypothesis. It turns out that the vast majority of the time, the answer is yes, because in any large-scale operation you're already collecting the data that you need, precisely because it's a large-scale operation. So we have plenty of examples, both positive and negative, from which to learn. The failure comes from a lack of business planning, because there's that lack of the business practice that I frame as bizML, and a lack of stakeholder understanding of the semi-technical machinations, so that they can collaborate deeply across all six steps.

So, ironically, if the hypothesis is, is this business ready to implement a predictive model to deploy it, then unfortunately the answer is often no. And you find out repeatedly the hard way. 

Greg Kihlstrom: Yeah. And I would just say the trick is to make the question small enough that we get to the real, fundamental reason why it's not.

Because, yeah, to your point, that's where the null hypothesis comes in. If we have enough data but we can't find any statistically significant results in that data, then we can have all the data in the world, but who cares? So, yeah, I think it's breaking it into small enough chunks to start, so that we start poking the holes in the wall before we build the rest of the wall, I guess.

Tim Butara: That makes sense. Yeah. And maybe, Eric, you're the right person to lead on this next question. What would you say are the main pitfalls that leaders should be mindful of and keep top of mind when they're trying to adopt some of these principles to really tap into AI and ML?

Eric Siegel: Well, I think the main pitfall, as I mentioned, is jumping right into the number crunching, the modeling, the application of machine learning on your existing data, before addressing the pre-production business steps with a holistic business practice, a playbook, a paradigm, one that brings you into deep collaboration with business stakeholders, such as the line-of-business manager in charge of running the operations meant to be improved with predictions, and that plans from the get-go for that deployment.

Not just the idea of, hey, we want to retain customers, but much more precisely: we're going to launch new targeted marketing meant to retain customers who are at high risk of defection, and everything involved with that needs to be part of the goal.

It's not just the rocket science, it's how that rocket's going to get launched. It's not just the number crunching, it's how you're going to actually use predictions to change or implement a business operation.

So, you know, it's a consulting gig, not a technology deployment or install, right? You might have some new database system that doesn't require many business changes; people can change the technology under the covers, and all of a sudden the database operates two or ten times as quickly. Great.

But that's not what this is. These enterprise machine learning projects are meant to improve your existing large scale business operations. That's change. It requires change management and planning for that change from the get go.

So everyone wants to be fast, but I guess there's a difference between fast and agile, right? I mean, you can, sure, spend a few days pulling together preliminary data and try out modeling, but know the whole time that it doesn't count, because nobody's going to deploy it. This is what happens over and over again.

The data scientists get the green light, but the powers that be haven't really fully wrapped their heads around the fact that just because you're using the best, coolest, most advanced, most potent technology doesn't mean you're actually establishing value. You may be creating value, but capturing it requires change to business operations, and that's what the project needs to revolve around.

Tim Butara: Greg, any thoughts here, maybe? 

Greg Kihlstrom: Yeah, no, I like that you made the distinction between fast and agile, because I agree. Agile is often faster to some kind of result, but it's not always faster to the end result that we wanted, the one we thought of way back when we first started.

Sometimes it is, and in many cases it can be, but it's not a guarantee. When done well, there's a higher likelihood that you'll get to the end result, versus planning a 10-year transformation project that goes nowhere or never launches. But again, none of it's a foregone conclusion.

And I do think there's that misconception that agile is simply a reactive process, versus essentially using the scientific method on a recurring basis to get to the right answers. I think those are some of the misconceptions. People rush to things, they think they're being agile, but really they're, I don't know, throwing caution to the wind and just doing stuff.

Tim Butara: Yeah. Just doing stuff. They can say that they're agile and be calm about that.

Eric Siegel: Yeah. I mean, that's the life of a researcher, right? When I was in graduate school, and this is decades ago, that was the thing: I would get a great idea and then I could spend the weekend just implementing it all by myself. No bottlenecks. Right. See how well it works.

Greg Kihlstrom: Right. 

Eric Siegel: But there's a big difference between sort of showing that number crunching can potentially create a model, and an enterprise actually changing operations by using the model.

So I'd actually like to turn a question to you two. I've kind of turned the conversation more towards predictive AI, predictive analytics, those types of predictive enterprise use cases of machine learning.

And that's the focus of my book and a lot of my work, and of most of the machine learning community up until a couple of years ago. Now, obviously, a lot of the public attention has turned to generative AI, which is really apples and oranges. I mean, with generative AI, you're using it to create a new content item, such as a written draft, a draft piece of code, a draft image, which generally needs a human in the loop.

I think that we're still waiting for the killer app. Some people would say, you know, coding is a killer app, but I mean killer app in terms of the expectations that have been set publicly, which are much broader than that, and I think they may be hard to meet. But there's certainly value in first drafts, a huge amount of value in terms of efficiencies.

My question to you, and you can answer it as well as I can, because it's the wild, wild west now and there's nothing established there in terms of enterprise value. There's not that much to know about generative AI, because it doesn't require that kind of understanding of what it means to act on probabilities.

It's a very different use of machine learning under the covers. And in the end, it's super accessible: you use natural language, you as a human create prompts and see what it gives you. So how do you apply agile concepts if you're an enterprise? This is what all enterprises are asking: how do we jump on the bandwagon? How do we make value out of this? What's the proper agile approach?

Greg Kihlstrom: Yeah. I mean, I can start, at least, by agreeing with you there: AI is a very broad umbrella of things, and these days when someone says AI, if they don't know better, they're probably just referring to ChatGPT, not even something as broad as generative AI. But to actually talk about AI, or even just generative AI: I work with plenty of companies and enterprises that are looking into this stuff, exactly as you're saying.

And most of them, with a wink and a nod, are jumping on the bandwagon. They want to be as strategic as they can, but they know they're jumping on the bandwagon. It's like anything: test and learn, right? So it's, okay, first, why are we testing and learning? What's the business objective we're trying to achieve? And then, what are the tools available? What's it going to gain us?

You know, a lot of it is efficiency gains, and even a first draft, to your point, is an efficiency gain. Most of it's not ready for prime time yet, so almost nothing is going to reach a customer directly at this point unless there are some serious guardrails in place. But I would also say, to me, the most fun stuff that I'm working on right now is combinations: what would we do if we combined predictive with generative?

To me, that's some of the fun stuff. Again, a lot of this is in the exploratory and R&D phase, but that's where I think it gets exciting, because then we get to play with it. One of the biggest problems that I see is that feedback loop: we collect data and have dashboards, mountains of dashboards, but then we never do anything with them.

What if our predictive models fed us first drafts of things? Then we'd get closer to actually completing the feedback loop.

Eric Siegel: Can you be a little more specific? Like what would be an example of combining them? 

Greg Kihlstrom: Yeah. So, take predicting churn, customer churn, let's say. We have all the data, we understand it, we have a target for customer lifetime value, and we see somebody taking the actions that lead to churn. And we want to prevent that. So, okay, let's generate an email to them: hey, Greg, here's a custom offer, content, whatever. You can do that at a broad level and just say anyone likely to churn gets this one email, but we've all received those and probably closed them.

So, to be able to do stuff like that, where it's hyper-personalized based on predictive intelligence. You might think a product is a good fit for one particular customer, but they're taking actions that actually say, well, you know what, they need this whole other product that you sell, so move them into an entirely different buyer's journey, and you know everything about them from their previous experience, so let's hyper-personalize that as well.
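As a rough sketch of the combination Greg describes, assuming a hypothetical churn model and a placeholder LLM call (no specific tool is named in the episode): the predictive score decides who gets contacted, the generative step drafts what to say, and a human reviews the draft before anything goes out.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Customer:
    name: str
    churn_risk: float          # from the predictive model
    best_fit_product: str      # e.g. from a propensity or recommendation model
    recent_activity: str       # short summary fed into the prompt

def draft_retention_email(c: Customer, generate: Callable[[str], str]) -> Optional[str]:
    """Use a predictive score to decide WHO to contact, then generative AI to draft
    WHAT to say. `generate` is a placeholder for whatever LLM call a team might use;
    it is not a real API named in the episode, and the 0.7 threshold is invented.
    """
    if c.churn_risk < 0.7:
        return None  # low risk: no intervention
    prompt = (
        f"Draft a short, friendly retention email to {c.name}. "
        f"They seem likely to leave; their recent activity suggests {c.recent_activity}. "
        f"Suggest {c.best_fit_product} with a tailored offer. Keep it under 120 words."
    )
    # Human in the loop: the draft goes to a marketer for review, not straight out.
    return generate(prompt)

# Example with a stub "LLM" so the sketch runs without any external service:
stub_llm = lambda prompt: f"[DRAFT pending human review]\n{prompt}"
customer = Customer("Greg", churn_risk=0.82, best_fit_product="the annual plan",
                    recent_activity="fewer logins and a downgrade inquiry")
print(draft_retention_email(customer, stub_llm))
```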

So, to me, that's some exciting stuff. Again, what the generative side produces right now may take a little human oversight for the time being, but I think within a fairly short period of time it's probably going to be ready for prime time.

Eric Siegel: I don't know. I think it might be several years. So let's scope the definition of what you're referring to. As you said, reaching customers directly with this customized message. If you want to scale, right, over hundreds of thousands of customers who are at risk of defection, you can't have a human in the loop, if you're really trying to use the technology in a cost-effective way.

And if you think the effectiveness of the retention message would be increased by having it totally personalized, automatically, with GenAI, the question is: when is that ready for prime time? Much the same as a specialized chatbot to rebook a flight.

That one actually might be hairier. Let's pick another example, because there's some hairy stuff, I think, in the air travel industry. But let's say, how do I fix my dishwasher. There's a bunch of specialized knowledge there, and the advantage is that the chatbot can handle however the customer wants to talk, in casual human language.

I think those two are kind of analogous, because they're both very constrained topic areas. So there is some feasibility that it will become ready, but I have a lot of skepticism, and I covered it on both of your podcasts when you each hosted me previously, including this podcast with Tim. I really got into that.

I just revisited that episode yesterday, and we had a lot of fun deconstructing the overhype and the problems with these things, including the idea of a general-purpose chatbot about any topic. But when you start to hone the topic down, then the question is, when is it ready for prime time? When you get it under control, it's trustworthy, in that it'll have the right answer as often as you would hope the best human expert would, and it won't come up with incorrect information any more often than you would expect from the best human expert, right?

So something like that, where it's parameterized, it's constrained, it's only a little bit of information, now we have the holy grail that generative AI has been promising, right? So you're saying it'll be pretty soon, and I'm like, man, I don't know. I think it'll be several years, and a lot of people are acting like it'll be several months, and that's a really big difference, right?

Greg Kihlstrom: Yeah, so, for what it's worth, soon to me is about a year or two. So, like, not...

Eric Siegel: No, I don't think your opinion is crazy at all. A lot of my friends are in that camp, and I'm not completely like, oh, it's all baloney, it'll never happen. You know, on a broad scale, the difference between one and six years isn't that great, obviously.

But it is a question, and no, we don't know. It's research. It's not just product development; it's an open research issue. So in the meantime, I don't know, what are the agile methods to help companies test out whether it's ready for prime time for their particular use case, in a way that's cost-effective?

Greg Kihlstrom: Yeah. So, I mean, some of this is just risk mitigation too. What are the factors that could go wrong? The enterprise platforms that I'm familiar with are trained only on the enterprise's data, so they're not going to go out and hallucinate. To your earlier point, the more we can limit the question that's being asked and the answer that's being given, the more it helps us.

So if I go and ask an airline's chatbot, hey, what medication should I take for my X, Y, Z, that's just a mismatch on both of those fronts. But if I ask a software company, hey, where do I find the menu item for this button in your software, that becomes easier and easier.

Sometimes it's to the point of not being terribly useful, but I think you get the point. So I think some of that is actually ready already: if you're only training on a very limited subset of information, and, again, you're limiting the questions that can be asked, I think that's already ready. But that's a pretty simple use case.

Eric Siegel: Well, and I would argue that for a use case that's that well scoped, there's a big question of whether it's worth generative AI at all, because you're already getting just as much value from search.

Greg Kihlstrom: Yeah, yeah. So then, to your question, it's like, well, how do we make an investment in something that's bigger than either search or just a simple if-then algorithm? Because you can do a chatbot like that too.

How do we do something a little bit more, but not so much that we're going to get either errors or some other kind of risk, you know, customer dissatisfaction or something like that? So, again, it's kind of back to: let's take a step forward, test, and just keep pushing. To your other point, though, that's costly.

And that may not be worth it until we have reasonable confidence that we can take a big enough step that it's actually going to save time and increase customer satisfaction, lifetime value, whatever our KPIs are. So in that way, that's another factor in whether it's ready for prime time. Because, again, can we do it is one question, but should we do it is a much bigger one, right?

So I'm not really answering your question, I guess, but maybe that's the spirit of the...

Eric Siegel: Well, again, as I said earlier, I kind of consider this the wild, wild west, right? Everyone's an expert; nobody's an expert.

Tim Butara: This has been such a fascinating and awesome conversation. I'm loving that I'm basically just letting you two talk and discuss, and, you know, I figured that the more I try to butt into your conversation, the fewer cool insights we'll get.

But just to start driving everything to a close: do you have any final tips for listeners who are maybe trying to implement agile to unlock the potential of AI at their companies, but are having a hard time doing that, or having a hard time seeing real value out of all this?

Maybe Eric, you can lead here. 

Eric Siegel: It's such a trite saying, but you've got to lead with the business objective, the business intention. And what I was saying about predictive, where you've got to start with a deployment goal, which is what's predicted and what's done about it, the same basic concept applies to any generative project.

It's not just "put some intelligence into your operations." It's got to be, okay, we're going to draft more customized retention messages, or we're going to give new customer service agents who operate on a chat interface draft paragraphs for them to consider altering and copy-pasting into their customer conversations.

You know, it's got to be a very specific deployment goal, a very specific operational way that the newly generated content is going to be used. If you're going to systematically get your coders to use draft code, well, it's one thing for an individual coder to experiment in an ad hoc way and see what works best for them.

And that may be a large part of it, but if you're trying to systematically get it adopted across your entire population of engineers, again, it's a very tricky thing to define exactly how you're going to instill and define best practices, and for what scope and type of coding it makes sense.

So these things need to be really well defined before you even give a trial deployment, a trial initiative, a chance.

Tim Butara: Yeah, that makes a lot of sense. Greg, any tips? 

Greg Kihlstrom: Yeah. I mean, you know, what he said, but also, to go back, I definitely agree with the business value part of it. I would just also recommend, when you're planning things, not to forget that this can be a scientific process of trial and hypothesis and all of that, and it really should be. Again, don't get lost in "we can be really quick and agile." To be agile, small-a agile I guess, or however you want to say it, is not necessarily to be quick, to mirror what we were saying earlier. It's to get iteratively to the right answer in the best way possible.

Tim Butara: That was an awesome way to put it. And I love how we kind of went full circle and circled back to some of the initial points that we made about fast versus agile. And I really loved getting both of your perspectives; I think they both really enriched the conversation.

So, yeah, just before we wrap things up and jump off the call, if listeners would like to learn more about your work or connect with the both of you, where can they do that? Greg?

Greg Kihlstrom: Yeah, sure. So I'm really active on LinkedIn. So, you know, look me up, Greg Kihlstrom on LinkedIn.

You can also go to my website at theagilebrand.com.

Tim Butara: Awesome. Eric. 

Eric Siegel: So my book, The AI Playbook, which espouses that bizML practice, just came out a couple of months ago; you can find it at bizml.com. And I've been running this conference series, Machine Learning Week, previously Predictive Analytics World, since 2009.

Our conference in the U.S. is June 4th through 7th in Phoenix, Arizona, and then in November in Munich. And we have a new sister conference, the Generative AI Application Summit. So those two conferences are co-located the first week of June in Phoenix.

Tim Butara: Awesome. I think this should go live just a few weeks before the first conference; you said it was at the beginning of June.

So this will be great timing, and we'll make sure to link everything in the show notes, together with both of your previous episodes on our show, so that listeners who haven't heard either one can revisit them and kind of get the whole perspective from both of you. So thanks again, Eric, Greg, it was really great having you both on. Thanks for taking the time.

Greg Kihlstrom: Yeah. Thanks Tim. 

Tim Butara: And to our listeners, that's all for this episode. Have a great day, everyone, and stay safe. 

Outro: Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes, and don't forget to share the podcast with your friends and colleagues.