Jan Pilhar ADT podcast cover
Episode: 112

Jan Pilhar - Unlocking the power of AI for enterprise-wide transformation

Posted on: 09 Nov 2023

Jan Pilhar is the Executive Director at IBM iX DACH and Co-Leader of their Digital Advisory Practice DACH.

In this episode, we talk about unlocking the power of AI for business-wide transformation, focusing more on the possibilities & challenges for larger enterprises versus small/medium businesses.

One of the key points in the discussion is also the question of "buy vs. build" when implementing AI models and what the best practices for enterprises will be in that regard.  We close with tips on how to drive a more holistic rather than siloed AI transformation, as well as a few more general tips for streamlining your AI strategy.


Links & mentions:

- Jasper
- BloombergGPT
- Microsoft Copilot
- Jan Pilhar's LinkedIn
"And that rush, the speed is good, the energy is good, but if it's really driven by fear, I think it's misguided. And sort of balancing that, doing the right thing, doing it quickly, but not just doing anything just for the sake of it, I think that's a big challenge for many organizations right now."

Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, Content and Community Manager at Agiledrop.

Tim Butara: Hello, everyone. Thank you for tuning in. I'm joined today by Jan Pilhar, Executive Director at IBM iX DACH and Co-Leader of the Digital Advisory Practice DACH. In today's episode, we'll talk about unlocking the power of artificial intelligence in order to drive transformation for your whole enterprise.

And welcome to the show, Jan. Thank you for being our guest today. Anything to add here before we begin with our discussion?

Jan Pilhar: Tim, thank you for having me. No, thank you very much for the introduction. This is a topic that we're discussing with clients all the time right now. I think we're all aware that we're sort of at peak hype when it comes to AI, especially generative AI. So, really happy that we're having this discussion today. 

Tim Butara: Okay, so right off the bat, you say that we're kind of at peak hype, maybe, and that this is something that you're talking to clients about a lot. So from your experience, from your conversations, from your work with clients, what have you seen to be the main organizational challenges when it comes to adopting and properly implementing these new AI technologies?

Jan Pilhar: A very good question. I think it comes down to the things that come up whenever you introduce any new technology. Do you have a sound vision and strategy? Then the technological questions: should we buy or build? How do we orchestrate AI? How does it integrate with cloud strategies and other closely related topics?

Do you have the right people? Do the people have the right skills? How do you manage that change? These are typical things, but they're probably not unique to AI. I think what's very unique to AI is really how can we navigate also the regulatory landscape? How can we find the right technology?

And very specifically, how can we derive value without falling victim to the very palpable fear of missing out that's in the market? We really see there is, and actually our research shows, that a lot of executives express: I feel under pressure to show that I have some generative AI projects running. My investors are asking for it. My board is asking for it. I've got to do something. So let's, let's pilot something. 

And that rush, the speed is good. The energy is good, but if it's really driven by fear, I think it's misguided. And sort of balancing that, doing the right thing, doing it quickly, but not just doing anything just for the sake of it, I think that's a big challenge for many organizations right now.

Tim Butara: Yeah, I think this ties back a lot to what you said initially about peak hype and this peak hype kind of goes hand in hand with the pressure and the fear of missing out that you also pointed out. And it's just this feeling of like, if I don't do it, then I'll get outsmarted by my competitors even faster.

And then you add the regulatory or legal considerations on top of that. And you just see that there's probably a lot of caution and consideration that a company or a business needs to take if they want to do this properly.

Jan Pilhar: Absolutely. And this is exactly, I think, the balancing act. We absolutely believe companies should do it. They should leverage and harness that technological shift in AI, but do it wisely and not fall into the many traps along the way.

Tim Butara: And have you seen any notable differences in this context? So when it comes to AI adoption and innovation between smaller companies on the one hand versus larger enterprises on the other?

Jan Pilhar: Yes, obviously in terms of complexity. It's very different if, let's say, you have a team of 10, 20 people, you're a startup, you're nimble, you want to use the technology. And I think a lot of the stuff we see on social media is geared towards smaller companies. Like, just use these 10 tools. Look, there's this new stuff coming out, just... 

And that can be very valid; if you have a small team, you just basically buy or rent some tools. They will probably already make you quicker and will make some of your tasks easier and more efficient. That's great. It's a totally different ball game if you're a large company with, let's say, a hundred thousand, 200,000 employees. Very different setup. You have a very different IT landscape you need to work with. You have a very complex organizational setup. So obviously that's very different. And sometimes it's all mixed up and treated as the same, but we think there's a big difference. 

Tim Butara: And one thing that you also mentioned is the phrase buy versus build, right? I think this is also really important in the context of our conversation today. So how should companies proceed with deploying these AI models? How should they balance building their own AI solutions versus using something that already exists, that somebody else has already developed and innovated on?

Jan Pilhar: That's an excellent question. I think one of the fundamental decisions companies have to take is really build or buy, or use or buy, or whatever you want to call it. And obviously with use or buy, you just take something off the shelf. You might boost it a little bit, enhance it a little bit, but basically you're buying a tool that somebody else built for you. 

And what we're seeing now in the market is a very rapidly evolving landscape around readily built tools. I mean, we see this predominantly in, I think the fields of marketing, sales, service. That's also what studies show. This is where the most immediate value is for most companies, where you can really make stuff so much more efficient that there is also a huge economic benefit in it. 

And then we see new tools coming out all the time, and it's probably very prudent to look at these tools and really ask yourself: should I build something if I can readily buy it, with somebody else working on the product, improving it all the time, and managing everything for you? 

A great example in the market, I think one of the fastest-growing products right now, is Jasper, a tool for writing text, basically aimed at your typical content marketer. It's just a fantastic tool. It's built on top of GPT, and for a small subscription fee, your people can just use it. 

On the other end of the spectrum, we have build, which means you build your own models with your own data and train them yourself. And that is something that has been going on, obviously, for the last 15, 20 years in most larger companies. They already use AI, often not generative AI but machine learning algorithms and such, and often also have very mature processes around this. And this is where we think there is really a competitive edge, because make no mistake, obviously everybody is going to be using something like Jasper or the now upcoming Microsoft Copilot in Word and Excel.

And so, I mean, everybody's going to have that. So everybody basically gets the same advantage that the technology brings. If you start building, you can really build proprietary stuff around your own data, data nobody else has, and really create a competitive edge. 

And I think a great example of a very early champion of that route was Bloomberg. Bloomberg built BloombergGPT. They basically took all their financial data, all the stuff they have as one of the largest financial information providers, and now sell access to their model to their clients. They have something nobody else has, and it's basically a new business model for them.

So whether you should buy or build really depends on what you want to do, where you want to go, what your strategy is. Both are viable. And we think most companies will probably run both approaches at the same time: for some use cases you buy, and for some you build, and you need to orchestrate all of that. 

Tim Butara: That makes a lot of sense. Yeah. One thing that also came to mind is, you know, AI is basically majorly dependent on good, accurate data, and good amounts of it, not just sparse data. 

If you want to get max value out of it, then you would have to give it all the data possible. But for an open tool like ChatGPT, giving it proprietary data can backfire, both in terms of business and in terms of legal and regulatory issues.

So I'm also guessing that, you know, for these cases involving more proprietary data, companies will opt for their own solutions, maybe based on an existing one. And for something like, you know, creating content and stuff like that, they'll resort to something like Jasper. 

And I'm also guessing that as regulations develop, they will also start dictating best practices around this build versus buy question, because we'll see what gets set, I guess.

Jan Pilhar: Absolutely right. And I think the buy versus build decision, and especially if you go for build, which makes it obviously more complex. I mean, you have to think about your model. You have to think where you host your model. You have to think about how you govern and control your model and ensure that it basically stays stable over time. There's no model drift and things. How do you manage that? It's obviously more complex than if you just buy and use, and somebody else manages all that for you. 

And I think the fear of, okay, my proprietary data is now going into GPT when I use it; I mean, it's now possible to close that off and still use one of the large proprietary commercial language models while keeping your own data private. 

I mean, that was obviously necessary to make it possible for enterprise clients to actually consider using GPT. Even as a private user of ChatGPT, you can say, basically, don't share my data. But on an enterprise level, you can absolutely shield and protect your own data.

What we're seeing, and I think that's also a question, is that these large language models are huge, and because they are so big, they also use a lot of energy, and energy produces cost. So a question is also really how you create a feasible and economically viable setup.

And what we're seeing is that for a lot of enterprise tasks, you don't need the huge model. There's an analogy I think is very fitting: if you use, let's say, GPT, you're using a model that has a PhD in physics, in marketing, in finance, in psychology, basically everything. And you're paying for that, because you're paying for a model that is trained on all these domains.

But maybe you just need a finance guy or a marketing guy for your use case. So why pay for all the other PhDs your model has? Just take one that only has a PhD in that very specific field. You get the same results if you do it right, but you have a much smaller model, and that directly correlates with a smaller footprint and smaller costs. It can be a much smarter choice. 

And we think we will see, especially in the enterprise context, a lot of different models of different sizes running for very specific tasks. Some will be generative, some will be more traditional AI, but it doesn't always have to be those huge foundational or large language models, which are just very energy-consuming.
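Jan's "PhD" analogy has a direct cost dimension that can be sketched in a few lines of Python. All prices and workload figures below are illustrative assumptions chosen for the comparison, not real vendor rates:

```python
# Back-of-the-envelope comparison of a large general-purpose model vs. a
# smaller domain-specific one for the same workload. Prices and token
# counts are hypothetical.

def monthly_inference_cost(requests_per_day: int,
                           tokens_per_request: int,
                           price_per_1k_tokens: float,
                           days: int = 30) -> float:
    """Estimated monthly spend for a fixed inference workload."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1000 * price_per_1k_tokens

# Hypothetical workload: 50,000 requests/day at ~1,500 tokens each.
large_model = monthly_inference_cost(50_000, 1_500, price_per_1k_tokens=0.03)
small_model = monthly_inference_cost(50_000, 1_500, price_per_1k_tokens=0.002)

print(f"Large model: ${large_model:,.0f}/month")
print(f"Small model: ${small_model:,.0f}/month")
print(f"Savings:     {1 - small_model / large_model:.0%}")
```

The exact numbers will vary by vendor and workload, but the shape of the calculation is the point: inference cost scales linearly with token volume, so the per-token price of the model you pick dominates the bill.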

Tim Butara: That was a great analogy. Yeah. So rather than looking at single use cases and trying to tie them all together piece by piece, which can probably get more complicated the larger the organization or enterprise gets, how can businesses and business leaders, or anybody listening right now, most effectively zoom out and uncover how they can tap into this power of AI more holistically to make a more fundamental change? We said in the intro that we'd be talking about enterprise-level transformation. 

Jan Pilhar: The first level, obviously, is strategic vision. Where do you want to go with this? Do I just want to infuse it into specific work processes or for specific employees, or am I aiming to become an AI-driven company and really make this the backbone of how I create value in the future? 

And we see different routes. I think a lot of companies are aiming for, we want to be AI driven at some point, but then you also got to consider what that entails. A big question here is, do you have your company data readily available for these kinds of use cases? Is it spread across different systems? 

Basically, if you don't do your homework in terms of data prep and data consolidation, you're probably going to have a lot of difficulty introducing AI at a broader scale. And it might be very prudent to first start with some quick wins where you can learn immediately, and then work on that bigger piece: consolidating your data and your IT in a way that is conducive to AI. 

What we're seeing, and that's actually the big play we as a company are making on AI, is that what you will need in the future is a platform. And a platform doesn't necessarily mean a technological platform; it means an approach, a layer in your company on which you can build, run, and govern AI models and make them available to all the teams in the company. 

Because what you don't want is what happened with prior technologies: each department, each division, maybe even each market buying their own solution, setting it up, running it. And then you have a very fragmented set of islands, silos, nothing works with anything, and you're basically creating a new nightmare. 

We've seen this in a lot of other domains, for example with marketing data and customer data platforms, and a lot of IT teams have only just begun to clean up that kind of fragmentation. And I think the worst thing would be if, through this new wave of AI, you did it again, fragmenting everything again. And then it takes you five years to clean it up, and instead of building from the ground up, you're just clearing up the mess that all these departments produced. 

So what you want, ideally, in an ideal world, and we know it's never ideal, is a platform, a layer on which you can really run these models: commercial models like GPT or Anthropic's Claude, and open source models at the same time (often you want open source models that come with a commercial license), models of different sizes, and really build all your use cases on it, but in a way that each team in your company can use it. 

You have governance across it that makes it possible to track what people are actually doing with it. We also see this as crucial with a lot of the legislation that is coming, with questions like: can you explain what you're doing? Can you show us that there is no bias in there? Let's say you have credit score decisions, or certain prices produced based on the data you're working with. 

You might have to show that this is fair use, and you might have to be able to explain which model and which data led to the conclusion or the business decision you took. To enable that, and not end up saying, I'm building all these different black boxes and I have no idea what's going on, you need to think very hard about governance, and that basically entails having this kind of layer or platform in your company.

Tim Butara: That makes sense. Yeah. Because silos or islands are just inherently antithetical to effective transformation, right? They're basically stoppers to proper transformation. If we're not all going in the same direction, then transformation means that one department or one island might go here while another goes there. And the more you transform, the more pronounced these discrepancies become if there's no proper governance, as you mentioned. 

Jan Pilhar: Exactly. And we've seen this with technological wave after technological wave: every time a new technology and new tools come in, everybody rushes to get them into their processes, and then IT needs to clean up again. And we're having a lot of discussions now about not making the same mistake again. A lot of digital transformation in recent years has basically been about cleaning up. 

And now that you've basically cleaned up, let's not use this new technological wave to create new chaos that you then have to clean up again. And that's a big issue because, I mean, we mentioned the fear, the pressure, and also the excitement in the market. And that obviously leads to: well, we just built this POC with our agency, and it's great, look at it. Yeah, but it's isolated. It stands there with no connection to the rest of your company, to the rest of your data sources and everything, and therefore you'll probably throw it away in three months, because it was nice, but it's not where you want to go. 

Tim Butara: I think this is definitely an invaluable conversation for anybody listening right now who's going through this, or who maybe got caught in the pitfall of doing cleanup after cleanup. Hopefully this conversation can shed some light or show the most effective path forward. 

Jan Pilhar: I know exactly what you mean. Absolutely. It's really about getting to where you can leverage it at enterprise scale versus having, yeah, the silos. And to the previous point, the cost question is also really coming to the forefront for a lot of people now. Because it seemed, I mean, everybody tried it out with ChatGPT, and it seemed free, right? You basically used the free version, and it did amazing stuff. And then somehow people, and we really see this with enterprise clients too, said: yeah, let's just build this thing. Let's use GPT. 

And now we have clients who say: we built something, and it's becoming so expensive for us. Basically, the invoices we get every month for, let's say, the knowledge management tool we started and opened up to all employees are so big that we really have to question: is the enterprise value in terms of knowledge transfer and quicker processes really worth the money we're paying? We never really thought about this. We just tried it out so quickly. 

And we're seeing sort of a new, almost prudence, as people realize now that it's not free. And there are really big invoices coming in every month, especially with the big models; it can go up to the millions. If, for example, you build, let's say, your own company-name-GPT and just roll it out to 300,000 employees and they're using it like crazy, that is a huge cost. 

You really have to ask: okay, with everybody playing around with it in your company, is that the value you're after? Is it really making you so much quicker? Or is it just people, I don't know, asking random questions? Is the value really there? 

We think there can be a lot of value, and you should absolutely do it. But you should also think about the cost and really make sure you have a business case that balances cost and benefit, rather than starting something and only later figuring out: this is expensive because we bought all those PhDs, and maybe we should have just bought this one for this team and that one for that team.
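The business case Jan describes comes down to simple arithmetic: compare the per-seat bill against the time the tool has to save to pay for itself. The headcount matches Jan's 300,000-employee example; the per-seat price and loaded hourly rate below are hypothetical:

```python
# Rough business-case check for an enterprise-wide AI tool rollout.
# All figures are illustrative assumptions.

def breakeven_hours_saved(cost_per_seat_month: float,
                          loaded_hourly_rate: float) -> float:
    """Hours each employee must save per month for the tool to pay for itself."""
    return cost_per_seat_month / loaded_hourly_rate

employees = 300_000
cost_per_seat = 30.0              # assumed per-seat monthly price
monthly_bill = employees * cost_per_seat

print(f"Monthly bill: ${monthly_bill:,.0f}")
# At an assumed $60/hour loaded labor cost, each employee must save
# half an hour per month just to break even on the subscription.
print(f"Break-even: {breakeven_hours_saved(cost_per_seat, 60.0):.2f} h/employee/month")
```

Half an hour a month sounds trivially easy to save, which is exactly why the harder question Jan raises is whether people are producing measurable value with the tool or just playing with it.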

Tim Butara: Yeah, that was a very good point and a great addition to what I said about this conversation being super helpful to people listening; that piece of insight is one crucial bit of info that's very relevant to them. So Jan, with all of this in mind, especially everything we covered in the last few points, what's the key or most important strategic priority for successful AI deployment and, eventually, I guess, transformation? 

Jan Pilhar: Starting with a clear understanding of where you want to go, not in a week or a month, but, let's say, in the next 18 to 24 months. I think in that timeframe, a lot of the stuff you might be thinking of building now might be available off the shelf, and you could just buy or use it. 

I mean, when we look at the roadmaps of the software vendors we're working with, everybody has this on their roadmap. And often it's just: hey, before you buy something else, go somewhere else, or start to build, let's see what will soon be possible within your existing licenses, or on top of them. Just think about Microsoft. They've already announced that there will be Copilots based on GPT across basically all of Office 365.

I think they've even mentioned the price now. It's like 30 bucks a month per employee, and then you already have a basic Copilot in all the Office tools, probably making people much more productive. Think of Excel: how, again, did I do that pivot table, and what was the formula, the Excel macro? You just prompt it and it does it for you. I mean, this could be a game changer. So that will be there out of the box, and a lot more will come. 

So I think this use versus build decision is also a moving target. What might seem a good idea to build now might be a buy use case tomorrow. So still have a vision, evaluate what's coming in the pipelines of the companies you work with, and then really be smart about how you leverage this at enterprise scale.

Don't think just POCs, MVP, scattered stuff around the company, really think, this is not going away. You want to leverage that as a competitive edge for your company. And then you need this kind of platform around building it, governing it, running it, managing the data. That's what you want to be thinking about if you are a larger company. 

If you're a team of 10 people, I would just say: buy some cool tools and leverage them. Once you get big, you can still reevaluate, and who knows where AI will be by the time you actually need such a platform. But a really large corporation needs to think about this holistically. And I think this is where the conversation is now moving: from the excitement of what GPT can do to, okay, how do we actually bring it into the enterprise in a safe, secure, reliable way? 

And you touched upon this. I mean, there's also regulation, there are copyright and infringement issues. You want to navigate this; you don't want to end up in a legal gray zone, not feeling confident about what you have, or even being on the hook for violations and the fees you'd have to pay, or worse. 

Tim Butara: Jan, this has been such an awesome conversation. I really enjoyed it. A lot of great and super valuable insights. You know, we have a lot of discussions about AI, but I think that this is one of those that's really focused on this business case for AI. And it's really aimed at, you know, business leaders. So as I mentioned a few times, I think that everybody listening right now will get tons of value out of this conversation. But if any of these listeners would like to connect with you, would like to reach out or learn more about you, where would you point them to?

Jan Pilhar: Anybody's invited, just reach out. If you have any questions, or you want to follow up on any of the points we mentioned, just reach out via LinkedIn. That's probably the easiest way. Otherwise, you can always check out our company website; I think I'm listed there as well, but LinkedIn is the way to go. And don't be afraid to send me a DM. Always happy to chat about anything related to AI, experience, or any of the other topics. 

Tim Butara: Great. We'll make sure to also add everything to the show notes just in case. And Jan, thank you again. This has been great. It was really great to have you here and thank you for joining us.

Jan Pilhar: Tim, it was a pleasure. Thanks for having me on. And yeah, can't wait to discuss more. 

Tim Butara: Yeah, definitely. I think that we will have to have another discussion sometime soon because things are moving so fast that, you know, it might be time to just kind of rediscuss everything or there might be new angles coming up that we just must follow. So we'll definitely need to be in touch. 

Jan Pilhar: Absolutely. Looking forward to it. Take care. 

Tim Butara: You too, Jan. And to our listeners, that's all for this episode. Have a great day, everyone. And stay safe.

Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes. And don't forget to share the podcast with your friends and colleagues.