
Episode 145

Chris Cooney - How data observability helps enable digital transformation

Posted on: 01 Aug 2024

About

Chris Cooney is the Head of Developer Advocacy for Coralogix, a SaaS observability platform that analyzes logs, metrics, and security data in real time.

In this episode, we talk about data observability and how it helps enable digital transformation. We discuss why it's important to prioritize observability from the start, how to optimize observability-related costs, the importance of responsible data use, and the impact of AI technologies.


Links & mentions:

  • coralogix.com
  • linkedin.com/chris-cooney

Transcript

"The two major challenges in the digital transformation, loosely, are the people challenges and the tech challenges. And observability has an extremely important role in both of those things. And it never comes up until the very end, because observability is seen as like a slightly more advanced capability."

Intro:
Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, Content and Community Manager at Agiledrop.

Tim Butara: Hello everyone, thank you for tuning in. I'm joined today by Chris Cooney, developer advocate for Coralogix, a SaaS observability platform that analyzes logs, metrics, and security data in real time. Our topic for today's episode is data observability and how it can help enable digital transformation initiatives.

Chris, welcome to the show. Thanks for joining us, it's great having you here with us today. Anything you'd like to add before we begin?

Chris Cooney: Thank you very much for having me. I'm very excited to be here. Yeah, just to add a little bit about Coralogix very briefly. Indeed, as you mentioned, we do full-stack observability, as they say, so the full gamut of logs, metrics, traces, and security data. And then we have a lot of integrations with open source tooling as well as cloud-native tooling. And we're very, very big on OpenTelemetry. We're core contributors to OpenTelemetry, meaning that Coralogix doesn't really buy into the vendor lock-in model of customer retention.

In fact, if we do onboarding for you, which is something we often do for free, we will install OpenTelemetry. So the idea, to summarize, is that we're trying to be the good guys of observability, and to be a bit more focused on creating a great product rather than relying on, shall we say, shady commercials to keep customers held in.

So that's a rough summary of where we're at in the market.

Tim Butara: So, just to make sure we're all on the same page here, can you tell us what exactly OpenTelemetry is?

Chris Cooney: Sure, so I'll explain the problem, and then I can talk about how OpenTelemetry comes in. For a long time, there was a selection of tools available that would allow users to collect that telemetry data. So for example, if their application is producing logs and they want to view those logs in a central place, there were a bunch of open source tools like Fluentd, Fluent Bit, Filebeat, Metricbeat, and they just go on and on. What happened was that many of these tools did different things well, but none of them was really a great standard to cluster around.

And so SaaS observability vendors started to make their own. It was understandable, because they would build something that worked for everything the customer needed. And then somewhat unintentionally, to be fair to them, what happened over the years was this thing called vendor lock-in, which is the idea that you're kind of stuck with a particular vendor because migrating away would be a very expensive engineering endeavor.

And so rather than people sticking around because they love the product, they're sticking around because they feel like they have to, and this was a big problem, and it still is. So that's where OpenTelemetry comes in. OpenTelemetry is a standard. It's a collection of protocols, OTLP and a bunch of others, and it's also a series of libraries and tools where you can collect logs, metrics, and traces using just OpenTelemetry, and then all of the major vendors integrate with OpenTelemetry.

So your application is instrumented using OpenTelemetry, which means that your application is using OpenTelemetry to collect all that telemetry data. And then OpenTelemetry acts as a piece of middleware that pushes it to your third-party vendor, or to your OpenSearch cluster, or whatever it is.

The power of this is that if at any point you want to change vendor, it's a few lines of config as opposed to a giant overhaul. So OpenTelemetry really aims to solve that problem of vendor lock-in. And frankly, different collectors have different features, so moving from one to the other was very painful because you had to kind of emulate the features of one in another.

It was very hard. So OpenTelemetry is currently our best shot at that. All the vendors integrate with it. Most of the major vendors contribute to it as well, like we do at Coralogix. And the goal really is to turn this into a simple, consistent standard that engineers can use. It breaks the vendor lock-in model and forces companies to compete on quality, features, and costs rather than the pain of migrating.

So that's the ambition of OpenTelemetry.
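To make the "few lines of config" point concrete, here is a minimal sketch of instrumenting a Python service with the OpenTelemetry SDK. The service name and collector endpoint are placeholder assumptions, not anything from the conversation; the takeaway is that switching vendors means pointing the OTLP exporter at a different endpoint, not re-instrumenting the application.

```python
# Minimal OpenTelemetry instrumentation sketch (names and endpoint are placeholders).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The application is instrumented once against the vendor-neutral OTLP protocol.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))

# Changing observability vendors is a config change: swap this endpoint.
exporter = OTLPSpanExporter(endpoint="collector.example.com:4317", insecure=True)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("place-order"):
    pass  # application work happens here; the span is exported via OTLP
```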

Tim Butara: I'm glad we started off with this, because we at Agiledrop are also big proponents of open source. We had an episode, I think a few months ago, where we talked about building successful businesses based on open source software.

And one of the main points of that discussion was the problems with vendor lock-in, and how opting for an open approach from the get-go helps you avoid a lot of that. So I'm really glad that we started off with this, and now we can proceed to the meat of our discussion: data observability and why it's so important for digital transformation.

And let's start with this. Why is observability so important for digital transformation?

Chris Cooney: I've actually faced this head-on several times. In a previous life, I was a principal engineer at a retailer here in the UK called Sainsbury's, the second largest retailer, with something like 12 billion pounds in revenue every year and around a billion in profit every year.

So it's an extraordinarily competitive market, and one in which, probably around 10 or 15 years ago roughly, they started to work out that digital capability is really, really important in this space. It used to be an industry all about local knowledge.

So it was like, oh, I know a farmer down the road and he can get us, I don't know, cabbages extremely cheaply. It used to be all this natural tribal knowledge, if you like. And then this digital knowledge came in, and companies like Amazon started moving into the retail space, they bought Whole Foods, and suddenly everyone went, oh God, we actually have to be able to compete with these guys, because these guys have an amazing digital capability. And so I watched a digital transformation happen in a very large company.

Observability didn't come into the conversation until way towards the end, which was, by the way, the point at which I became really interested in observability. This was the moment, because I saw we had all these problems constantly. I was a Java engineer before that, and we had all these problems of: how do we know if a change we just made was good or bad? And you know, what is good?

How do we know if we're actually moving the needle on the things we want to push on? What are our KPIs? Why do they matter? How do we collect data around our KPIs? How do we collaborate around them? And so on. And over the years, it became more and more apparent to me that this is really a data problem.

What's happening is we have lots of teams doing things, and none of them have anything central that they can sit around and go: this team did X, this team did Y, X didn't work, Y did work, we should emulate Y, or we should take lessons from that. And so what I learned over the years was: okay, so what does that mean in a technical context?

What does that actually mean? Well, we need a repository of information that we can access freely. We need to retain it for a very, very long time, because these lessons will span years. We need to be able to freely query it and search it. And we need to be able to visualize it in lots and lots of different ways.

So that was one component of it. And the second thing was: we're doing all this digital transformation. At the time it was lots of on-premise servers, and we were migrating them into the cloud using, oh God, vMotion, and for some of the older apps we had Solaris servers, and we were using middleware in the cloud and all sorts of stuff to migrate things.

And we had, I would say, a pretty good idea of whether it was going to work or not. But to say that we had observability would have been a stretch. So the two major challenges in a digital transformation, loosely, are the people challenges and the tech challenges. And observability has an extremely important role in both of those things.

But it never comes up until the very end, because observability is seen as a slightly more advanced capability. It's like: okay, we'll worry about that when we're moving forward and we've got some really great wins under our belt, then we'll think about observability. And it's like: no, actually, you need to drag it much, much earlier into the conversation, so that when you're moving forward, you'll be able to do it in a data-driven way, much more efficiently, much more effectively.

And it's, in my opinion, a higher collaboration model than the traditional way of: let's just hire 150 devs, get them all doing stuff, and see what happens, which is a scary proposition to say the least.

Tim Butara: What does it look like if you fail to do this? So if you only tack on observability at the end, after you've already invested probably a lot of time and a lot of resources into everything else?

Chris Cooney: It's kind of a fun one. I can't talk too much about the internals of Sainsbury's specifically, obviously, but I've done this at several companies, so I'll kind of extrapolate across all of them. The first thing is that the engineering cost is enormous at the end, because you're not implementing observability for the handful of teams that you had at the start.

If you're in a large company, like I've been several times, you're implementing observability for 30, 40, 50, 60, sometimes hundreds of teams. That is a different prospect, a different game entirely, because they all have weird and wonderful, different use cases. So much harder.

Whereas when you pick a platform to start with, whether it's SaaS, whether it's in-house, whatever you decide to do, you can grow with that. And if you're a larger client, you can influence that backlog. You can kind of change things as you move. And if you're very careful to avoid vendor lock-in, you can pivot as you go, as opposed to having to do it in a big bang two, three, four years down the line.

So that's one thing, the cost and the effort. But the pain of it is that, over the years, the teams have learned to operate in what I would describe as a non-data-driven way, because the data hasn't been as available as it should be. So all these weird reasons for doing things have had a chance to run rampant for years.

And as we know, the longer things go on, the more legitimacy they claim. So even though there's very little basis for doing a few of these things, they survive. And so not only are you going to bring in observability to benefit things, you're going to end up shining a spotlight on lots and lots of bad ideas, and lots of ideas that really should never have gotten off the ground.

And then you're into the politics. And so that often means that rolling out observability in a meaningful way, especially in the context of digital transformation, is very hard. So it's an uphill climb.

Tim Butara: Probably also very expensive, right? So what can firms and organizations do to optimize these observability-related costs?

Chris Cooney: Sure, there are lots of things you can do. I'll split it in half: I'll talk about SaaS and I'll talk about in-house. In-house first. The first thing that I see: logs are typically the source of most of the costs. Your log data takes up the most space, it takes the most effort to query, it takes the most indexes, that kind of thing.

The best way of optimizing your costs around your logs is to be what I call use-case driven with how you handle these logs. So a very typical pipeline is this: logs are ingested, and all logs go into, for example, an OpenSearch cluster. They stay there for two, maybe three weeks, and then they go to an archive of some kind or they're deleted.

Essentially, all the data is moving through. And the base assumption of this architecture is that the age of a log is inversely proportional to its value. That is, we're saying brand new logs go into really expensive storage, and old logs either get archived, compressed, and hidden away, or they're deleted; in other words, they're valueless.

This is the thing that we at Coralogix investigated very thoroughly, and we found that, no: some logs are only valuable for a certain period of time, that's true; some logs are never valuable; some logs are really valuable all the time; and some logs only have historical value, they're only valuable after a few months.

And some logs are only valuable for, you know, the few minutes where they trigger an alarm, and then they're never used again. So what we say is: actually be use-case driven. And what does that look like in technology terms? Instead of sending everything to your most expensive storage, some of your data can just go straight to your archive.

If some of your data has historical importance but you're not going to need it now, it goes straight to the archive. You save yourself a ton of money, a lot of processing, a lot of effort. You also make your high-performance storage a lot more efficient, because the queries run faster, and there's much less noise for the engineers to sift through.
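As a rough illustration of that use-case-driven routing, here is a hypothetical sketch in Python. The tier names, the rule table, and the classification logic are all invented for illustration, not Coralogix's pipeline; the point is only that the routing decision happens at ingest time instead of everything landing in hot storage.

```python
# Hypothetical sketch of use-case-driven log routing at ingest time.
# Tier names and the rule table are illustrative, not a real product API.

HOT, ARCHIVE, DROP = "hot-indexed", "archive", "drop"

# Map each use case to the cheapest storage that still serves it.
ROUTING = {
    "needs-fast-search": HOT,   # engineers query these interactively
    "alerting-only": ARCHIVE,   # evaluated at ingest, rarely read again
    "compliance": ARCHIVE,      # historical value only, query in place
    "noise": DROP,              # never valuable, don't pay to store it
}

def classify(log: dict) -> str:
    """Toy classification rule; real rules would match on app, severity, etc."""
    if log.get("severity") == "ERROR":
        return "needs-fast-search"
    if log.get("source") == "audit":
        return "compliance"
    if log.get("path", "").endswith((".png", ".css")):
        return "noise"
    return "alerting-only"

def route(log: dict) -> str:
    # The log shipper would send the record to the returned destination.
    return ROUTING[classify(log)]

print(route({"severity": "ERROR", "msg": "payment failed"}))   # hot-indexed
print(route({"source": "audit", "msg": "user role changed"}))  # archive
```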

So that's one way of immediately cutting costs. The other thing is your archive: the process of rehydrating and reindexing is very CPU- and memory-heavy, and in the SaaS space it's very expensive. So look for a vendor that allows you to query your archive directly, or build a solution that allows you to query your archive directly, without the need to rehydrate.

Because historical data usually requires big batch queries, and they're run every few months, sometimes even once a year. You don't need to rehydrate all that data into really expensive storage for that; just query it directly. So treating your archive like a data source and a working part of your data set, albeit a less frequently accessed one, is really powerful, and that's something that a team should do.
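Here is one way that "query the archive directly" idea might look, assuming the archive is gzipped JSON-lines files in S3. The bucket, prefix, and field names are placeholders, and this is a sketch rather than any vendor's implementation.

```python
# Sketch of querying archived logs in place, without rehydrating anything
# back into expensive high-performance storage. All names are placeholders.
import gzip
import json
import boto3

s3 = boto3.client("s3")

def scan_archive(bucket, prefix, predicate):
    """Stream archived log files and yield matching records."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            raw = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            for line in gzip.decompress(raw).splitlines():
                record = json.loads(line)
                if predicate(record):
                    yield record

# Example batch query, the kind that might run once a quarter:
payment_errors = scan_archive(
    "my-log-archive",  # placeholder bucket
    "logs/2023/",      # placeholder prefix
    lambda r: r.get("severity") == "ERROR" and r.get("app") == "payments",
)
```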

In terms of metrics, you'd be amazed how far you can get with S3 as backend storage. I'll talk a bit about how Coralogix solves this problem in a second, but you'd be amazed how far you can get with cheap storage, because metrics typically don't have a large data footprint, and they're very, very high performance. So I would think about cheaper storage for metrics; you can get away with it.

And then for traces, the most obvious thing to do is a thing called tail sampling or head sampling. OpenTelemetry has this concept of tail sampling. If your system is working as most systems do, about 1 percent of your traces will be errors and the other 99 percent will be fine.

Everything's good. And of that 99 percent, lots of them are just repeated, boring, dull traces of: someone requested an image, they got the image; someone requested an image, they got the image. You know, thousands and thousands of times. It's extremely expensive. So instead, tail sampling will remove those repeated traces.

It will keep the errors, but it will remove the traces that don't really add much to your insights. So that's a way of immediately cutting your costs. And by the way, even though that works for open source, it also works with vendors, because vendors typically charge you by number of traces or by gigabyte or something; send less data, spend less money.
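The production version of this lives in the OpenTelemetry Collector's tail-sampling processor, but the core idea is small enough to sketch. The thresholds and the span format below are invented for illustration, purely to show the decision being made after the trace is complete.

```python
import random

# Illustrative sketch of tail sampling: decide after a trace is complete,
# keeping all errors and only a small fraction of healthy, repetitive traces.
# This shows the idea, not the actual OTel Collector implementation.

KEEP_HEALTHY_FRACTION = 0.01  # keep ~1% of boring "everything's fine" traces

def keep_trace(spans: list) -> bool:
    # A trace is a list of completed spans; errors are always kept.
    if any(span.get("status") == "ERROR" for span in spans):
        return True
    # Slow traces are worth keeping too, even if they succeeded.
    if sum(span["duration_ms"] for span in spans) > 1000:
        return True
    # Everything else ("requested an image, got the image") is sampled down.
    return random.random() < KEEP_HEALTHY_FRACTION

trace = [{"status": "OK", "duration_ms": 12}, {"status": "OK", "duration_ms": 3}]
print(keep_trace(trace))  # almost always False: cheap to drop, nothing lost
```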

So that's the open source side. The SaaS side is a little different. Firstly, abstract the vendor away with OpenTelemetry. We discussed it already; just do that, because then you have a negotiating position. Secondly, it's really, really important that you look at the fine print of the pricing model. Two vendors, vendor A and vendor B, may have the same cost per unit, for example cost per gigabyte, cost per thousand metrics, whatever. But vendor B might also charge you per user, per host, per database table, per whatever. And this can result in a few things: double charging, where you end up paying several different ways for the same gigabyte, depending on the services that you use.

But it can also mean that you have unpredictable costs. So finding a pricing model that is predictable is essential. This is the most important thing; I actually think the predictability of the pricing model is more important than the unit cost. And then the one thing to be mindful of is overages.

When you get your initial sizing from any third-party SaaS vendor in the observability space, if they charge overages, they're incentivized to give you a lower base bill, because they will win the deal and then they'll just charge you overages later on when you inevitably go over, and they'll say: well, you used more than your quota. Whatever they give you, make sure you can do the maths yourself and validate it; just make sure that the sizing is accurate. The best green flag is when somebody is data-driven, so they'll do a proof of concept with you and say: hey, here's a rough volume, we're going to do a proof of concept with some of your data, that will give us an idea, and then we can extrapolate. That's a really good green flag that shows they're actually serious about measuring your data in some way and pricing you accurately.
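To make the "do the maths yourself" advice concrete, here is a toy comparison; every rate, quota, and volume below is invented. The point is only that multi-dimensional pricing plus overages can make the real bill diverge badly from the headline per-gigabyte rate.

```python
# Toy cost comparison; all numbers here are invented for illustration.
gb_ingested = 1200        # actual monthly log volume in GB
quota_gb = 1000           # the (optimistically low) quota you were sized at
hosts, users = 80, 25

# Vendor A: charges per GB and nothing else.
vendor_a = gb_ingested * 0.50

# Vendor B: lower headline per-GB rate, plus extra dimensions and overages.
base = min(gb_ingested, quota_gb) * 0.35
overage = max(gb_ingested - quota_gb, 0) * 1.05  # 3x the base rate over quota
vendor_b = base + overage + hosts * 15 + users * 10

print(f"Vendor A: ${vendor_a:,.2f}")  # Vendor A: $600.00, predictable
print(f"Vendor B: ${vendor_b:,.2f}")  # Vendor B: $2,010.00, the headline rate lied
```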

Coralogix, the way we solve this problem is really simple. We charge by gigabyte, and only by gigabyte, and nothing else. No per-user, no per-alarm, no per-dashboard, nothing like that. That makes us very predictable. And then we couple that with two key features in the cost optimization space. One is what we call TCO, total cost of ownership optimization, which allows users to select three different use cases for their logs and their traces.

They can say, for example: some of my logs are really, really important, I need them indexed in high-performance storage. For some of my logs, I want to generate alarms and metrics and visualize them in dashboards.

I still want to query them, but I don't need high-performance querying. The only difference between frequent search and monitoring, by the way, is that you don't index in high-performance storage for the medium level. And same thing, we have a compliance level as well.

Now, what does this mean for the consumer? With Coralogix, it means that you get a 60 or 70 percent discount on the medium level and a 90 percent discount on the compliance level, the lowest level, which goes straight to the archive. So it means they can literally say: how much is this gigabyte of data worth to me, and classify it accordingly.

Those two things together, the simple pricing model plus that feature, there's massive cost optimization potential there. Then you couple that with the fact that we support direct archive query, and because of our pricing model, we do not charge per query. So you send data to your archive, and you can query it as much and as often as you like; it won't cost you a penny.

This means that you can send way more to your archive. So these three features, these three facts about Coralogix, work really well in tandem to drive down observability costs. And even if you don't buy Coralogix, whatever, I don't work in sales, it's not my problem, but look for these kinds of solutions from your vendor in the SaaS space; that will give you a sense that they're serious about helping you optimize costs.

Tim Butara: Yeah, those are definitely some great solutions coupled with some great tips. And so I'm guessing that there's really a strong relationship between data observability and responsible data use, right?

Chris Cooney: So when you're rolling out an observability strategy, the most important thing is to realize that you're essentially shining a spotlight on the data that you've got.

What this means is that if you have any sensitive information in those weird data silos, suddenly that's accessible. And so as an organization, not only do you have to worry about what this means for you from a regulatory perspective; you know, GDPR fines can be up to 4 percent of revenue, and that's a scary prospect.

And you know, that could be a file sat on a server somewhere that nobody knows about, and it just suddenly costs you 4 percent of gross revenue. That's scary. But that's the regulatory perspective, the monetary side. You also have to worry about it from the ethical perspective: what's the cost? Are you, as an organization, handling that data in the most appropriate way?

How do you know? What observability does is it shines a spotlight on that and it lifts the question to the surface. Some people interpret this as causing problems, but actually all it's doing is making you aware of a problem. You had all of the liability and all the risk in the first place. You just now know about it.

And that's what observability is, essentially. It's knowledge of your data, and that knowledge, as all knowledge does, comes with some kind of burden. And that's the relationship, if you like: observability drives the responsibility, because you can no longer ignore it. It's like a virtuous circle though, because as you're more responsible with your data, you become more confident about where it is, what you can do with it, and what's available.

And suddenly you drive this whole data-driven culture within the guardrails of whatever regulatory system you have to abide by. So it is a virtuous circle; it's just a very painful one to start, as it were.

Tim Butara: That was a great answer and a great way of putting it. And that was exactly what I had in mind when I asked this question, right?

Observability is a key component and enabler, as you pointed out, of responsible data usage. But also, the more responsibly you use your data, the better observability you get and the more you get out of your data, which, I guess, makes it a bigger incentive to be even more responsible with your data.

So it just never ends, yeah.

Chris Cooney: That's the thing. With sensitive data, in my experience, 9 times out of 10 you're just afraid that you're going to do something "wrong", quote unquote, based on whatever regulatory framework you have to deal with.

And so, when you can observe your data in good detail, with good resolution, you can say confidently: no, this data source over here is fine. You can do whatever the hell you want with it: turn it into a spreadsheet, share it, do whatever you like. Just that mindset around a data source is amazing, because it's like the gloves are off.

No constraints, have fun with this data, do whatever you need to do. Otherwise you've got teams constantly in this horrible cycle of: I would like to access this data. Well, fill out this audit form to get access, or you're only licensed to do X, Y, and Z with this data. Well, I need to do something else.

And you can't, I'm sorry, because there may be PII in there, and maybe there's not. You know, understanding the home of your data is the beginning of really utilizing it, and of giving people the confidence to do that without the big fear that they're going to end up losing several hundred million dollars or whatever, depending on the size of the company.

Tim Butara: Yeah. There's also the problem with silos, which you mentioned earlier, right? Data silos are one of the biggest impediments to effective and responsible data usage.

Chris Cooney: Yeah, definitely. I'm personally fascinated by the relationship between the technological solutions that we have and the long-term behavioral impacts they drive within an organization.

So one of the things that I've observed, for example, is that an absence of data in a conversation tends to drive a kind of almost medieval selection process, where the loudest voice and the one who makes a sort of witty argument wins the day, because that's what we fall back on naturally.

The thing with data silos is the behavior that I believe they drive. We say it a lot, it's almost a cliche now, that data is a form of currency these days, information is a form of currency. And I'm thinking of large corporates at this point, but it's also kind of true for medium-sized companies as well.

If you have your own little area of a business with your own silo of information in there, that's like your little goldmine, and people have to come to you and ask for that information. It's like having your own little fiefdom: you kind of rule over this natural resource that you have. That drives a weird culture, of an organization behaving like a loose federation of lots of separate things.

That's what data silos drive, because the data, and access to the data, is a valuable thing. So this culture of federation within an organization is driven by segregated, siloed data. And that's why, when you're breaking down data silos, you often run into technical problems, because there are, like, a few hundred gigs over there, a terabyte over there, a few gigs over there, all volumes that are manageable. But when you multiply that by like 500 silos and you bring them together, you have a petabyte of data. And it's like, oh God, what do I do with all of this? And that becomes a different problem.

So there's that issue. And it'll always be painful getting all that different data to some central place. I always felt for the data teams when I worked at Sainsbury's, because they had a really hard job. There was data everywhere, all over the place. And while each individual team was very responsible, there was no central view.

And there were teams that were responsible for doing that, and it was hard, hard work for them. I always felt for them. The other side, though, is the people. You have to convince people that it's worth them making their data available via an API. You know, maybe it's just a giant file on a server, but it's got really valuable data about customer behavior or something in there.

What do you do then? Do you FTP the file over? What do you do with that? It becomes very challenging in that space. So that relationship between, if you like, data topology and the downstream behavior it causes is clear, and it's also fascinating. And that's what observability does, right? It kind of shines a light: you have all these data silos, here's the amount that's in each one, here's the kind of data that's in each one. What does that actually do for you? What does that actually mean behaviorally for your organization? Oh, it just so happens that this guy has a great deal of influence in the company because he has this two-terabyte stash of data.

It happens all the time. And like I say, when you treat data like a currency, observability is like a mine, right? You're breaking through, you're mining the currency that's already there. And when you think about it like that, it's a no-brainer. You're just doing the work to free up the value that's already there.

You already have the value; your company's generating this value all the time. You're just refining it and turning it into something that you can trade and use. And that's the trick of observability in this context.

Tim Butara: It should be collaboration, not competition.

Chris Cooney: Hopefully, yes.

I'm constantly baffled by organizations that try and drive internal competition. I get some of it, I get the idea of people competing for different roles and that kind of thing. But then you set two, three, four teams in competition with one another, and I've seen it a few times: oh yeah, competition is the best way to get a good solution.

And I'm like: is it? Is it? Like, are you sure? Maybe between organizations, yeah. Within an organization, you're going to pay for four teams to do the same job four times. The whole thing seems crazy to me. Why not just pick the best people, put them on the challenge, remove blockers, give them some deadlines and some goals, and see what happens?

Yeah, it's the ideological view that doesn't quite mix with reality, unfortunately, it seems.

Tim Butara: Yeah. And there's another thing that we really need to talk about. We kind of started on it from a different direction, but I think it's still very much related to responsible data use and just the general big picture of things.

And that's obviously the explosion in AI innovation that we've witnessed recently. So how is that impacting data observability?

Chris Cooney: Yeah. So I'm reasonably optimistic about the impact of AI, actually. Thus far, it's basically an extremely sophisticated search engine for observability data, where before you needed specialist knowledge of a certain syntax or a certain graphing library or whatever it is.

Now you just use plain text. You can do it in Coralogix: you can say, give me the top 10 IP addresses by count, or something, and it'll build a query for you and run it. Lots of other platforms do this as well. That's really cool. The problem I have, the worry I have, is with regards to pricing models.

And I'll explain. Some pricing models are all about the size of queries, the amount of data that's being scanned, if you like. If you're writing the query and you know what you're doing, you can optimize; in other words, you can control the amount of data that you scan. If I'm a database engineer, for example, I'm going to know the best way of scanning the least possible data to get the information that I want.

It's the same thing with someone who's an expert in certain query syntaxes. However, the reality is that when you're using AI, you're no longer writing the query; you're writing a plain-text prompt, and then some engine, the AI, is converting that into the query. And that is a problem for those kinds of pricing models where data scanning is part of the cost model.

For some companies that do it that way, that's going to be an issue, and we'll have to resolve that as we go. But the thing that I'm really positive about is that there's a lot of heavy lifting when it comes to adopting an observability vendor, whether it's in-house or not. Typically it's: oh, when we move over, we have to make all these new dashboards, new alarms.

We have to make new parsing rules, blah, blah. You know, at Coralogix we try and fill that gap with managed onboarding, and we help as much as we can. But imagine a world where you can just say: oh, make me these five dashboards that show me these things, and here's a log I need you to parse into JSON for me.

And I need a view that's going to tell me the P95 performance of all my traces in this subsystem. Just being able to instruct that, the onboarding is going to be so much faster, time to value is so much faster, and I would say the effective access to your data is much broader. It opens up what would normally be a realm of purely technical people to anybody.

Anyone in your organization that has a question about your data can go and ask that question, and with a bit of tuning and a bit of prompting, they'll get the answer. So, you know, it's that whole dream of democratizing data. I think that gen AI is the key to that, and I think it will do it. We just have to make sure that the economics work as well, and that teams aren't finding themselves scared of running certain AI queries because of the impact on their costs.

That's one of the things that comes to mind regarding the potential dangers. And then of course, there's the PII and PCI component to it. If you have mixed data and you're someone who knows the data, you can structure queries in such a way as to guarantee that you're not touching any of the sensitive data.

But if you haven't separated your data properly, which a lot of organizations haven't, and you're using gen AI, that guarantee goes away. You have a probability space at that point. And yeah, it's thorny ground there, if you like. So it's very exciting, but it's not going to be without its challenges, is my feeling.

And the real impact is the democratization and access to data. That's going to be unparalleled.

Tim Butara: So are you optimistic as to when we might achieve these benefits of democratization?

Chris Cooney: Well, as for when we might achieve democratization: the LLMs we have are already capable of doing everything that I just listed.

It's just a case of somebody doing it, you know, that's one thing. In fact, there's a small company based in Israel called Merlinn, with two Ns, a really great little company. The idea is you have a Merlinn Slack bot, and you can ask Merlinn: oh, there was an outage recently.

Tell me about X, Y, and Z, and it will send you messages describing exactly what went on. It's almost like having an engineer looking through the observability tool for you and surfacing the information. So it already exists in many ways, and it's just a matter of time, and of someone getting the funding, I suppose, to do it.

So I'm very optimistic about that. But your question was when we'll get the benefits, and that, I think, is going to take a little while, because people are going to have to adjust and work out what using these tools looks like. I mean, we've had GitHub Copilot in the engineering space, we've had gen AI in the engineering space, for a while now; I'd say about a year, maybe two years of people regularly using generative AI tools to build code.

And it still hasn't got particularly wide adoption. A lot of engineers that I know don't want to use Copilot; they don't want the code it generates, it's not something that works for them, they always find themselves tweaking it and prompting it. So the adoption of AI is barrier number one in that space.

And then barrier number two is: okay, we've adopted it, great, what do we do now? And that's when the benefit comes in. So the benefits will take a while to trickle down, but we will get them. I'm very optimistic that we will get them, but I think the benefits will take a few years longer than the actual adoption will.

Tim Butara: That makes sense, yeah. And that's actually a great note to finish this really fascinating conversation on, Chris. Just before we wrap things up, if listeners would like to reach out or connect with you, or learn even more about Coralogix, what's the best way to do that?

Chris Cooney: To learn about Coralogix, the easiest way is coralogix.com. C-O-R-A-L-O-G-I-X dot com. Nice and easy. As for learning about me, I lead a particularly dull life, so there's not a lot to learn, I'm afraid, but I'm on LinkedIn. If you just look for Chris Cooney, Coralogix, I'll pop up, and I'm regularly posting about this kind of thing: organizational dynamics, observability, DevOps. I have a background in DevOps and then software engineering and engineering leadership as well.

So I'm super interested in all this stuff. If you have any questions, I love getting messages on LinkedIn from people. I'm sad enough that it's a little boost for me, I love it. So yeah, feel free to reach out. I'd love to hear questions and insights from people.

Tim Butara: Chris, thanks again so much. It was really great having you here and speaking with you about this.

Chris Cooney: My pleasure. Thank you very much for having me.

Tim Butara: And well, to our listeners, that's all for this episode. Have a great day, everyone. And stay safe.

Outro:
Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes, and don't forget to share the podcast with your friends and colleagues.
