Episode: 97

Michael R. Wagner & Ian Evans - Powering Major League Baseball’s infrastructure with bare metal

Posted on: 22 Jun 2023

Michael R. Wagner and Ian Evans are the co-founders of Metify, provider of Mojo, the bare metal as a service platform that powers the digital infrastructure of Major League Baseball.

In this episode, Michael and Ian tell us more about Mojo and how it has benefited Major League Baseball. We first define the concept of bare metal, what kind of use cases it's best suited for, and why now is the right time to be talking about it. 

The second half of the episode then focuses on Mojo and the MLB case study, in particular the key considerations when implementing the platform and what kind of specific benefits it has brought to MLB.


Links & mentions:

Metify’s website: www.metify.io

Transcript

“The challenge is the underlying tools that allow the update and the mobility and everything else that’s notoriously a pain in the neck in the data center. A lot of hands and feet working on these things all the time, they don’t want to deal with that, they don’t want all that staff in place to be able to do these things; they want a more DevOps centric approach.”

Intro:
Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, content and community manager at Agiledrop.

Tim Butara: Hello everyone, thanks for tuning in. Today is actually one of the rare episodes when I’m joined by two guests. I’m happy to welcome Michael R. Wagner and Ian Evans. They’re the founders of Metify and its Mojo platform. Mojo also powers the infrastructure behind Major League Baseball. And this is exactly what we’ll be discussing today. 

We’ll start by defining the concepts of bare metal and bare metal as a service. And then we’ll move on to talking about Mojo, and Mike and Ian will then tell us more about the Major League Baseball case study. Michael, Ian, welcome to the show, thank you so much for joining me today. I think we’re in for a great episode. Do you want to add anything before we begin with the questions?

Ian Evans: Ready to jump right in.

Michael R. Wagner: Nice to be here. Thank you.

Tim Butara: Awesome. So, the first question, I think we’ll start with you, Mike. And I want to ask you, can you first tell us a little bit more about what bare metal even is?

Michael R. Wagner: It’s different things to different people, frankly. But the agreed-upon definition from an industry perspective at this point is essentially a machine that’s singularly dedicated and not virtualized in any way. So, you’re taking a server and giving applications direct access to the CPU itself or to other peripherals and chips that may be on the server, so it can run in an optimized state.

And when I say there are sort of many definitions, it’s because there are many form factors that bare metal can take on. And for us specifically, the most important thing is that it has a BMC on the system board somewhere, which allows us to do our magic with the Mojo platform.

Tim Butara: Sorry, can you explain to those of us who are not aware what a BMC even is?

Michael R. Wagner: Oh. Ian, do you want to take the BMC?

Ian Evans: Yeah, I’ll definitely take it. So, basically it’s a baseboard management controller. Essentially, it’s a dedicated controller that resides on the server and allows a customer to connect directly to it in what we call a lights-out fashion. So, imagine the server being off, but with a power cable plugged in; the BMC keeps a persistent connection, which allows them to effectively manage that server through it. That includes powering it on, powering it off, running firmware updates; it basically manages the entire lifecycle of the server.

So, what we’re finding now with BMCs is that any enterprise server that’s taken seriously within large data centers or enterprises is usually going to have some type of baseboard management controller connectivity in it, to be able to manage the lifecycle of that server.
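
For readers who want to picture what this lights-out control looks like in practice: Mojo’s internals aren’t covered in the episode, but BMC operations like the ones Ian describes are commonly scripted against the vendor-neutral DMTF Redfish REST API. A minimal, hypothetical sketch – the host address, credentials and system ID are placeholders:

```python
# Hypothetical sketch: lights-out power control over a BMC using the
# DMTF Redfish REST API. Host, credentials and system ID are placeholders.
import requests

BMC = "https://10.0.0.42"     # the BMC's dedicated network address (placeholder)
AUTH = ("admin", "password")  # BMC credentials (placeholder)

def power_action(reset_type: str) -> None:
    """Send a standard Redfish ComputerSystem.Reset action,
    e.g. 'On', 'ForceOff' or 'GracefulRestart'."""
    resp = requests.post(
        f"{BMC}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
        json={"ResetType": reset_type},
        auth=AUTH,
        verify=False,  # many BMCs ship with self-signed certificates
    )
    resp.raise_for_status()

power_action("On")  # power the server on without anyone touching it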

Tim Butara: Ok, that makes sense, yeah. And what are the role and benefits of bare metal in the context of digitalization and digital transformation?

Ian Evans: You want to take it, Mike?

Michael R. Wagner: Yeah. The benefits in terms of digitalization. Well, so, it’s again use case driven. So, in particular, the fields that are driven by bare metal include high-performance computing. And now all of the AI-driven things, most of the edge driven devices. So, where we are playing most importantly is on the edge. And one of the use cases we’ll discuss today related to Major League Baseball is an edge, a hybrid application.

And that’s the other thing too – the term “data center” has really morphed. It’s much more than just a single place where all your compute and storage resides. Now it’s really stretched out to micro data centers, if you will, or edge POPs (points of presence). And, yeah, more than ever, having access to the bare metal – being able to discover, provision and manage those devices, those endpoints, wherever they are, whatever form factor they take – is more and more important.

And you can especially appreciate what we’re seeing now from a scale perspective – and this is a supply chain problem as well – but you’ve probably heard of a little thing called OpenAI, or ChatGPT-4. The workloads associated with ChatGPT-4 are highly GPU-driven, 100%. And the builds associated with that are all on bare metal.

And so we’re seeing some really interesting use cases come about related to that, and we’re doing all we can to fulfill demand associated with managing these large high-performance compute instances that include these GPU-driven applications. So, things are just kind of blowing up on the side of bare metal and we’re just trying to keep up. So, it’s a lot of fun.

Tim Butara: So, it’s definitely the right time right now to be discussing this. It’s just getting big.

Michael R. Wagner: It is. I feel like – Elon Musk’s Starship just launched the other day, and watching that massive rocket take off was really cool – I feel like the ChatGPT-4 rocket is still on the platform, but the engines have been lit. And, yes, we’re just beginning to experience it. I think it’ll be a revolution as big as the dotcom era, really, from an overall industry impact perspective – maybe even greater in some ways. So, it’s going to be a fascinating four, five years here. And we look forward to the run, it’s going to be fun.

Tim Butara: Hard to avoid talking about GPT and generative AI these days. But I agree with the point that it’s going to be even bigger than the dotcom bubble. I think right now we’re at a point where we can’t even see how big it’ll really get, because, as you said, we’re still on the platform. Once the rocket takes off, that’s when we’ll see how far it takes us, basically.

Michael R. Wagner: That’s right. It’s going to be a lot of fun. It’s one of those things where the use cases and the applications for society in general – we just can’t guess them. No one could’ve guessed that TikTok would be the big thing coming out of the dotcom revolution. You know, many years later, the morphing of where these technologies take us is just chaos-driven. But it’s going to be a really fun ride.

And the bottom line is, the workloads require massive bare metal as close to the consumer as possible, and that’s the part that’s going to get really interesting, I think. Because it’s an ever-connected society now. If you think about it, everyone is walking around with a very powerful computer that costs over $1,000 on them at all times, right. And that little computer, of course, is your iPhone or equivalent smartphone. So, yeah, I think we’re just scratching the surface; the engines have just been lit, and it’s going to be a lot of fun to try and help enable this growing infrastructure.

Tim Butara: Yeah, definitely an exciting time to be alive. Back to bare metal before we move on to Mojo. Mike, you were talking about this access and how everything has to be easy to use and easy to manage. And I’m guessing that this is where something like bare metal as a service comes into play, right? Maybe, Ian, you can tell us a little bit more about bare metal as a service, and also introduce Mojo.

Ian Evans: Sure. When I look at bare metal as a service, in my mind we’ve already established that customers need a lot of servers in a lot of these situations. For me it’s: how fast can they consume those servers? And not only that, but what type of intelligence does a platform have to be able to identify very specific attributes on those servers?

And the customers, of course, want to be able to do that from one area. They want to be able to quickly identify those assets, they want to build very specific workflows for those assets, and then they want to get their application up to an operational state as soon as possible. So, when I think about bare metal as a service, it’s the automation platform. But it’s also consumption, you know, how fast can they get it, and what type of tool is in place to allow them to not only identify, but also look at multiple different OEMs in the same way?
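
On the “identify very specific attributes” point: one way OEM-neutral identification is commonly done is by querying the BMC’s standard Redfish system resource, which returns the same fields whatever the manufacturer. A hypothetical sketch – address and credentials are placeholders:

```python
# Hypothetical sketch: identifying a server's attributes through its BMC in
# an OEM-neutral way, via the standard Redfish system resource.
import requests

BMC = "https://10.0.0.42"     # placeholder BMC address
AUTH = ("admin", "password")  # placeholder credentials

system = requests.get(
    f"{BMC}/redfish/v1/Systems/1", auth=AUTH, verify=False
).json()

# The same fields come back regardless of manufacturer, which is what lets
# a platform treat different OEMs "in the same way".
print(system.get("Manufacturer"), system.get("Model"))
print("CPUs:", system.get("ProcessorSummary", {}).get("Count"))
print("Memory (GiB):", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```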

And that’s one of our value adds with Mojo – and we’ll go a little bit into that – is that we think customers should be able to bring their own white boxes, bring their own servers, and there should be a platform in place that’s easy to use and allows them to consume those quickly, allocate them, put role-based access controls around them, establish a level of governance around all of those nodes, and then essentially carve them out to teams to do what they do best with them. So, a lot of efficiency-type stuff in there for our idea of bare metal as a service in general.

And Mojo is really built to be a governing layer of software over those nodes. It essentially establishes a chain of custody over all of them, so organizations also have a much better idea of how things are being placed. We want a tool positioned so that even, say, a CFO can go in and run a report and see exactly how that stuff’s being consumed, what the billables look like, chargebacks, you know. So, there are all these interesting pieces that we see added into the overall bare metal as a service value, and we’re continuously starting to push into those areas where we see a lot of that demand.
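
As a hypothetical illustration of that chain-of-custody idea: at its core it reduces to a ledger of node allocations that can be rolled up per team, which is what makes CFO-style chargeback reports possible. All names and rates below are invented:

```python
# Hypothetical sketch of a chain-of-custody ledger: every bare metal
# allocation is recorded, so usage can be rolled up per team for
# chargeback reports. All names and rates are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class Allocation:
    node_id: str       # which server
    team: str          # who it was carved out to
    pool: str          # where it lives (data center, rack, closet...)
    start: date        # when custody began
    daily_rate: float  # internal chargeback rate, dollars per day

ledger = [
    Allocation("node-001", "analytics", "dc-east/rack-12", date(2023, 4, 1), 18.0),
    Allocation("node-002", "analytics", "dc-east/rack-12", date(2023, 4, 1), 18.0),
    Allocation("node-003", "platform", "edge/ballpark-07", date(2023, 4, 15), 25.0),
]

def chargeback_report(as_of: date) -> dict[str, float]:
    """Accrued cost per team -- the kind of view a CFO could pull."""
    totals: dict[str, float] = {}
    for a in ledger:
        totals[a.team] = totals.get(a.team, 0.0) + (as_of - a.start).days * a.daily_rate
    return totals

print(chargeback_report(date(2023, 5, 1)))  # {'analytics': 1080.0, 'platform': 400.0}
```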

Tim Butara: So, as Mike pointed out earlier, Major League Baseball is exactly the type of client that was really well positioned for using bare metal as a service. And obviously we need to talk about that, because this is the meat of the episode. So, can both of you tell me a little bit more about how you ended up working with Major League Baseball, how they ended up choosing Mojo, stuff like that?

Michael R. Wagner: Yeah. So, we were brought in – we have a very broad partner network; that’s one of the things we really believe in, the channel model overall. Channel partners, for us, are really the lifeblood of our organization and where we’ve focused our sales efforts from the very beginning.

So, we have a channel partner based out of Toronto, Canada, by the name of Arctic. And they’re just an outstanding group of systems integrators, and they know the Kubernetes space in particular extremely well. They were really a foundational partner for Google Anthos and did a lot of Kubernetes installations across the globe.

And they brought us in – they knew about our bare metal expertise – and they were working with Major League Baseball and with Google on this very cool solution. Major League Baseball is in a significant partnership with Google, a far bigger partnership than we could ever dream of having with them.

So, really, the two partners we leveraged there, that brought us in, were Google and Arctic, specifically to assist with this bare metal part. Which is not easy to do – bare metal is difficult, especially remotely. So, it’s an area where we have a lot of expertise, and we were able to get things up and running in just a couple of weeks.

It was one of those situations where it was just a perfect fit. And the team at Major League Baseball is highly technical; they were their own professional services organization before MLB acquired them – a very wise move on MLB’s part, I must say. They’ve really got some excellent folks across the board there, both in networking and infrastructure and in their leadership organization…

It’s actually Major League Baseball Advanced Media, a separate group within MLB, that brought us on to do all of this. So that’s how the opportunity arose, and we knocked this thing out very quickly for them, and we’ve been growing with them now for three years. It’s crazy how quickly time flies, and every year we’ve had very nice growth with them – growth with them, that’s a fun thing to say. And now we’re in the minor league ballparks as well, and we continue to expand our footprint throughout their organization. So, it’s really cool.

Tim Butara: Awesome. So it helped you to get your foot in the door for this type of industry, basically. And as you said, it’s the perfect industry for something like Mojo.

Michael R. Wagner: You know, there are so many use cases. It’s the use case, right? So, this is an edge-driven use case with a ton of GPU acceleration required, because of the applications they run on the edge to give the fan the best experience possible. It’s driven by the baseball stadiums – the ballparks, as they call them.

So all their ballparks have 15 different input sources; six of them are high-speed cameras, there’s a bunch of lasers as well, and then there are other cameras that augment that. And that’s to track every possible statistic you can imagine from an on-field perspective. So, it’s the speed of the bat being swung when the pitch comes in. It’s the rotation of the ball coming off the pitcher’s hand. It’s the break of the ball – how much it actually curves from the pitcher’s hand until it meets the catcher’s glove.

So, all of these things are tracked and then uploaded into Google Cloud – every game, 7.2 terabytes of data per game. It all happens on an application called Statcast, which centralizes all of that, and they do all of their crunching up in the cloud as well. So it’s a perfect use case, where you have a remote facility that’s gathering a ton of data, and it needs to be processed very fast at the edge in order for the applications to work.

It’s actually called the Hawkeye application – Hawkeye is the name of the company, or the solution, that delivers the high-speed cameras. And then that rolls up into Statcast. So the entire solution is edge-based, and then it takes all that data up into the cloud for big data crunching and all that fun stuff.
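
To put those numbers in perspective, a quick back-of-envelope calculation (the 2,430-game figure is the MLB regular season: 30 teams × 162 games ÷ 2):

```python
# Back-of-envelope: season-long upload volume at 7.2 TB per game.
tb_per_game = 7.2
regular_season_games = 30 * 162 // 2  # 2,430 games per regular season
season_tb = tb_per_game * regular_season_games
print(f"{season_tb:,.0f} TB, or about {season_tb / 1000:.1f} PB, per regular season")
# -> 17,496 TB, or about 17.5 PB, per regular season
```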

So, yeah, it’s just a perfect example of a use case with all this edge-driven activity. It has to be managed somehow; you have to manage those servers some way, somehow. And one of the running jokes that applies now more than ever: what is the edge? Same as what is the cloud – it’s just someone else’s computer.

And the edge is a little harder, because for the most part, the applications that take advantage of the edge need to run on bare metal. So, how in the world are you going to discover, provision and maintain these remote compute POPs, these remote compute areas efficiently? 

So, more than anything, what we see is Mojo being leveraged to cut travel and expense costs related to having to go out and update a BIOS with a thumb drive. That’s one of the more common use cases, or problems, that we solve with the Mojo platform. You can picture the growing need for this type of low-level solution that works regardless of who the manufacturer is, given the massive growth in edge and the continuing demand for everyone to be connected at all times.
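
The thumb-drive trip Mike describes is exactly what a BMC makes unnecessary: the same Redfish API sketched earlier also exposes a standard firmware update action. Again a hypothetical sketch – the BMC address, credentials and image URL are placeholders:

```python
# Hypothetical sketch: pushing a firmware image to a remote server through
# its BMC with the standard Redfish SimpleUpdate action -- no thumb drive,
# no site visit. BMC address, credentials and image URL are placeholders.
import requests

BMC = "https://ballpark-edge-bmc.example"  # placeholder BMC address
AUTH = ("admin", "password")               # placeholder credentials

resp = requests.post(
    f"{BMC}/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate",
    json={
        "ImageURI": "http://repo.example/firmware/bios-2.4.bin",  # placeholder
        "TransferProtocol": "HTTP",
    },
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
# Redfish typically answers with a task monitor URI for tracking progress.
print("Update accepted, task:", resp.headers.get("Location"))
```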

Tim Butara: That sounds like it requires a lot of work, a lot of knowledge, a lot of things you need to be careful about and pay attention to. And you said that the MLB team was very technical. And I’m wondering – maybe, Ian, you’ll know this best – did you have any particular challenges or any particular things you had to consider when you were working together?

Ian Evans: When we look at Mojo, one of the things I mentioned earlier was governance. But we’ve also really put a lot of features into the software that prevent certain things from happening to the infrastructure that are not, of course, in the best interest of the customer. So, as an example, if you’re running a firmware update and it goes wrong, you don’t want that to cascade and break the entire data center, these sorts of things.

So, we’ve put processes in place that allow a set of granular controls and approvals. Certain groups can go in and process things like firmware updates. Let’s say you have an upstream system that only needs to view the infrastructure for reporting purposes – you can set up a user for that.

So we’ve set up that granular approval and role-based access control to fix what we consider a major problem: who’s really using the hardware? Who is accessing it, who is doing certain things on it? This allows them to effectively control and see that. So that’s one of the biggest operational challenges, I would say, for a lot of organizations – just what’s being done and who’s using this stuff.
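
The role-based model Ian describes can be pictured as a permission check in front of every lifecycle action – a read-only role for upstream reporting systems, an operator role for firmware updates. A minimal sketch with invented role and action names:

```python
# Hypothetical sketch of role-based control over server lifecycle actions.
# Role and action names are invented for illustration.
ROLES = {
    "viewer":   {"view"},                              # e.g. upstream reporting systems
    "operator": {"view", "power", "firmware_update"},  # trusted admin groups
}

def authorize(role: str, action: str) -> None:
    """Raise unless the role is allowed to perform the action."""
    if action not in ROLES.get(role, set()):
        raise PermissionError(f"role '{role}' may not perform '{action}'")

authorize("viewer", "view")  # ok: reporting systems can look, not touch
try:
    authorize("viewer", "firmware_update")
except PermissionError as e:
    print(e)  # role 'viewer' may not perform 'firmware_update'
```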

The other one, I think, that’s really important is isolating assets. The way we look at it from Mojo’s perspective is that assets can be placed anywhere, and customers want to be able to determine where those assets are pooled. So we’ve introduced a part of the product that allows them to pool those assets – it could be a data center, it could be a closet, it could be a particular rack, whatever the customer wants it to be.

And then that’s classified in a pool, and those groups can effectively start to use those servers for whatever they want. They can put Kubernetes on them, do single OS installs, use them for development purposes, whatever they want. So, that granular compartmentalization is critically important.

And the other thing we’ve also paid very close attention to, and that’s in the best interest of the customer, is the case where they’re already accustomed to public cloud workloads. They’re used to consuming public cloud and not having to pay attention to the underlying infrastructure, what’s under the hood – we want that experience to be very close.

So, if they’re moving from a public cloud platform and they’re consuming stuff with Mojo, they want that same feel to be in place, because most of these customers don’t want to deal with firmware, they don’t want to deal with complex BIOS updates and all this stuff. They just want to click a button, execute a workflow, and have it move – and they want to do everything across those systems in a very unified fashion.

So, our focus has really been on taking that server from power-up to an operational state: it’s powered on, then it goes through BIOS updates, it goes through firmware updates, it goes through specific workflows – and the customer can define all of these on their own accord.

So it really makes that process a lot easier, where they look at it as more: ok, I’m just going to run this workflow, and it’s going to take these 25 servers and do all these specific things. That would normally be a very time-consuming and also frustrating process. So we’ve really focused on removing that level of complexity – and just sheer frustration. Because when people think bare metal, it’s: ok, this is a great idea, but I know these things are going to be a major pain for us.
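
That 25-servers-one-workflow experience can be pictured as the same step sequence fanned out across nodes in parallel. A hypothetical sketch using only the Python standard library, with stub steps standing in for real BMC-driven operations:

```python
# Hypothetical sketch: fan the same bring-up workflow out across 25 servers
# in parallel. The steps are stubs standing in for real BMC-driven
# BIOS/firmware/OS operations.
from concurrent.futures import ThreadPoolExecutor

def bring_up(node: str) -> str:
    for step in ("power_on", "update_bios", "update_firmware", "install_os"):
        print(f"{node}: {step}")  # a real system would call the node's BMC here
    return f"{node}: operational"

nodes = [f"node-{i:03d}" for i in range(1, 26)]  # the "25 servers" example
with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(bring_up, nodes):
        print(result)
```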

So we’ve focused on every one of those little things, and we’ve really positioned Mojo to tackle that in a “keep it simple, stupid” type of fashion. Just keep it very simple, very usable, give the customer the best possible experience as they move through it. And it’s about the application – allow them to get the application up as quickly as possible, so the end user can start to consume, with little emphasis on exactly what’s going on under the hood.

Tim Butara: Yeah, I mean, I think user experience is definitely one of the really key things here – and, as we talked about before, maybe it’s the CFO that needs to come in, and very often it’ll be non-technical users working with this. And that’s why this is so important.

And the other key, you mentioned this granularity, this customizability. And that’s just a no-brainer if you have an infrastructure or an ecosystem where you have so many different options. And people are used to the things that they’re used to, and they don’t want to shake up their entire infrastructure just to implement something new.

Ian Evans: Yeah, exactly. And that’s a question a lot of these data center directors, people who are in charge of these infrastructures, are getting from upper-level management: ok, why would I invest in swaths of infrastructure when I can just do this stuff in public cloud?

And they know that the challenge is the underlying tools that allow the update and the mobility and everything else that’s notoriously a pain in the neck in the data center. A lot of hands and feet working on these things all the time, they don’t want to deal with that, they don’t want all that staff in place to be able to do these things. They want a more DevOps centric approach, where they have somebody who has the skill, they can do all the stuff remotely, they can position these servers and they can start to consume them quickly.

So, if you can’t have that conversation successfully, the overall value of building a private infrastructure becomes a lot harder to talk about. We think Mojo is positioned really well for that. It gives infrastructure managers, the people who manage these infrastructures, a much better discussion point when they make these decisions about starting to use infrastructure for those purposes.

Tim Butara: Yeah, that makes a lot of sense. Well, to circle back to Major League Baseball before we jump off the call, can you share any of the specific wins that MLB has achieved since they started using Mojo and working with you? I know, Mike, that you already touched on some of them, but can you share something that will make listeners who are contemplating contacting Metify go: ok, we need to work with them?

Michael R. Wagner: There’s an outstanding article that Kevin Backman, who was the principal systems architect on the project overall, wrote for Medium. It essentially walks through the benefits and everything they got to experience by moving their application stack over to bare metal. I can highlight a few things.

They were able to eliminate their virtualized footprint completely. Everything had been on VMs before, and they removed all of that. So that’s a significant cost savings, when you can do away with the VMs – depending on how those are being delivered. So that was a big one.

And then the travel and expenses related to having to go to the stadiums – that was also a big cost saver for them. And then there are certainly the operational soft costs, which are difficult to compute, associated with just being able to do things quickly instead of having to dig around, open multiple tools and do all the acrobatics previously required for these low-level changes to BIOS and firmware.

But it was one of those things where, again, the most important benefit was getting the application stack installed on the best architecture for delivering the applications. The fact that we ended up saving them a bunch of money, and that it’s a very easy tool to use that makes their lives much easier – those are really cool benefits. But for us, it’s about using the optimal architecture, period.

We know that the Mojo platform enables a lot of simplicity, and it brings with it some soft benefits in time and labor – instead of spending your time on all the manual busywork, you get to focus higher up the stack. So, that’s a great thing. But for us, the most important thing is to approach each customer with the use case in mind and understand exactly what the optimal low-level infrastructure is to make their applications run as well and as reliably as they can.

Tim Butara: Ian, anything to add here maybe?

Ian Evans: Mike managed to get most of it in there. For me, the most important aspect is really time savings. Because the one thing about bare metal that’s often overlooked when people are looking at all the tools is that it’s incredibly complex – probably one of the most complex things you can possibly manage within the data center.

Because you do have all these different OEMs, and a lot of different variables that come into how a system comes online, how it’s planned for future use, how it’s consumed. So our focus really is, and continues to be, adding additional levels of intelligence and usability into the software platform, with the goal of making it an experience where the customer doesn’t have to look at these individual pieces of hardware anymore.

You know, they look at it more like: I need this set of constraints, that set of constraints is going to enable this application, and I need to build a workflow around that. That’s our focus right there. And we feel like we’ve moved in that direction very nicely, and we’ll continue to do so in the future.

Tim Butara: So, it’s more treating it as a platform rather than just a single tool?

Ian Evans: You got it. Exactly.

Tim Butara: Well, that’s actually a great note to finish on. Thank you so much, Mike, Ian – it was awesome to have both of you on and to get both of your unique perspectives, the business perspective as well as the more technical one. Before we jump off the call, before we wrap it up: if listeners would like to reach out and learn more about Metify, or learn even more about Mojo, where can they do that?

Michael R. Wagner: Yeah, the best way is just to hit our website – it’s www.metify.io. Check us out there. They can schedule a demo directly with the calendar link on our site, or even just a phone call as well. So that’s the best way to reach us.

Tim Butara: And also, can you send over the article that you mentioned before, so that I can include it in the show notes?

Michael R. Wagner: Absolutely, yup. I will get a link for that.

Tim Butara: Awesome. Well, thanks again so much, it was great having both of you on.

Michael R. Wagner: It’s a pleasure, thank you.

Ian Evans: Thank you. Pleasure.

Tim Butara: Well, to our listeners, that’s all for this episode. Have a great day everyone, and stay safe.

Outro:
Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes and don't forget to share the podcast with your friends and colleagues.