Alexandre Chabot-Leclerc ADT podcast cover
Episode: 116

Alex Chabot-Leclerc - Unlocking the potential of AI in R&D

Posted on: 14 Dec 2023

Alex Chabot-Leclerc is the Vice President of Digital Transformation Solutions at Enthought, a science-focused digital transformation consultancy.

In this episode, we discuss how the field of research and development (R&D) can tap into the power of new AI technologies in order to stay on the cutting edge of innovation. We focus primarily on how R&D jobs will be impacted, and how organizations can support their people to become digitally enabled and future-forward.


"AIs are really bad in general at going outside of what they've learned, which means by definition they know everything that humans know, but they don't know the things that we don't know. And the job of scientists is to figure out the things that we don't know."

Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, content and community manager at Agiledrop.

Tim Butara: Hello everyone. Thank you for tuning in. I'm joined today by Alex Chabot-Leclerc, VP of digital transformation at Enthought, a science-focused digital transformation consultancy. In today's episode, we'll be discussing how the field of research and development can unlock the potential of new and emerging AI technologies in order to stay on the cutting edge. And we'll have a particular focus on the impact of AI on jobs and all the new skills that people need to adapt to this impact. 

Alex, welcome to our show. It's great to have you as our guest today and get to discuss this with you. Do you want to add anything before we dive into the discussion? 

Alex Chabot-Leclerc: No, thanks for having me. I look forward to it. 

Tim Butara: Okay. Awesome. So the first thing we need to do is set the stage and some context for our listeners. What does the current R&D, or research and development, landscape look like when it comes to new and emerging technologies right now? 

Alex Chabot-Leclerc: It's a really good question. It changes a lot based on the domain. My and Enthought's main experience is in materials science, semiconductors, life sciences, like drug discovery, these kinds of things. There's already a lot of digital work on the data side, on the analysis side, on the simulation side, on the modeling side, data management, all those things.

But obviously, like everywhere else, there's this boom of using generative AI or trying to use generative AI for different use cases. So the tools change, but the problems kind of stay the same. 

The problems are, lots of tasks in science are very time consuming, because unlike, I don't know, maybe web development, at some point you've got to interact with the physical world. You have to make the sample, you have to make the drug, you have to run the in vitro experiments, you have to deal with animals, you have to deal with machines. So you have to go to the physical world, which means at the other end of that, or actually in the middle, you often have to wait, then put the data back into the computer, make sense of it, and connect it back to the input.

So there are a lot of opportunities there for connecting machines. In manufacturing, I think it's called Industry 4.0. I don't think we're at 4 in R&D; maybe we're at like 2 or something. 

Other challenges include navigating really complex datasets: images, DNA sequences, spectra, electron microscopy images as well, which are processed differently, very long, very large time series and so on. So yeah, there are new technologies that are promising, and old problems, I would say. 

Tim Butara: And I'm guessing that a lot of these are perfect use cases for, you know, the power of AI technology. So, you know, better data management, better connectivity. I'm guessing that these are a lot of the opportunities and use cases that new AI tools are positioned to address and kind of solve in R&D.

Alex Chabot-Leclerc: Right now, like, this month or this year, I see sort of two branches. There are the large language models, which are interesting, ChatGPT being the most famous example, and the very related domain of foundation models, which are the same underlying technology but not applied to language. They may be applied to images, for example. I'll get to that in a second. 

So the language ones are interesting because in science, the big medium of communication between humans, between labs and between researchers is documents: writing papers, writing patents, making presentations, writing press releases, these kinds of things.

And I was just in Japan last week talking to a drug discovery company, a drug development company, and any drug development project begins with some scientist reading all the papers, all the patents, all the presentations, internally and externally. It's an immense amount of work to make sense of all this text that is not really in a database. I mean, you can use Google Scholar and search for stuff, but it doesn't give you the information. 

So there's a lot of potential for large language models to summarize and extract information: extract, like, what is the molecular formula being discussed in this paper, extract information from tables. These are things we could already do with general programming and some AI that predates the large language models, but LLMs have a lot of potential for unlocking new pieces of information, new data that used to be in a database, made it into a paper, and now we have to get it out of the paper and put it back in a database so that people can make sense of it. 
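A minimal sketch of the extraction idea Alex describes: prompt a language model for specific fields and parse the reply as a structured record. Here `call_llm` is a hypothetical stand-in, stubbed with a canned reply rather than a real API, and the field names are illustrative assumptions:

```python
import json

def call_llm(prompt):
    """Hypothetical stand-in for a real LLM call; returns a canned JSON reply here."""
    return '{"molecular_formula": "C8H10N4O2", "melting_point_c": 236}'

def extract_fields(paper_text):
    # Ask for machine-readable output so the reply can go straight into a database.
    prompt = (
        "Extract the molecular formula and melting point (deg C) discussed in "
        "this text. Reply with JSON only.\n\n" + paper_text
    )
    return json.loads(call_llm(prompt))

record = extract_fields("Caffeine (C8H10N4O2) melts at around 236 degrees Celsius...")
print(record["molecular_formula"])  # -> C8H10N4O2
```

The point is the shape of the pipeline, unstructured paper text in, database-ready record out, not any particular model or vendor.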

Tim Butara: Yeah, like, you pointed to a few examples of this particular notion of all of these back and forths, right? We have to extract something from a database, then make something with it, and then put it back into the database. And as with other industries where AI is revolutionizing things, one of the biggest benefits that I see here is the time saved, right? You mentioned that, yeah, you're able to do that or a scientist would be able to do that on their own, but it would take them enormous amounts of time to go through all the papers, to go through all the documents related to everything.

And then there's this overload of information that humans are not really evolved for... you know, we're much better at uncovering complex insights based on a few data points, whereas machines and AI tools like that are primarily designed to process huge amounts of data.

So not only would it be that much more time consuming for a scientist to do that on their own, but the chance of error would be much higher, because their focus would be that much more dispersed through all of these... through just getting the baseline information rather than innovating based on that information. And I'm guessing that AI just enables you to start innovating earlier and start making use of the data earlier, instead of investing a lot of effort and concentration and time into just coming to terms with it in your head.

Alex Chabot-Leclerc: Yeah, we call it, this is a very nerdy term, but incidental complexity, which means, like, I know what I want to do, like, I know exactly the question that I have, I know where the data live, but it's so much work to get the data together, to process it so that I can answer my question. So a lot of the things that we build, and I think a lot of the potential of digital transformation, digital tools is to sort of reduce this tedium of going from question to answer. 

And you talked about making errors. That's another place where we're really excited, where AI and machine learning have a lot of potential. There's a lot of judgment in science, especially when looking at, I don't know, a spectrum, like a mass spectrum of some material, or an image of a culture. You're looking at a bunch of cells in a petri dish and then something is supposed to happen. Something is supposed to be a certain size, and then a human will look at them and decide: all right, now is the time, like, yes, I am satisfied with what happened in my petri dish, now is the time to move on to the next step; or it failed, I should go back to the beginning.

There's a lot of decision making that happens in the scientist's mind that sometimes takes decades to acquire, which means that the new scientist fresh out of grad school, who has only been doing research for 10 years, hasn't yet acquired this knowledge. And by connecting the experimental conditions that go into the experiment, the outcome of the experiment, and the scientist's expertise, it becomes possible to encode that expertise in a system. That helps the more senior scientists, so that they don't have to look at this culture for, like, four hours a day and count cells, but it also helps onboard new scientists faster.

And also, because it's done by a computer, you get much more repeatable, reliable measures. If there's an error, at least it's a consistent error, rather than a very high variability where you don't quite know if it's due to the experimental conditions or due to the human who's looking at, let's say, the culture in this case.
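A toy illustration of the repeatable, automated measurement Alex describes: counting cell-like blobs in an image with a fixed, explicit criterion instead of human judgment. This sketch uses NumPy and SciPy; the threshold value and the synthetic "petri dish" image are assumptions for the example:

```python
import numpy as np
from scipy import ndimage

def count_blobs(image, threshold=0.5):
    """Count connected bright regions: same input, same answer, every time."""
    mask = image > threshold                 # fixed criterion, no human judgment
    labeled, n_blobs = ndimage.label(mask)   # connected-component labeling
    return n_blobs

# Synthetic "petri dish": a dark field with three bright spots.
img = np.zeros((64, 64))
img[10:14, 10:14] = 1.0
img[30:35, 40:45] = 1.0
img[50:52, 20:22] = 1.0

print(count_blobs(img))  # -> 3
```

Real cell-counting pipelines are far more involved, but the property Alex points at is already visible here: any error lives in the threshold, which is written down and consistent, rather than in day-to-day human variability.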

Tim Butara: Yeah, those are some very, very good points, and they lead perfectly into the next area of our discussion today: the impact of these new technology trends on jobs in R&D. What you said just now is already veering into this area, right? Jobs will be heavily impacted; I'm guessing on one side that's positive, and on the other side it can also have negative consequences. Let's talk about that for a little bit. 

Alex Chabot-Leclerc: Yeah, I've been thinking about that for a little while, reading all the articles everywhere, like, AI is going to take my job and stuff like this, and I have a very different view of the role of AI in R&D, because there's very little, like, mechanistic work in research and development. It's a lot of people with high levels of education, Master's and PhD, who are at the boundary of human knowledge in many cases. And AIs are really bad in general at going outside of what they've learned, which means by definition they know everything that humans know, but they don't know the things that we don't know. And the job of scientists is to figure out the things that we don't know.

So I think it's not going to replace scientists for quite a while. But I think it has the potential of being much more of an augmenting tool. There's this comparison between cyborgs and centaurs: the cyborg mind melds with the machine, while the centaur uses the machine to augment the human ability.

I think there's a lot of potential for that and where that might affect jobs is, it's kind of like the difference between, this is not a perfect analogy, but like a farmer who's using a plow that they're pushing themselves versus a farmer that uses a bull to till the land versus someone who has a giant John Deere machine that does the thing, right?

They solve different problems. One of them works for small-scale things; some of them are much better for feeding thousands of people, millions of people. I think the scientists who are able to use those tools, and the companies that employ them, are going to accelerate and take over the market in a way, or at least have the potential to move so much faster than their competitors. So I think the impact on the job will be pressure, I hope in a way, for scientists to learn more digital tools, because otherwise they won't be able to keep up. 

Tim Butara: So again, it's not a question of replacement or overtaking jobs, but just that the nature of their jobs will change because of these new tools. 

Alex Chabot-Leclerc: I think so. Or that's what I see. And also that's what I hope, because, I mean, I'm not a practicing scientist day to day, but I have a PhD, so I identify with scientists. But that's also what I'm seeing with the companies that we're working with. The scientists are basically artists. The tools allow them to create more art.

That's not a perfect analogy, again, because a bunch of artists are being put out of work by tools like Midjourney or Stable Diffusion that can create actual, like, drawings and stuff. But here, maybe because the art is in the physical world, scientists are more immune to AI. 

But yeah, they're able to do so much more. We humans, all of us, are able to do so much more than an AI, so I think humans are safe for a little while until we meet... let's talk again next year and see what happened. 

Tim Butara: Yeah, I think that like every conversation about AI going on right now will need to be revisited in six months' time or a year's time because things are changing so fast that probably, like, we often joke that, you know, by the time that the episode gets released from when we record it, things will change so much that a lot of the stuff will not be as relevant as when we're actually talking about it. So, so that's always, you know, that's always a potential issue here. 

Alex Chabot-Leclerc: It's both exciting and exhausting. 

Tim Butara: Oh yeah, that's a very, very nice way of putting it, yeah. So, with all this in mind, how should organizations support their people to kind of tap into these new technologies as efficiently as possible and to work alongside AI rather than kind of going against it too much?

Alex Chabot-Leclerc: I think there are two main things. There are always many things, but this is a short discussion, I can't go through all of them. One part is education, and one part is to think about the data before thinking about the AI. So let me talk about education first. We've discussed and observed and read many accounts of projects where the tool, whether it's AI or more like a model of sorts, is really good, or has the potential of being very good, and then the humans completely disregard it. They're like, yeah, sure, okay, but I'm not going to use that, because of a lack of understanding. 

Humans, the scientists that we work with, the ones that I know, we don't like black boxes. There needs to be education about how the things work, which sometimes means building a version themselves, or learning to build maybe a much simpler version than what they might be able to buy or hire people to build. There's a lot of understanding that needs to be developed of how AI, and digital tools in general, work so that they can be trusted. And once they're trusted, that's where the virtuous cycle of using them, experimenting more, learning more, doing more begins; that's where the speed comes from.

And also, on education in general: there are a lot of great tools today for low-code/no-code solutions, either in the, quote unquote, programming space, like data manipulation and simulation, or in AI and machine learning. Those have a good on-ramp; it's easy to get started. But in the end, people who use them start thinking in code anyway. Whether you click on things or write code, it ends up being kind of the same, and there's a really hard wall with these low-code/no-code solutions where things become either unmanageable or impossible.

So I think learning some programming helps with, again, the idea of the black box, being able to self-serve, to do the experiments themselves. To bring back the analogy of the artist: take someone who can draw really well. You're not giving them a pen; you're presenting them with another person and saying, this person has a pen, tell them what to do, right? That's not going to make a very good drawing. They're much better on their own, because they know how to use the pen. 

A scientist who doesn't know how to program, who has to tell a programmer, build this thing, will make for a worse drawing, a worse program, a worse model, than if they were able to do it themselves. They might not build the full, like, fresco on the wall or something like that, but if they can make the sketches and say, this is exactly what I want, and give that to their programmer, there's a lot of value there. So that's part of the education. 

And the other one is about the data. Next year, or in two years, there will be a new AI thing which will have revolutionized everything, but the data will be the same. Data stays, technology changes. So one way to enable scientists and science-based businesses to do more with AI and new technology as it comes out is to have their data under control, so that they can feed the beast, feed the new machine with their data. With any new technology, if you can't give it data, then you're hamstrung; you can't really use the technology effectively. 

Tim Butara: Why? Why? Because of the data being constrained or what?

Alex Chabot-Leclerc: It's a good question. Let's say, because of this fact that I sort of started with about the physical world, which is, at some point, someone makes a recipe of something, like a plastic. And then they have to put it in a machine to characterize what's up with the plastic. Like, how does it bend? How does it resist temperature changes? Does it break? Et cetera. 

These machines can be connected to databases, to computer systems, so the data flows through. But in our experience, that's not that common. Often the machine will spit out an Excel file or a CSV file, and the scientists will look at it, make a figure, a plot, to learn what happened in the experiment. And then the knowledge goes into their head, and the data ends up in a folder somewhere, on Dropbox or Box or SharePoint. It will be there in the computer, but basically impossible to find, for them, for their colleagues, for the person who's going to replace them when they retire.

So that's what I mean: if that data is stored, so it is stored, that's part one, but also findable, accessible, labeled in the right way, connected to the inputs, then it becomes possible for the data to be valuable beyond the immediate use case. 
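One low-tech version of what Alex describes: storing the instrument's CSV output next to a small metadata record that links it back to the experimental inputs, so the result stays findable and connected later. This is a sketch using only the Python standard library; the file layout and field names are illustrative assumptions, not any particular product:

```python
import csv
import json
from datetime import date
from pathlib import Path

def save_run(out_dir, run_id, inputs, rows):
    """Store raw measurements plus a metadata sidecar linking them to their inputs."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Raw instrument output, as it came off the machine.
    data_file = out / f"{run_id}.csv"
    with data_file.open("w", newline="") as f:
        csv.writer(f).writerows(rows)

    # Sidecar record: who/what/when, connected to the experimental conditions.
    meta = {
        "run_id": run_id,
        "date": date.today().isoformat(),
        "inputs": inputs,
        "data_file": data_file.name,
    }
    (out / f"{run_id}.json").write_text(json.dumps(meta, indent=2))
    return meta

meta = save_run(
    "runs", "plastic-042",
    {"polymer": "HDPE", "temperature_c": 120},
    [["strain", "stress_mpa"], [0.01, 3.2], [0.02, 6.1]],
)
print(meta["inputs"]["polymer"])  # -> HDPE
```

Even something this simple means a colleague, or a future AI tool, can search the sidecar files and reconnect every measurement to the recipe that produced it, instead of finding an orphaned spreadsheet on SharePoint.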

Tim Butara: So, that's one of the main ways in which companies can support their people... companies and organizations and labs can support their people to work as effectively as possible with existing as well as potentially new AI tools that will come out in the future and to do this in a long term way rather than just in a short term manner.

Alex Chabot-Leclerc: Yeah. 

Tim Butara: Well, Alex, we're almost at the end of our discussion and I just, to kind of tie things together, and based on everything that we discussed so far, my last question for you is: what should a digitally enabled, future forward workforce look like? 

Alex Chabot-Leclerc: Such a good question. It's not just a bunch of programmers. They're definitely useful, but they think in a particular way, and I think that's not the end-all, be-all of doing research. I think it will be very multidisciplinary. I think the teams will be very mixed, with a range of skills across two spectra: the science dimension and the digital skills dimension. 

But I think no one will have zero. You won't have programmers with zero science, and you won't have scientists with zero programming or digital skills. I think the mix of those things together will really pay off. And it means, we think, well, I think, it's a lot easier to teach someone programming, in my experience, than to teach science.

So I highly encourage all the scientists to learn a little bit of programming, a little bit about dealing with data, a little bit about managing information and managing code. And I think this future digitally enabled workforce will have enough of a shared language that, coming back to what I was describing before about the artist and the other person with a pen, it'll be a lot more like Michelangelo with his team of artists painting the Sistine Chapel ceiling, right? He didn't do it all himself. There was a team of people who understood his intent and could help. 

Tim Butara: Well, I love this final art analogy. I think that we really brought it home and finished on a very strong note, Alex. Just before we wrap things up, if listeners would like to connect with you or learn more about you, where can they do that?

Alex Chabot-Leclerc: The best way is probably on LinkedIn. I'm Alex Chabot. And then they can check out Enthought's website at 

Tim Butara: Awesome. Well, Alex, thank you again for joining us. This has been great. And yeah, we were happy to have you. 

Alex Chabot-Leclerc: Pleasure. 

Tim Butara: Well, to our listeners, that's all for this episode. Have a great day, everyone and stay safe. 

Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes, and don't forget to share the podcast with your friends and colleagues.