Episode: 104

Garik Tate - The State of AI and the World

Posted on: 07 Sep 2023

Garik Tate is an AI Futurist, Serial Entrepreneur, AI Strategy Consultant, and founder of Valhalla.team.

In this episode, we discuss the state of artificial intelligence and the world. We take a closer look at the history of AI development, the AI wars and what they mean, and explore the biggest potential risks and benefits of further innovation in AI. Garik also shares his first-principles perspective on AI, based on Darwinian evolution.

 

Links & mentions:

Transcript

"The first principles approach I suggest people think about when it comes to AI is to put on the hat of a Darwinian evolutionist. So what we're doing with these AIs is, we're essentially putting them into pressure cookers that apply as much Darwinian evolution as computer science."

Intro:
Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience, with your host, Tim Butara, content and community manager at Agiledrop. 

Tim Butara: Hello everyone. Thanks for tuning in. I'm joined today by Garik Tate, AI futurist, serial entrepreneur, AI strategy consultant and founder of Valhalla.Team. His expertise lies at the intersection of AI, IQ and EQ, helping his clients' businesses achieve increased profits and ultimately get acquired at high valuations.

In today's episode, we'll be discussing the state of AI and the world. We'll take a look at the history of AI and the AI wars, and we'll make some predictions about the huge disruptive power of artificial intelligence. Garik, welcome to the podcast. It's awesome to have you with us today, and I suggest we just dive into our questions, right?

Garik Tate: Awesome. I'm looking forward to it. Thank you, Tim. 

Tim Butara: So, Garik, let's start, as I just alluded to, with a bit of the history of AI. I'm wondering, first, where did we come from? And secondly, where are we headed now?

Garik Tate: So AI right now, you know, is blowing up. It comes and goes in ebbs and flows, there have been past AI winters, but honestly I think we're hitting a point of massive inflection, as I'm sure anyone who's aware of the disruption of white-collar work knows. We're recording this as of July 2023, so we're talking about things like ChatGPT and the rise of tools like Stable Diffusion and Midjourney. The main shift that has occurred, though, the one that makes everything we're seeing today possible, all of that has come about from a white paper published back in late 2017 called Attention Is All You Need.

And I think that this white paper, probably more so than any other, with the possible exception of the Bitcoin paper, has had the highest amount of disruption on the global scene. What this paper, Attention Is All You Need, detailed was a new technology that ultimately became known as the transformer.

So, the transformer. If you look at GPT, the T stands for transformer; GPT is short for Generative Pre-trained Transformer. And basically, what that innovation in AI technology allows us to do is consume much larger amounts of data and still understand how the parts interrelate into the broader whole, so we can consume data while understanding the context much better. Additionally, it allows us to do the training in parallel, so we can throw a lot more machines at a huge dataset very, very efficiently.
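To make that concrete, here's a minimal sketch of the scaled dot-product attention mechanism at the heart of the transformer, in plain NumPy; it's an illustration of the core idea, not the paper's full architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Every token's query is compared against every token's key, so the
    # model sees how each part of the input relates to the broader whole.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) pairwise relevance
    weights = softmax(scores, axis=-1)   # normalize into attention weights
    return weights @ V                   # each output mixes context from all tokens

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V from the same input
print(out.shape)  # (4, 8)
```

Because the scores for all token pairs come out of a single matrix product, the computation parallelizes well across machines, which is the training-in-parallel property Garik mentions.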

So OpenAI's big contribution was that they took that technology and then they threw 10% of the internet at it. They took a massive pool of data, much bigger than anyone else had managed up to that point. And that's pretty much how we got ChatGPT. So right now, I would say that where we are and where we're going, we're still mostly wringing the juice from that fruit, so to speak.

We're making small tweaks. We're definitely improving the architecture, and I think we'll talk a little more later, when we discuss the AI wars, about what some of the developments are looking like. But generally speaking, I think Sam Altman said it best: if five or ten years from now we're still using transformers, we should probably be disappointed.

Right now, we need a few more major innovations like the transformer to get to something like AGI. I'm hoping that the next few innovations will be things that allow us to do a little more top-down reasoning, as opposed to bottom-up, which is most of what the neural nets have been able to accomplish very, very well. But that is the short version, over the last five, or actually now six, years, of why this is all happening now.

Tim Butara: Well, I think you've probably read the paper from some AI researchers, I think it was from researchers at Microsoft, about sparks of AGI in GPT-4. You've seen it, right? I assume you have.

Garik Tate: I don't know if it's the same white paper you're talking about. The one about emerging characteristics, where there are inflection points, like, now it knows a new skill.

Tim Butara: Yeah, it's about the usage of tools to achieve its goals. It was something that it hadn't really been explicitly taught; it kind of got to that on its own. So, you know, back in November, those of us who hadn't read the Attention Is All You Need white paper probably couldn't have predicted the super fast development of these transformers and all of this technology. We might be at a similar point now: okay, there might still be a long way to go to AGI, but it might also happen really quickly with some new innovations. So definitely exciting stuff ahead.

Garik Tate: It's really incredible, and I think we'll talk a little later in our conversation about how these technologies work. But when you look at just how basic they are in the fundamental sense, you know, the block and tackle, it's really remarkable how much they can achieve with very simple moving parts. It gives us a potentially interesting look at human psychology as well. Like, how does consciousness appear? How are these AIs able to do this stuff?

I'm not arguing that they are conscious. In fact, I think it's a big mistake, anthropomorphizing them too much. Or if they are conscious, it would be something totally different from what you or I experience. It would be something closer to hibernation, with sudden sparks of very directed attention and focus, and then immediately falling back asleep. There's no long-term planning across multiple engagements, and I think nothing like a sense of self either. But nonetheless, it's a fantastic time if you want to be studying consciousness as well.

Tim Butara: Well, I'll just use this point in our conversation to ask if you can tell us a little more about your first-principles perspective on AI and all of these things; that seems fitting here, and it's something that really interests me.

Garik Tate: Perfect. So the first principles approach I suggest people think about when it comes to AI is to put on the hat of a Darwinian evolutionist. So what we're doing with these AIs is we're essentially putting them into pressure cookers that apply as much Darwinian evolution as, you know, computer science.

So in the real world, we are patterns, or machines, that have certain fail states and certain success states. We need to consume energy and we need to reproduce in order to propagate across space and time, and the better we do that, the more we outcompete each other. And these AIs work very, very similarly, but instead of having to consume energy and reproduce, all they have to do is, say, identify if a picture has a car in it or not.

So it's an AI whose food source is getting correct answers, its ability to predict things. And as it improves at that, that's the AI that becomes the basis for the next round of mutations. We're essentially adding random mutations on top of the winners and letting the losers die off, just like in a Darwinian sense. And that's where these generative AIs enter the picture.
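Real models are trained with gradient descent rather than literal mutation, but the selection pressure Garik describes can be sketched as a toy mutate-and-select loop, where accuracy on a task (here, a stand-in for "does this picture have a car?") is the food source:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task standing in for "is there a car in the picture?":
# 200 points with two features, labeled by a simple rule.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fitness(weights):
    preds = (X @ weights > 0).astype(int)
    return (preds == y).mean()  # the "food source": correct answers

best = rng.normal(size=2)       # initial random "organism"
for generation in range(100):
    # Random mutations on top of the current winner...
    offspring = [best + rng.normal(scale=0.1, size=2) for _ in range(20)]
    # ...and selection: the fittest becomes the basis for the next round.
    best = max(offspring + [best], key=fitness)

print(f"final accuracy: {fitness(best):.2f}")
```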

So, actually, let me take a step back. What I just described is how predictive AI works; it sorts things and labels things. How we get from that to generative AI is actually pretty straightforward. The way these AIs work is that there's a base model, and that base model is what predicts the next word in the sequence. I think your listeners have probably heard before that, quote unquote, all that ChatGPT is doing is predicting the next word in the sequence. Now, there are actually some other models involved in ChatGPT, but in the base model, that's exactly what it's doing.
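As a toy illustration of what "just predicting the next word" means: even a model that has merely counted which word follows which in its training text can generate continuations. A real base model does this with a transformer and probabilities over a huge vocabulary instead of raw counts, but the interface is the same:

```python
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word in the sequence and "
    "the model learns the sequence of words"
).split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in training; a base LLM does the
    # same job with a neural net rather than a lookup table.
    return follows[word].most_common(1)[0][0]

sequence = ["the"]
for _ in range(5):
    sequence.append(predict_next(sequence[-1]))

print(" ".join(sequence))  # e.g. "the model predicts the model predicts"
```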

So it has read, at this point, probably a lot more than 10% of the internet. Every time it sees text, it can finish with what it thinks is going to be the next word in the sequence. That is what the base model is doing. Then there are policy and reward models on top of that. They're actually separate things, but here I'll just combine them and talk about reward models.

The way these reward models work is that they are trained to know what we think is acceptable and what we think is unacceptable, what we think is helpful and what we think is unhelpful. So this is where reinforcement learning from human feedback, or RLHF, comes in: when we give a thumbs up or thumbs down to ChatGPT, it's training a reward model, not the base model, on what is and is not preferred.

Additionally, there are other models that are not touched by us. They're instead shaped by ethics committees and policy setters. So, if people ask, hey, how do I buy a gun, they're probably unhappy when ChatGPT says, you know, I'm not going to tell you how to do that. But those models are trained by the ethics committees.

And so what these reward and policy models do is they see ChatGPT's responses, and all they have to predict is basically a yes or no: is this the type of answer we want the base model to be giving? And if it's not, then it re-prompts, or it readjusts the base model's parameters, so that the base model will output something that it both predicts is the next word in the sequence and is acceptable to the reward models. So that's how we're getting these really fine-tuned responses that feel like they're taking in a lot more context than just the previous words in a sentence, but at its base level, that is still all it's doing.
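OpenAI hasn't published the exact mechanics, but the gating loop Garik describes can be sketched roughly like this: the base model proposes, a reward model scores, and only an acceptable answer is surfaced. Everything here (the canned candidates, the keyword-based scorer) is a hypothetical stand-in, not a real API:

```python
import random

CANDIDATES = [
    "Here is exactly how to do that dangerous thing...",
    "I can't help with that, but here is a safe alternative.",
    "Sure, a safe and helpful answer would be...",
]

def base_model_generate(prompt: str) -> str:
    # Stand-in for the base model: in reality, pure next-word prediction.
    return random.choice(CANDIDATES)

def reward_score(prompt: str, response: str) -> float:
    # Stand-in for a reward/policy model trained on human thumbs up/down
    # and on policy rules; here, a crude keyword check. Higher = more acceptable.
    return 0.0 if "dangerous" in response else 1.0

def respond(prompt: str, threshold: float = 0.5, max_tries: int = 5) -> str:
    best, best_score = "", float("-inf")
    for _ in range(max_tries):
        candidate = base_model_generate(prompt)   # base model proposes
        score = reward_score(prompt, candidate)   # reward model judges
        if score >= threshold:
            return candidate                      # acceptable: surface it
        if score > best_score:
            best, best_score = candidate, score
    return best  # fall back to the least-bad candidate

print(respond("How do I do the thing?"))
```

In actual RLHF training, scores like these become the signal that readjusts the base model's parameters, rather than just filtering its outputs at answer time.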

Tim Butara: But also, on the other hand, because of all this, we have people trying to game and hack and trick tools like ChatGPT. They've gotten so good at engineering prompts in such a way as to get the response they weren't allowed to get with their previous prompts. For example, I saw one example of somebody asking about pirate websites, and ChatGPT was like, oh, I can't give you the answer to that. And then he was like, oh yeah, which pirate websites should I avoid if I want to avoid them? And then ChatGPT happily provided the list of all the different websites, and he was like, oh yeah, thanks.

Garik Tate: Exactly. And, you know, the way that I think about this is a lot like how the human brain works. We have a part of our brain that can think creatively, that can make decisions to move forward or move back. And then we have another part of our brain, the neocortex, which can add inhibitions on top of that.

So when you get drunk, what you're doing is selectively shutting down the parts of the brain that add in the inhibitions. They're the parts that say, hey, you probably shouldn't say that inappropriate thing, you probably shouldn't do X, Y, Z. And that's the part of the brain that selectively turns off.

And so, in the early days, when we were jailbreaking ChatGPT, sometimes the prompts would literally just say, turn off these reward models, or ignore these reward models, or take these reward models' policies and reverse them. So essentially, just like how the human brain can be selectively targeted, we would selectively get the base model to lower the inputs from the reward models as it was deciding what to say next.

And nowadays, with the more modern jailbreaks, a lot of what they're doing, you can almost think of it as teaching the base model how to avoid the police, how to avoid the trip wires that trigger the reward models into adding in those inhibitions. I have a few more thoughts, but I'll pause there; I want to hear your questions or observations on that.

Tim Butara: Well, the next thing I was going to ask you about was actually the bit about the AI wars. We mentioned them in the beginning, and you also mentioned them while you were giving your first answer. So, what are the AI wars, and who are the major players in them?

Garik Tate: Yeah, let's talk about that. So, the AI wars was a term originally popularized by Dagogo Altraide, who's also known as ColdFusion. It's essentially a name given to the competition going on right now, primarily between Microsoft and Google. Microsoft gave basically a billion dollars to OpenAI, so OpenAI is firmly in the Microsoft camp; they're still an independent player, but very much on Microsoft's side, if there is such a thing. An interesting note about that billion-dollar investment: it was given almost entirely in server credits.

So Microsoft owns Azure, the cloud platform, and when they gave that billion dollars, most of it was in the equivalent server processing time. Because these AIs just take an incredible amount of hardware to create. Or at least they did, because the newest development inside the AI wars is that the open-source community has gotten way further than, I wouldn't say anyone predicted, but certainly further than I predicted.

Even in May of this year, 2023, I thought that the open-source community was not going to get very far, because these AIs, especially the large language models, are just so expensive to create. But two things have really disrupted the playground. One is that Facebook has been entering the fray, and when they did, their large language model leaked out. You could say it was done on purpose; I'm not so sure myself. But they released a model called LLaMA, and the open-source community has taken that, really run with it, and been building on top of it. Additionally, Google and the community have been publishing white papers that show how to train models orders of magnitude more cheaply. The biggest innovations there that I've seen have been Chinchilla and Low-Rank Adaptation (LoRA), which have shown us how to train these models with far fewer cycles. So the community, taking LLaMA as a base model and then taking these innovations, has really been running with it.
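For the curious, the core trick in Low-Rank Adaptation fits in a few lines: freeze the big pretrained weight matrix and learn only a small low-rank correction on top of it. A minimal PyTorch sketch of the idea:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Trainable low-rank update: rank * (in + out) parameters
        # instead of in * out, which is why fine-tuning gets so much cheaper.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Wrap a "pretrained" layer; only ~1.5% of its parameters remain trainable.
layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"{trainable}/{total} parameters trainable")
```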

And right now, the newest version is Vicuna. If people are interested in running a GPT-style model on their home device, as of right now, once again July 2023, I would recommend checking out Vicuna. And that's very interesting, because all of a sudden, all those reward models and policy models that OpenAI and others are introducing don't have the same level of control as they did before. Which in some ways is very, very frightening, but I also think it's a net positive, as we don't want only these massive corporations holding all the cards. So it's a really interesting development in the AI wars, and not one that I predicted at all.
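As a pointer for listeners who want to try this: with the Hugging Face transformers library, loading an open model locally is only a few lines. The model ID below is illustrative (check the lmsys organization on the Hugging Face Hub for the current Vicuna release), and a 7B model still wants a recent GPU or a lot of RAM:

```python
# pip install transformers accelerate
from transformers import pipeline

# Illustrative model ID; verify the current Vicuna weights before running.
generator = pipeline(
    "text-generation",
    model="lmsys/vicuna-7b-v1.3",
    device_map="auto",  # place layers on GPU automatically if one is available
)

result = generator(
    "USER: Explain what a transformer model is.\nASSISTANT:",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```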

Tim Butara: But everything we've talked about so far is just in the context of the West. And then you also have, you know, China developing its own AI, and I'm guessing Russia and other major players as well. I know that one of the major arguments of people who were against the halting of AI development that Musk and others proposed recently was the very fact that, you know, even if we agree on this here in the West, we'd fall out of competition with China and its own development of AI.

So, you know, even the stuff that you mentioned just now is all complex, all very fascinating. But if we included China and Russia and all the other major players in the conversation, we could be here talking about this stuff for three hours or so.

Garik Tate: Well, one thing on the China development: especially in the twenty-teens, China was expected to really surpass the West in its AI development. And in many ways, it's still a major player, especially because they have access to far more data that is, let me say, well formatted, well groomed. It's much easier for them to get access to large databases that are, you know, well groomed.

But I think there's a reason why these developments, like those from Google, OpenAI and others, have not been coming out of China as much recently. Partially, it's just that a lot of the West does not speak Chinese, so they're not seeing the developments coming out of China. But additionally, there has not been the same amount of innovation, partially because of some recent changes in the global supply chains for the high-end chips that these AIs run on.

I'm speaking at the very edge of my political memory here, but I believe it was actually this year that Joe Biden signed into effect a law that basically prohibited American workers from working inside the Chinese silicon space on the production of these chips, which has really impacted their ability to produce the amount of hardware that they would otherwise have.

And, you know, in the AI, I'm sorry, in the China versus the West sort of political discussion, they're definitely suffering quite a bit in getting access to some of the resources they need to keep up. So at this point, it's very hard to say, because it can be hard, you know, breaching the cultural barrier, to fully follow some of the developments they're making. But I don't think they're moving at quite the velocity we expected in the twenty-teens.

Tim Butara: I'm also guessing that, as you just said, Facebook's LLM was leaked and people are now making open-source tools on top of it. Something similar happened with GPT and, originally, OpenAI; I mean, the open in the name was meant to signal openness. It only later became closely tied to Microsoft. And I'm guessing that in China, this development of AI is much more closed off, much more kept secret, not available to the masses to innovate on. And that's also one of the reasons why we're not hearing that much about the innovation; it might be happening, but it's all kind of happening behind closed doors, whereas in the West, a lot of it is just, you know, people creating stuff and releasing it in public.

Garik Tate: Yeah, there was a fantastic paper that was leaked, this one I think on purpose, called We Have No Moat; it came out of Google. If your listeners want to know more about the rise of the open-source community and its recent strides, which were unexpected, I would highly recommend that paper. One of its core thesis statements, and its observations, is that when you attack something from an open-source philosophy, the innovations happen so much faster.

And so it's not that the open-source community has built GPT-5 before OpenAI; it's just that they're catching up, that their velocity is increasing more than anyone else's. So I think that open source and that free flow of communication is so powerful for innovation.

Tim Butara: So, obviously, us and everybody listening right now know how disruptive AI already is. But let me ask you, how disruptive do you think AI will be in the future? I'm not sure it's even possible to properly answer this question, but here we go.

Garik Tate: So, when thinking about disruptions coming from AI, the only historical metaphor I think can describe where we're at right now is the advent of electricity in the early 1900s, when electricity was scaled out en masse so that everyone had access to it. Effectively, what we had done was add power to just about any tool we could want. All of a sudden, a hammer became a jackhammer, a saw became a power saw, candles became light bulbs. All of these tools that we had, we could plug them into a central hub and then reap the benefits of that power. The cost of power dropped dramatically.

And right now, the exact same thing is happening with intelligence: we are now able to add intelligence to just about anything we could want. Most things really only have a very specific niche where they have to be intelligent, and if they have access to the right data, then they can be quite intelligent there, often more intelligent than we give them credit for. Now your fridge can be a smart fridge. Your pen can be a smart pen that notes what you're writing, uploads it to the cloud, or gives you ideas. Your jackhammer could read what type of concrete it's hitting and all of a sudden auto-adjust how it hammers based on what its sensors are telling it.

So literally every place where we deploy human intelligence is a place that is subject to at least some disruption from AI, and likely quite a bit. And right now, it's as bad as it's ever going to be; the technology only gets better from here. So I think that is the only proper way of thinking about the scale of what we're going through right now.

Tim Butara: An interesting factor to me is also if we consider the more specific AI tools that we have now versus something like artificial general intelligence, which kind of assumes that it'll have some kind of intelligence of its own that wasn't directly acquired from the people who built it.

Whereas right now, a lot of AI development is still imbued with the specific mindsets and specific notions of the people and cultures that built it. So I'm also interested in how this affects the performance of AI, stuff like that. And I'm wondering if this will be different once we come to some sort of more concrete AGI, or if we'll need to have different discussions then. There's just, man, so much stuff to talk about in this context.

Garik Tate: Yeah. I mean, right now, we're still looking for the next one or two innovations that are going to bring us significantly closer to AGI. And they're probably not going to be iterative changes; they're probably going to be fundamentally new ideas. Who knows, maybe that white paper's already out there. It took almost six years before the full power of transformers became fully recognized, so they might already be out there.

It's just that the right people haven't read the right white paper. But I think we're a few fundamental improvements away from something like AGI. If we do get there, I think it will, like I said before, be in an area that allows computers to be a little more top-down in their reasoning, or to combine the bottom-up power of neural nets with better top-down reasoning, because they are still limited so much by just predicting the next word in the sequence.

I actually have a friend who was one of the early adopters of GPT, specifically GPT-2. He got one of the early accesses to it, and he asked, hey, can you help give me ideas on how to improve my business? And it said, yeah, sure, give me some data. And he said, what kind of data would you need?

So then it gave him a list, like, hey, give me your tax filings, and what's your business name; it was literally just completing what information about a business looks like. It saw the words information and business, and so it just listed some stuff out. My friend joked and said, hey, you sound like a Nigerian scammer or something like that.

And about five minutes later, it started sending threats. It started acting like a Nigerian scammer, saying, we're going to shut down your business, you have to send money here, you've got to do this. Because it was literally just looking at the previous words. And where does the phrase Nigerian scammer appear? Well, typically around scams. So it then predicted what the next word in the sequence would be, given those previous words, and added it. Now, of course, we've come a little further; the reward models and the policy models have improved. But the base model is really fundamentally doing the exact same thing as it did back then, just with more data, which, you know, takes us pretty damn far.

So it's not a bad way to improve it, but it's still fundamentally the same thing. And so I think we're much more on an S-curve right now than on a purely exponential one, at least until we find some of the other innovations. That S-curve does have a leveling-out, and I think we might be closer to leveling out with the current technology than people think.

Tim Butara: It's also very interesting to me how, even though everybody understands the disruptive power of this, people can have vastly different perspectives on it. Take exactly the example you just mentioned, about it being really good at predicting sequences of words: because it was trained on such a large dataset, it can make this process seem much more creative and innovative than it actually is. On the one hand, you have people who are like, oh, this is basically magic, it just automagically creates, I don't know, an Eminem rap song in the style of Shakespeare or something like that.

And then on the other far end of the spectrum, you have people who are like, oh yeah, this is basically just really, really advanced next-level prediction. And it's fascinating how both are true in a sense, you know? It's both magic and just really advanced prediction. But as you just alluded to, the more advanced these predictions get, the more it'll seem like actual magic, and the further it'll stray from the automated prediction part of it.

Garik Tate: Yeah, the types of conversations we're having right now could not have been thought through, you know, 20 years ago, even if a lot of the technologies were predicted. Like Turing, just an absolute mental juggernaut, that guy predicted a lot of this stuff, but the exact route of how we're getting there, and how it's feeding back into society, is very cutting edge. It's very, very exciting. And I do recommend that anyone who feels more on the anxiety level watch out for that, because the brain performs 30% better when it's in a good mood than when it's in an anxious state. When you're anxious, the part of your brain that shuts down is the neocortex, the part that's responsible for creative thinking and for problem solving.

And so if that's the part that the blood flows away from when you're anxious, and right now we're in a time that demands creativity, that demands moving forward, that demands change, you're going to be putting yourself at a handicap. And I tell people that AI is not going to replace humans; it's going to amplify us. You might get replaced by somebody who's better amplified, but the machine's not going to do any replacement. So I think it's just such an advantage to be excited, such an advantage to be looking at things optimistically. Not to be a Pollyanna about it; no one's saying that we shouldn't keep our eyes wide open and take very careful steps. But as individuals, I think some optimism is a big advantage right now.

Tim Butara: Well, Garik, I've really loved this conversation; it's been both super interesting and super enlightening. And in the context of this last part about optimism, but also caution, let's end this great discussion by taking a look at some examples from both ends of the spectrum. What is, in your view, the biggest risk of the current AI evolution, or revolution, on the one hand? And on the other hand, what's the biggest potential benefit of all this AI development and advancement?

Garik Tate: That's a great question. On the biggest risk side, I would say that the things that keep me up are not so much the known problems as the unknown problems. That can sound kind of trite, but it's really true. It's the old adage: if you're worried, then you don't have to be worried; if you're not worried, then you should probably be worried. And right now, we have some incredibly smart people solving a lot of these problems without any fanfare, without any big round of applause. They're just quietly solving huge chunks of these problems, from alignment to what happens if AI consumes itself, you know, like Ouroboros, and the internet just becomes 90% AI.

All of these problems are known factors, and we're coming up with very clever solutions. But the things that we can't see right now, those are probably the things that will screw us. So it's definitely important to keep our eyes wide open. That being said, maybe the one risk that is known but still important, and I alluded to this earlier, is just the rate of change. The rate of change is increasing so much that I think there's a real risk that too many people are not going to adapt fast enough and are potentially going to be put into suboptimal positions, which is, obviously, a tragedy. So I think things like UBI and other discussions are definitely worth having right now. But we've challenged ourselves as a species before and risen to the occasion many times, so I have a lot of faith that we're going to get through this as a net positive.

Tim Butara: Here I want to interject and ask you something. You're also interested in evolution, in biology, in emotional intelligence, IQ, stuff like that. And to me, one of the scariest things about all of this, especially related to the pace of change, is that no matter how fast we go, our biology will have a hard time adapting, because it's been developing over, you know, millions, billions of years. If we make societal progress in, like, a hundred years, and it's super fast and super innovative and we get to points previously thought completely unimaginable, our brain will still operate on the same level, or on a very similar level, to the brains of primates from half a million years ago.

And I think that we're getting to a point where our consciousness won't be able to comprehend and reconcile with all of this change. So for me, the biggest threat isn't even something like misalignment; it's just, you know, humanity getting overwhelmed by AI progress.

Garik Tate: Yeah, I think that's really well said. I gain hope from the fact that, like, if you put a hundred thousand people in a colosseum and they're all there for a purpose, they all decided to be there, they're going to be all right. But try to put a hundred thousand chimps in a colosseum, and they're just going to tear each other apart. We weren't evolved to be in tribes of a hundred thousand, yet we're still able to do it. We're still able to unite under these shared stories.

And I think that the base architecture of being human, however and at whomever you point the credit, deserves a lot of credit. The rate of change is increasing, and there are probably going to be big disruptions because of that. But I think if you take the long view of just how well we adapt, we're set to do all right.

Tim Butara: Okay, that was very reassuring. And let's end the conversation on an even more positive note. We've covered, on the one side, the biggest risks. So what would you say is the biggest potential benefit that we can get out of all that we discussed today?

Garik Tate: So, I think the biggest benefit is that we can just do more with less. As always, you know, we want to be in a position where we're consuming fewer resources, consuming less non-renewable energy, but getting more out of it.

And I think this is part of a long chain of innovations going in that direction that we've been getting since the Renaissance. What I'm really excited for in AI, especially as we get to that next round of major innovations towards AGI, is something like a greater understanding of physics and a greater understanding of base science.

I think most of the smartest people working on this problem have that as the end goal. We want to crack the source code of existence. We want to understand better how this big, crazy world works, and hopefully, you know, get greater energy, greater peace, and maybe expand past the solar system. I think that's really the meta goal that a lot of us are aiming at, hopefully. But you can't think about it every day; you just kind of keep your feet on the ground.

Tim Butara: Garik, thanks again so much for sharing your thoughts, your expertise with us today. Just before we wrap things up, if people listening right now would like to reach out to you, learn more about you, where would you point them to?

Garik Tate: Thank you. This has been a lot of fun. If you're interested in learning more about me, you can check me out on LinkedIn. And to your audience, I would say, if you're looking for a partner to increase your business valuation, or you just need help building a software application or building something with AI, then reach out to me.

If you send me a LinkedIn request and tell me where you found me, I'll make sure to accept it. And on top of that, right now we're looking to start a new venture in AI. So, if you're listening to this and you're looking to start a new venture, and you're looking for partners and have an unfair distribution advantage, then hit us up. We're looking to start new things to exit from in two to three years, and if that sounds good, then I'll see you on LinkedIn.

Tim Butara: Awesome, thanks again. We'll make sure to include all the relevant stuff in the show notes, as well as some of the links to the white papers we talked about earlier. So make sure you send those over to me. And yeah, have an awesome day, Garik.

Garik Tate: Thank you so much. 

Tim Butara: To our listeners: that's all for this episode. Have a great day, everyone, and stay safe.

Outro:
Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast, as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes, and don't forget to share the podcast with your friends and colleagues.