Episode: 89

Peter Voss - Artificial General Intelligence

Posted on: 27 Apr 2023

Peter Voss is the CEO and chief scientist of the advanced chatbot platform Aigo, as well as one of the people who came up with the term artificial general intelligence (AGI).

The focus of this episode is on artificial general intelligence. We discuss a recent research paper on sparks of artificial general intelligence in GPT-4 and the recent petition by experts to halt AI development for six months. We also explore what the future holds with regards to AI and its connection to humanity, asking the question: are we ready for AGI?

 


Transcript

“I think a lot of us have been really surprised at how much you can achieve with generative models, with large language models. And I think it’s shortened a lot of people’s estimates of how long it will take for us to really have artificial general intelligence at human level.”

Intro:
Welcome to the Agile Digital Transformation Podcast, where we explore different aspects of digital transformation and digital experience with your host, Tim Butara, content and community manager at Agiledrop.

Tim Butara: Hello everyone, thanks for tuning in. Our guest today is Peter Voss, CEO and chief scientist of the advanced chatbot platform Aigo. Peter is one of the people who coined the term AGI, artificial general intelligence, over 20 years ago. And AGI is also exactly what we’ll be talking about today, supported by recent research from Microsoft experts published in late March about GPT-4 actually showing signs of artificial general intelligence. Peter, welcome to the show, I’m very excited to have you here, to discuss this with you. But first, do you want to add anything to the intro?

Peter Voss: No, I think that’s fine, I think we can jump into the topics. Thanks for having me.

Tim Butara: Okay, awesome. And obviously the first thing that I wanted to ask you about is what I just mentioned in the intro, that you were on the team that coined the term “artificial general intelligence”. Can you tell us more about this and what AGI even refers to?

Peter Voss: Yes, certainly. So, I fell in love with programming a long time ago, and started my own software company, and that was very successful, we had an IPO. When I exited the company, it occurred to me, what do I want to do now? What big problem do I want to tackle that’s going to be interesting and challenging?

And it really occurred to me that software is not intelligent. It’s quite dumb by itself. If the programmer didn’t think of something, then you’ll just get an error message or it’ll crash or do something not intelligent. And so I really wanted to figure out how we can build intelligent machines, how we can build machines that can learn, think and reason the way humans do.

And I actually took five years off to study all the different aspects of intelligence, starting from epistemology, the theory of knowledge: how do we know anything? What is reality, what is our relationship to reality, how can we know it and how certain can we be? How do animals learn, how do children learn? What do IQ tests actually measure? All the different aspects of intelligence, and of course I studied what had already been done in the field of AI.

And what I saw, what became very obvious, is that the field of AI had really drifted very significantly from its original intent. When the original term AI was coined some 60 odd years ago, it was really about building machines that can think and learn and reason the way humans do.

Now, they thought they could do this in a few years, but of course it turned out to be a really hard problem. So what happened over the decades is that AI actually morphed into narrow AI. When people talk about AI over the last few decades, it’s really been narrow AI, solving one particular problem at a time. And it’s really human ingenuity, human intelligence, that is turned into code to solve the problem. So the intelligence doesn’t really reside in the machine as much as it does in the programmer or the data scientist.

A perfect example of that is Deep Blue from IBM, the world chess champion. It was the ingenuity of the people designing the program, together, obviously, with the brute-force hardware advances at that time, that allowed the system to become world chess champion. But it couldn’t even play checkers, so it didn’t really learn to play chess.

Even if you go to something more recent, like AlphaGo, it’s again the ingenuity of engineers to come up with a particular neural network that could train itself through zillions of games of Go to become good at playing Go, using certain software techniques. But it’s really, again, the ingenuity of the engineers and data scientists to do that.

So, anyway, it became clear that we really are in the era of narrow AI. And in 2002, I got together with a few other people who felt, like I did, that the time was ripe for us to go back to the original dream, the original vision of AI, to build thinking machines. And so three of us actually coined the term – Shane Legg, Ben Goertzel and myself. We decided to write a book on the topic, and we talked about how to describe it, how to differentiate ourselves from the mainstream field of AI.

So, general intelligence seemed really appropriate to me, because little “g” is the symbol for general intelligence in IQ tests and in psychology. So I think it was a good term to come up with. And as it turned out, it really caught on and now you see it everywhere. So, artificial general intelligence, in short, is having machines that can think and learn and reason the way humans do.

Now, it’s not limited to that, but the important thing is that, by interacting with the world or people – the world in general, whether it’s through robotics or just text or speech or whatever medium they interact through – these machines can actually learn new tasks and reason about them, and basically do planning, have imagination to try different things, to come up with novel solutions to novel problems the way humans can.

Tim Butara: A lot of this is exactly what has been observed by the researchers in the paper that I mentioned in the intro, right. Basically, we’ve seen in the past few months this surge of generative AI, as pretty much everybody listening will have encountered it by now, and probably the large majority of people listening will also have already used it by now. And GPT-4 is the most recent publicly available version as of recording this and it’s disrupting pretty much everything.

As I said, there’s the paper from Microsoft researchers about sparks of artificial general intelligence in GPT-4. So, what do you think about this? We’ve talked about it, you’ve read the paper even before we decided to discuss this. What are your thoughts here?

Peter Voss: Right. I mean, there are definitely sparks of general intelligence in GPT-4. These generative models with massive datasets, massive training sets are really quite phenomenal. And I think very few people expected them to be as powerful as they are, to be able to do such a wide range of different things, and to actually do something that looks very much like reasoning. And so, it definitely is a major advance.

Now, the question that AI researchers are asking themselves is, is this the real thing? Is this actually AGI or is it proto AGI? Do we just need more training data to get there? And I think most people agree that it’s not enough, as impressive as these statistical systems are. Daniel Kahneman has this, you know, system one and system two, which is roughly subconscious thinking versus conscious thought, just very roughly as a division.

And ChatGPT, or generative models, large language models, really are system one. They don’t have metacognition, they don’t think about thinking, they don’t have access to the actual thought process. And this is why they’re also so unreliable – they can confabulate and make up stuff.

Which is kind of a strength, but because it’s purely a statistical system, it doesn’t have grounding. It’s just whatever these very abstract, multi-level patterns from the training set come up with. And as everybody knows, it can come up with scientific reports and give you citations that are completely fabricated. And it wouldn’t really know, because it doesn’t have access to ground truth.

So, I think there’s actually a fairly long laundry list of shortcomings that I think is a strong indication that the current architecture is not enough. And I can certainly talk more about what architecture I think is the right one. But I think there’s a general consensus, or at least a lot of AI researchers believe, that something fundamentally different is needed in addition to what we have with large language models.

Tim Butara: I love the term that you used, proto AGI. And I think that this is kind of an unavoidable step, right. It would be hard to just immediately get to a level of artificial general intelligence where we could really say, oh yeah, this is it, this is more than just proto AGI, this is AGI version 1.x for example.

And, to me personally, it’s surprising that in such a short time after we saw ChatGPT go public, we’re already discussing this. And even though hopefully you’ve calmed the nerves of somebody listening right now who is maybe worried about the imminent threat of AGI or something like that, it’s still fascinating that we’re able to discuss it right now as a viable possibility, not just as something really far far ahead in the future.

Peter Voss: Yes, certainly. We certainly can also talk about the risks in that. But it’s totally implausible, in fact it’s impossible, for an AGI to overnight, or very very quickly, become so smart that it can outfox basically all defenses and everybody. That’s just Hollywood movies, it’s not realistic. Anybody who’s worked in software for any length of time knows that there are a million ways of things going wrong versus going right.

If you try to do something complex, there are just so many ways in which it can go wrong, go off the rails and fail – and not fail in some catastrophic way for humanity, it just won’t work the way you expect it to work. So, yes, we are going to see an evolution of that. But I do believe it will take a different architectural approach to actually get to AGI.

Tim Butara: So, based on all this, what do you think about the recent call to halt the development of GPT for six months? 

Peter Voss: Yeah, it’s a little bit biased, and it’s hard to know what the motivations are of different people, and I’m sure there are different motivations for different people. I think there are some people, like Eliezer Yudkowsky, who really bought into the idea that this is just going to kill us all and we’ve got to stop it, or at least we’ve got to pause it. So I’m sure there are some people who’ve just bought into that for whatever reason.

I’m sure a very big motivation is competitors, competitive advantage. Anybody who isn’t OpenAI or Microsoft is obviously behind the curve now – DeepMind and Google and whoever else wants to play in this field is obviously behind. So, by having a moratorium, they could catch up, or they believe that they could catch up.

Then, I think there’s sort of the moral high ground; a lot of people might just sign on to it because they think it’s the sensible thing to do or the moral thing to do. We obviously have Luddites, who are inherently anti-technology, who would sign on. So, I think there’s probably a combination – it’s very interesting.

Gary Marcus has been very vocal on this; initially he seemed to sort of say, well, this is really dangerous, we need to stop it because it’s so dangerous. But then he kind of refined his comments to say, well, the real danger is misinformation. Okay, we’ve had misinformation en masse for many years. I mean, government misinformation for one. Governments across the world have been very good at propaganda in pretty much any country – some countries are worse than others. And we don’t have good defenses against government propaganda.

And then of course you have large enterprises who spend billions in advertising, and obviously a lot of that is misinformation. I mean, look at pharmaceuticals, for example. It’s really terrible what pharmaceutical companies advertise in America – stuff that really doesn’t work and that is harmful. But they’re pushing pills on people.

So, I don’t really see that that’s so different. We’ve had Photoshop for how long? People can adjust to that misinformation. And I think, if anything, it would encourage tools to be developed that help you tell what’s true – that hopefully more organizations or more apps or whatever would come along to fact-check things.

And you know, whatever one thinks of Twitter – and obviously, my Twitter feed only has certain types of things that are fed to me, so it’s difficult for me to know what other people see – I think the community notes are really good. If we had more of that – not government fact checkers who have their own agenda, but community ones. Now, obviously you need to get the right community so that the fact checking itself doesn’t get distorted again.

There’s also the argument that we’re giving China, and whatever other players we may not be as comfortable with, extra time to catch up as well. And finally, a moratorium like that really doesn’t work for software. I mean, how are you going to check that? Maybe the bigger teams can be monitored in some way.

But it’s so easy these days to download one of these models, to build your own model. I mean, we’re already seeing papers coming out where people are reproducing GPT-3.5-level performance with a hundred times less processing power. And that’s going to continue. So, really, across all the reasons, it doesn’t make sense.

Tim Butara: That was such a fantastic answer. And it really reflects a lot of my own opinions about this as well. And I think your answer was too extensive to unpack everything here. I think we would need like three additional specific episodes to unpack everything.

But one of the things that stuck out to me here is, if it’s so dangerous, if misinformation can be such an important risk of this, what good will pausing it for only six months do? Right? That’s the first thing that I thought about. If it’s so dangerous, if you believe that it’s so dangerous, then why are you advocating for a six-month ban? That’s kind of paradoxical to me.

Peter Voss: Yes, in fact, I think there’s a good point here. And that is, one of the terms that people throw around is alignment. And again, it’s a big topic, we haven’t solved the alignment problem. To me, I think it’s a non-problem, but let me not go down that path right now. Even assuming there is an alignment problem – six months isn’t going to solve the alignment problem.

People have had hundreds of millions of dollars – the AI safety community has had an enormous influx of money over the last few years – but they’ve been going for 20 years and they’ve made zero progress really on the alignment problem and AI safety. I have an opinion on why that is as well, but… And Eliezer Yudkowsky has admitted that they have made basically no progress over 20 years on solving the alignment problem. So, spending another six months isn’t likely to produce any better results.

Tim Butara: Yeah, that’s another very important point here. It makes very little sense, because this thing is out of the box now. And it’s going to be doing what it’s going to be doing, and what people are going to be doing with it.

Peter Voss: Yeah, and of course the other thing that is often forgotten – every decision like that has a cost-benefit. By pausing it, you are losing the benefit of AI. I mean, clearly, the reason people are developing AI or AGI and these models is to bring benefit to humanity. You know, these are tools that can help us be more productive and ultimately they can help us solve the big problems that we are facing, whether it’s disease, or pollution, or governments, or energy, or whatever it might be – these AIs will help us. So, by pausing it, you are losing out on the benefit.

And another point is, whatever rules are put into place – as we well know, putting rules into place, new laws or whatever controls you have, always has unintended side effects that are often very negative.

Tim Butara: So, do you think that humanity is ready for artificial general intelligence?

Peter Voss: Well, ready or not, here we come. I think we need to adjust. Now, I also have the perspective that there are so many problems facing humanity in terms of how we manage civilization, how we manage modern life, basically. And I think there’s a good argument to be made that we need AGI to save us from ourselves.

That without more intelligence, I mean, let’s not forget, it’s artificial general intelligence, it’s intelligence. And if you believe that being smarter about things, bringing more intelligence to the world is more likely to solve problems, which I do believe, then we want– we may well need AGI to help us manage the complexities of civilization.

Tim Butara: I think there’s kind of a double-edged sword aspect here as well, right. One is what you just mentioned and highlighted, and the other, I would say, is that because society is in such a state where it’s kind of deteriorating, the risks of something like AGI getting misused can become even greater, in my opinion. So, it’s kind of like, we would need it to help us solve these problems, but it would be much more effective if we solved these problems and then used AGI.

Peter Voss: Well, no, I think we need AGI; and my view is, it’s actually going to be a lot harder to abuse AGI. And the reason I say that is, at the moment, all of our experience is with narrow AI. And it’s easy to imagine that you would have a very powerful AI that could, say, break into the defenses of your enemy using AI, and do something – shut down the industry or something like that.

But AGI is general intelligence, which should help us think things through. And the analogy I give is, if humans make bad decisions– now, bad decisions, it’s another way of saying immoral. You know, things that you would rather not have done, that are not good. Why do humans make bad decisions? And there are obviously a host of reasons. But generally there are three categories, and I can point to 9/11 as an example because it obviously had a lot of consequences.

So, of the three reasons, the first is that people tend to react emotionally – it’s kind of just the way we are designed as humans. So the first reaction of Americans was, we’ve got to hit back at somebody. You know, this is terrible, what they did to us, whoever they are, and we’ve got to hit back at someone. And that’s kind of the emotional response.

The second thing is lack of good information. You know, the weapons of mass destruction that were supposedly there – that was bad information. And the third thing is that humans aren’t really very good at logical reasoning. It’s an evolutionary afterthought; our brains developed from instinctive and automatic reactions, and then evolution added this logical thinking part to the brain. But it’s not that great, humans aren’t really that good at logical thinking.

So, having AI assistants – if everybody had an AI assistant, it could mitigate these things that humans do that we tend to regret. You know, just acting on our emotions without thinking, having better information to make the decision, and being better at thinking through: is it really going to serve your purpose to invade Iraq or Afghanistan or whatever? Is that ultimately the objective you want to achieve? If logical thinking and better information were in play, probably better decisions would have been made. So, I use it as an illustration where I really believe that better intelligence, better thinking makes for better decisions and more moral decisions.

Tim Butara: But the key thing is probably intelligence, right? As we talked about in the beginning, you have a lot of AI technologies that aren’t actually intelligent. I’m guessing that actual artificial intelligence would be needed to have this benefit that you’re describing.

Peter Voss: Yes. A lot of the risks or the downsides of AI right now come from the lack of intelligence, from the systems not being smart enough. I mean, look at self-driving cars. I actually have a Tesla, I love it, but the full self-driving still lacks intelligence. It comes across things that weren’t in the training set, and it really doesn’t know what to do because it’s not smart enough. So, if it was smarter, it would be better at doing the job.

Tim Butara: So, it’s basically, the more intelligent a system is, the less likely it is to be negatively influenced by the decisions of the people who created it.

Peter Voss: Exactly.

Tim Butara: Well, Peter, this has been such an awesome discussion. And I have a final, kind of big-picture question for you. We kind of already covered it, but to cover it here succinctly – what do you think the future holds for artificial intelligence and its connection with humanity? What are the key implications, both short term and long term?

Peter Voss: So, I think a lot of us have been really surprised at how much you can achieve with generative models, with large language models. And I think it’s shortened a lot of people’s estimates of how long it will take for us to really have artificial general intelligence at human level.

I think there’s really a very good chance that it will happen in less than ten years. In fact, I would probably put a shorter timeframe on that, once people get on to the right approach to overcome the core limitations, the inherent limitations that you have in large language models.

Now, in terms of what it means for humanity, there’s going to be a lot of change. One of the biggest industries is going to be helping people cope with the change – to decide, I really don’t want to be at the cutting edge of this, so at what level am I comfortable, in terms of moving from what I’m doing now to the brave new world. But it’ll be optional.

People talk about, almost everybody says, I’d love to win the lottery, and I wouldn’t have to work anymore. Well, AGI, ultimately, is going to be like everybody in the world winning the lottery. You know, they won’t have to work. Now, a lot of lottery winners end up being pretty unhappy. 

So, that is something to deal with: if work is really a big part of who you are, your personality, your self-esteem and so on, and suddenly you don’t have to work anymore. I see that as the big industry in the longer term, as this abundance, radical abundance, becomes available through AGI.

Tim Butara: Man, these were some awesome insights, Peter. This is exactly a reflection of my thoughts about this double-edged sword of– I love the phrase that you used right there at the end, radical abundance. Man, I’m probably going to start using this more often in personal conversations as well.

Peter Voss: I forget who the author is right now, but I believe there’s actually a book Radical Abundance, which is very good.

Tim Butara: I need to check that out. Peter, as I said, thank you so much for joining us today, this was fantastic. Before we wrap it up, if listeners would like to reach out to you, connect with you, learn more about you or learn more about Aigo, where can they do all that?

Peter Voss: Yes, so, my company is Aigo.ai, and we basically do a chatbot with a brain – you know, a chatbot that actually remembers what you said earlier in the conversation and can think and reason. We currently target enterprise applications, but ultimately we want to make our chatbot or personal assistant available to everyone – seven billion people in the world. That’s the idea, so we have this little angel on our shoulder that can be our assistant and help us navigate and make good decisions. I also have a lot of articles on Medium.com – search for Peter Voss on Medium.com and you can find them – about free will and rationality and ethics and of course a lot about AI.

Tim Butara: We’ll make sure to include all of these in the show notes, as well as a link to the research paper on GPT-4 and AGI that we talked about, and an excellent YouTube video which provides a great overview of the main points, for those who probably don’t have the time to mull over the, what is it, I think it’s 150 pages or something like that. Peter, I guess we’ll have to meet again quite soon to discuss how things are evolving in this sphere. Thanks again, this has been fantastic.

Peter Voss: Great, I enjoyed it, thank you.

Tim Butara: And, well, to our listeners, that’s all for this episode. Have a great day, everyone, and stay safe.

Outro:
Thanks for tuning in. If you'd like to check out our other episodes, you can find all of them at agiledrop.com/podcast as well as on all the most popular podcasting platforms. Make sure to subscribe so you don't miss any new episodes and don't forget to share the podcast with your friends and colleagues.