Demis, Google launched its most powerful model, Gemini 3, just a few months ago. It was received with a lot of excitement. Where do you think Google is right now in the AI race?
Well, we’re very happy, as you say, with the last model we released, Gemini 3. It’s topping pretty much all the leaderboards. So it’s a great model. Feedback’s been great from our users and enterprise customers.
But I think, overall, we had a really good year last year when we look back on it. I think the trajectory of progress we’ve been making is the fastest of anyone in the industry. If you look at Gemini 2.5, the previous version that we released in April/May last year, that was already becoming very competitive, I think, at the frontier.
And then I think we cemented that with Gemini 3. But of course, it’s a ferocious, intense competition, as you know. And everyone’s pushing as hard as they can. And we’ve got to make sure we deliver this year too.
Is that why Sam Altman declared a code red?
Well, apparently that’s what was being reported. And…
How do you feel about it?
It’s fine. We just focus on ourselves. And I think what we’ve got to do is block out the noise, just execute, focus on the quality of our research, and then make sure we’re shipping that quality fast enough into our products and services. And I think that’s what you’ve seen: our share of the chatbot space with the Gemini app has gone up, to 650mn monthly users now.
And then things like AI Overviews, two billion users. It’s the most used AI product in the world. So we’re really proud and pleased with how that’s going. But I think we’re just scratching the surface of what we can really do when we’re fully in our groove.
And we’re going to get to that. But when you look at the industry, what do you think rivals are doing best? What do you think is really interesting right now?
Well, I think what Anthropic’s doing with code is very interesting, with Claude Code. There’s a lot of excitement around that in the developer market. We’re pleased with the performance of Gemini 3, but they’ve done something special there, I think. Other than that, I’m very excited about the stuff that we’re doing on multimodal. I feel like…
Do you want to explain?
Yes, multimodal, being… Gemini, from the beginning, has been multimodal. And by that, I mean being able to deal with not just language and text, but image, video and audio as native inputs and outputs. And we’re bringing that all together.
That’s always been our strength. And the reason we want to do that, and I think that’s what I’m excited about this year, is that’s what you would need for a kind of assistant that travels around with you in the real world, maybe on your glasses or your phone.
It needs to understand the world, the context around you, the physical world. And of course, for robotics, that’s critical too. I’ve been spending quite a lot of time on that over the last year. And I think that’s going to have some big moments in the next couple of years.
Can you talk a little bit about these big moments? Is it a question of trying to create devices that would… the new iPhone, or the glasses?
There are actually so many simultaneous things one has to do, which is why it’s very exciting, but also quite daunting, at the moment. At least from our perspective as Google DeepMind, we like to think of ourselves, we describe ourselves internally, as the engine room of Google. And we’re providing the engine, which is these models, like Gemini, and Veo, and Nano Banana, all these state-of-the-art models.
And then we’ve got to figure out how we want to incorporate them into features in products that are really useful to the end user. So there’s that whole aspect of the work, which is enhancing what already exists, from email, to your Chrome browser, to search.
But then there are also all these very exciting new greenfield areas of digital assistants like the Gemini app. But what does that become over time, including new devices? And we’re working on, and we’ve announced recently, partnerships with Warby Parker and Gentle Monster on new types of smart glasses.
Obviously, Google has a long history with smart glasses. But I think maybe we were a bit too ahead of our time when we first started this 10-plus years ago at Google with the devices. What was missing then, I think, was a killer app. And a universal digital assistant that helps you in your everyday life could well be that killer app for things like smart glasses that are connected to your phone.
This is an area a lot of your competitors are also working on. Why do you think you will be able to compete very effectively?
Well, I think it starts with the quality of your research and models. So I think we have, by far, the deepest and broadest research bench. I think we have the most talent in the industry. And I think that then will translate to the quality of our breakthroughs and research innovations. And then that underpins what you can do with these new products.
Speaking of talent, there’s a real talent war in the industry. Some researchers are getting offers for $100mn. How are you holding on to your researchers? Are you having to pay more than that?
Look, I mean, of course, another part of the ferociousness of the competition is the talent wars. But I think that most top researchers are, of course, fabulously well paid. Beyond that, though, it’s the mission.
What are you trying to do with your skills? These are phenomenally smart people. They could do anything with their skills. Are you doing good in the world? Are you building products, or, in our case as well, applying AI to science, to scientific ends that you’d actually be proud of, that your friends and family would be proud of, and that overall benefit society?
And I think we’re very lucky at Google that we have those products and services that people love and use every day, from Maps to email, that we are enhancing with our AI work. So it’s very motivating: when you make a research breakthrough, you can ship it, and then immediately a billion users can take advantage of it.
So my expectation is that this year we’re going to hear a lot more about a techlash, because there are growing concerns in society, but there are also safety and misuse issues. And we’ve seen several examples of that. How concerned are you? And how do you protect against it?
Look, I think society is right to be worried about these things. And I think, of course, as you know, I spent my whole career working on AI because I really believe in all the benefits that are going to come from science and medicine advances, things like AlphaFold that we’ve done.
But we also need to worry about these harmful use cases. We’ve tried to get ahead of that with things like SynthID, our watermarking technology for things like deepfakes, and with getting the right guardrails around the usage of Gemini.
And we take that responsibility very seriously for all the users that we have. So there’s what we can control. And then, beyond that, we try to be role models for what responsible deployment of these technologies looks like.
And then, as far as society and the average person go, we need to show, as an industry and as a scientific field, what the unequivocal benefits are, more clearly and more quickly. And I think for us, that means doubling down on our AI-for-science and AI-for-medicine work, things that are unequivocal goods in the world.
I’m going to get to that with Isomorphic. But just staying with the misuse and safety issue: whatever happens in this industry, if there is a growing techlash, it will affect all the companies. So is that something… I mean, are you all not getting together to discuss this? Is there any effort under way to address it as an industry?
There are some industry groups. And of course, most of the lab heads know each other quite well. But I think you’re seeing different frontier labs do different things. And I think we’ll have to see how that works out.
What we can control is what we do at Google DeepMind. And we try to broadcast that at places like this and show a way forward that, I think, gets most of the benefits but mitigates the risks. And we hope others will follow that path. But it would need something governmental, I think, to get the whole of the industry to do that. And then there’s also the international co-operation question too.
The other big risk this year is the bubble bursting. Are we in an AI bubble?
Well, yeah, look, for me, it’s not a binary yes-or-no question. The AI industry is very big now, as you know. And it’s sort of multifactorial. So I think my guess is… I mean, from our point of view, we’re seeing more usage than ever, incredible demand for our models and AI features. We can barely satisfy it. There aren’t enough chips to go around.
And so I think from that perspective, and also overall, there’s not going to be… it’s going to be the most transformative technology probably ever invented. So I think from that perspective, there can’t really be a bubble.
But on the other hand, I think there are parts of the industry that do look bubble-like. For example, multi-billion dollar seed rounds in new start-ups that don’t have a product, or technology, or anything yet do seem a little bit unsustainable. So there may be some corrections in some parts of the market. And then we’ll have to see.
I don’t worry too much about that from our day to day. I focus on our technology and delivering that. And my job as head of Google DeepMind is to make sure we’re well positioned no matter what happens. If the bubble bursts, we’ll be fine; we’ve got an amazing business that we can add AI features to and get more productivity out of. And if the bull case continues, then we’ve also got these amazing AI-first, AI-native products like the Gemini app.
You’ve also spoken about the AI race and the competition with China. From what I can see, in China there is no AI race. It’s very different from what you hear. There’s no sort of race to reach AGI [artificial general intelligence]. There is a lot more focus on applications and finding efficiencies. Is that, perhaps, the more realistic approach?
Look, it’s perhaps the more… I don’t know about realistic, but the less risky approach, perhaps. And, by the way, I think the Chinese market, from what I understand, is just as intensely competitive as the western companies are with each other.
It’s just that, I think you’re right, they’re more focused on the near-term applications, what you can concretely do right now, rather than maybe these more research-heavy frontier capabilities that would get you to AGI. I think that’s fine.
I started DeepMind. And our job now, at Google DeepMind and Alphabet as a whole, is to build AGI. We think that’s the ultimate goal. And that will unlock so many opportunities and possibilities in the world that we’ve talked about many times.
So I think that’s really the North Star. And on the way, we’ll create lots of useful technologies. But I think you’ve got to have that as a North Star if you want to progress the research in as innovative a way as possible. And I think that’s why, in my opinion, the western companies are still in the lead on that.
How many months are you ahead? Is it a matter of months?
I think probably it’s only a matter of months now, would be my guess. Although interestingly, some of the Chinese leaders and entrepreneurs I talked to, they feel like they’re further behind than that.
I’m not sure that’s the case. Maybe it’s only a matter of six months or so now. But I think it’s important to say that even with things like DeepSeek, where I think there was a bit of an overreaction in the West, it was a bit overblown, the Chinese labs haven’t proven they can innovate beyond the frontier yet.
They’re getting faster and faster at catching up to the frontier, to what the frontier labs are doing. But they haven’t innovated beyond that, the next transformer or something like that. They haven’t proven they have that capability yet.
Do you think they are as focused on it, though?
They’re probably not. And that might be one reason why.
There was, in the last few months, a debate about AGI. And you disagreed with Yann LeCun, who said that there is no such thing as general intelligence. You’re a real expert on the brain. So explain to me why you disagreed with him.
Yes, yeah, we have many fun debates, Yann and I, at conferences and things. But this was an online one. I just think his argument on that is kind of ridiculous. I think he’s confusing two things: general intelligence, which I think we clearly have as humans, our brain has that, and universal intelligence, something that can understand anything that could possibly be.
And the thing is, it’s obvious our brains are very general, because look at the modern civilisation we’ve built. We’re basically tool-making creatures; that’s what separates us from other animals, we build tools, all the modern things around us – vehicles, 747s, but also computers. And I’d include AI in that as well, as the ultimate expression of the computational tool.
So if you include all of that, and the science that we do, it’s unbelievably general. It’s not everything that could possibly happen to your retina, and all these arguments he makes, but it’s clearly general. And then the other argument I make is more from Alan Turing, who’s one of my all-time scientific heroes. He proved that Turing machines can compute anything that is computable.
So that’s a super general class of machine, and all modern computers are based on it. But also, I think most neuroscientists would agree that our brains are approximately a Turing machine, or approximately Turing powerful, which means that we can, in theory, understand almost anything that’s computable.
And so the idea is that our brains are a general system that can, in theory, learn almost anything. Not that we already know it all; it’s a question of having the learning capability versus the actual full knowledge. Obviously, our brains are limited; a single human can’t know everything. But in totality, our brains are very, very powerful.
And very flexible.
And extremely flexible.
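To make the computability point concrete, here is a minimal Python sketch of a Turing machine simulator. It is an illustrative toy, not anything from the interview or from Google DeepMind: the simulator, the transition-table format, and the unary “add one” program are all invented for this example. The point it illustrates is the one Hassabis cites from Turing: one simple, fixed mechanism can run any program you encode in its table.

```python
# A minimal Turing machine simulator. The same fixed simulator runs any
# machine described as a transition table, which is the generality that
# Turing's universality result rests on.

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy program: add one to a unary number by appending a '1' at the end.
increment = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 on the first blank, stop
}

print(run_turing_machine(increment, "111"))  # prints "1111"
```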
So what is it going to take to get to AGI, recursive self-improvement, where essentially AI models can teach themselves? We’re not there yet. How far are we from it?
Yeah so…
And is that the main breakthrough?
Well, that’s one. I mean, I think there are quite a few capabilities missing from today’s systems that will be needed for something that could probably pass as AGI. And continual learning is one of those things: online learning after it’s been trained. Can it learn new things from the user or from experience? That’s what’s sometimes called continual learning or online learning.
And for that, you need the personalisation, right?
Yes, that would be part of it. If you want it to personalise, then that would be a form of online learning. But self-learning and self-improvement can also be part of that. That’s a closed-loop version: experiencing something in the world and then updating your knowledge base directly and automatically.
And we actually pioneered a lot of that work 10-plus years ago now with AlphaGo and AlphaZero, our game-playing programmes. But of course, games are much simpler. The real world is much messier, much more complex. So the question is, can you translate some of those techniques to the messy real world?
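For illustration, here is a minimal sketch of that closed loop in Python. Everything in it is an invented stand-in, not Google DeepMind code: a toy estimator that updates its belief after every single observation, rather than being trained once on a fixed dataset, and so keeps tracking a world that changes underneath it. Scaling this kind of online update to large models is the open problem he is describing.

```python
import random

# Toy online/continual learning: observe the world and fold each new
# observation straight back into the model, with no offline retraining.
# A deliberately simple stand-in, not how Gemini or AlphaZero work.

class OnlineEstimator:
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0   # the learner's current belief
        self.lr = learning_rate

    def update(self, observed):
        # Nudge the belief toward each new observation as it arrives.
        self.estimate += self.lr * (observed - self.estimate)

estimator = OnlineEstimator()
true_value = 5.0
for step in range(400):
    if step == 200:
        true_value = -2.0  # the world changes mid-stream
    estimator.update(true_value + random.gauss(0, 0.5))

print(round(estimator.estimate, 2))  # near -2.0: it adapted online
```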
Let’s get to Isomorphic. Originally, I think, the company said that you’d be going to clinical trials in Q4 of 2025, but then it was preclinical. So what happened? And when will the drugs go to…
So nothing happened; I misspoke. I think it was in one interview I gave a couple of years ago. The plan was that it was preclinical trials we were entering last year, and I misspoke then. And basically, we’re in preclinical trials with a few of our drug programmes. It’s going very well. We’re advancing very well. And as soon as that’s ready, we’ll move into clinical.
So when will we have the first AI-designed drug?
Well, I hope in the next few years, but it depends on how the preclinical trials go and the clinical trials.
Has it been harder than you had expected?
Not at all. We’re actually doing phenomenally well. And we just announced a new partnership with J&J yesterday. So we now work with J&J, Eli Lilly, and Novartis, three of the best pharma companies in the world. And we also have our own internal programmes.
So we have about 17 programmes in total. And we’re going to talk a lot more about that. You’ll see a lot more news from us this year, first half of this year on our progress, which is going very well.
You also announced that you were building a materials science lab in the UK. Can you give me some more details about that?
A little bit more. I mean, we’re still at quite an early stage with that. But I think materials science, and AI designing new materials, semiconductors, superconductors, batteries, these kinds of things, is going to be a huge part of the benefits AI will bring to the world.
And I think we’re at maybe an AlphaFold 1 level: some promising research prototypes. But we need to go further. And part of that is that we need to be able to quickly test the materials our AIs are designing. So we’re thinking about creating a kind of automated lab in the UK to test these theoretical compounds that the AI systems are coming up with.
Every time I see you, I ask you what the timeline for AGI is. And I’ve noticed that lately all of you are not talking so much about timelines. In fact, Sam Altman even says there is no… we are almost at that stage. So I am going to ask you, what is your timeline?
Well, mine’s been very consistent. I think we’re about five to 10 years away, so maybe now it’s more like four to eight years. I think 2030 is probably the earliest it could be. Maybe a 50 per cent chance over that kind of time horizon.
So I’m still sticking with my timeline. I think others who’ve had more aggressive timelines maybe are updating to be a little bit longer and a little bit more realistic. But for me, things always take a little bit longer than one assumes, even at the pace that we’re all going at. I mean, that’s still phenomenally… that’s still extremely soon. I just think it’s not going to be like next year.
And your job has evolved a lot, from DeepMind to actually handling all of Google’s AI. I’m just wondering where you see your future. Do you want to be CEO?
Look, I love the… I’m very happy with what I’m doing. I love being close to the science and the research. So I still try and carve out time to do that, even though I’m running a lot of things, including some products now.
But look, I’m very general in my interests, and I can get very excited about anything that’s cutting edge, especially if it has a leaderboard attached to it. But I think there’s only so much one can do in the day and still leave enough time for serious thinking, which I do at night time. And I quite like that routine. So I hope to be able to stick with that.
You didn’t say no.
No. OK, there we go.
Thank you. Thank you, Demis.
Thank you.