Google DeepMind’s Demis Hassabis on his Nobel Prize: ‘It feels like a watershed moment for AI’ 

In the 15 years since it was founded, Google DeepMind has grown into one of the world’s foremost artificial intelligence research and development labs. In October, its chief executive and co-founder Sir Demis Hassabis was one of three joint recipients of this year’s Nobel Prize in chemistry for cracking a 50-year-old problem: predicting the structure of every known protein using AI software known as AlphaFold.

DeepMind, which was acquired by Google in 2014, was founded with the mission of “solving” intelligence — designing artificial intelligence systems that could mimic and even supersede human cognitive capabilities. In recent years, the technology has become increasingly powerful and ubiquitous and is now embedded in industries ranging from healthcare and education to financial and government services.

Last year, the London-based lab merged with Google Brain, the tech giant’s own AI lab headquartered in California, to take on stiff competition from its peers in the tech industry in the race to create powerful AI.

DeepMind’s new positioning at the centre of Google’s AI development was spurred by OpenAI’s ChatGPT, the Microsoft-backed group’s chatbot that provides plausible and nuanced text responses to questions. Despite its commercial underpinnings, Google DeepMind has remained focused on complex and fundamental problems in science and engineering, making it one of the most consequential projects in AI globally. 

In the first interview of our new AI Exchange series, Hassabis — a child chess prodigy, designer of cult video game Theme Park, and a trained neuroscientist — spoke to the FT’s Madhumita Murgia just 24 hours after being announced as a Nobel Prize winner. He talked extensively about the big puzzles he wants to crack next, the role of AI in scientific progress, his views on the path to artificial general intelligence — and what will happen when we get there.


Madhumita Murgia: Having reflected on your Nobel Prize for a day, how are you feeling?

Demis Hassabis: To be honest with you, yesterday was just a blur and my mind was completely frazzled, which hardly ever happens. It was a strange experience, almost like an out-of-body experience. And it still feels pretty surreal today. When I woke up this morning, I was like, is this real or not? It still feels like a dream, to be honest.

MM: Protein folding is essentially solved, because of your work on the AlphaFold models — AI systems that can predict the structure of all known proteins. What is your next grand challenge for AI to crack?

DH: There are several. Firstly, on the biology track — you can see where we are going with that with AlphaFold 3 — the idea is to understand [biological] interactions, and eventually to model a whole pathway. And, then, I want to maybe build a virtual cell at some point. 

With Isomorphic [DeepMind’s drug development spin-off] we are trying to expand into drug discovery — designing chemical compounds, working out where they bind, predicting properties of those compounds, absorption, toxicity and so on. We have great partners [in] Eli Lilly and Novartis . . . working on projects with them, which are going really well. I want to solve some diseases, Madhu. I want us to help cure some diseases.

MM: Do you have any specific diseases you’re interested in tackling?

DH: We do. We are working on six actual drug programmes. I can’t say which areas but they are the big areas of health. I hope we will have something in the clinic in the next couple of years — so, very fast. And, then, obviously, we will have to go through the whole clinical process, but at least the drug discovery part we will have shrunk massively.

MM: What about outside of biology? Are there areas you’re excited about working on?

DH: I’m very excited about our material design work: we published a paper in Nature last year on a tool called GNoME [an AI tool that discovered 2.2mn new crystals]. That’s the AlphaFold 1 level of material design. We need to get to the AlphaFold 2 level, which we are working on.

We are going to solve some important conjectures in maths with the help of AI. We got the [International Mathematical] Olympiad silver medal over the summer. It’s a really hard competition. In the next couple of years, we will solve one of the major conjectures.

And, then, on energy/climate, you saw our GraphCast weather modelling won a MacRobert award, a big honour on the engineering side. We’re investigating if we can use some of these techniques to help with climate modelling, to do that more accurately, which will be important to help tackle climate change, as well as optimising power grids and so on.

MM: It seems like your focus is more on the application side — on work that translates into real-world impact, rather than purely fundamental research.

DH: That’s probably true to say. There aren’t many challenges like protein folding. I used to call it the equivalent of Fermat’s last theorem for biology. There aren’t many things that are as important and long-standing a challenge.

Obviously, I’m very focused on advancing artificial general intelligence [AGI] with agent-based systems. Probably, we are going to want to talk about Project Astra and the future of digital assistants, universal digital assistants, which I’m personally working on, as well, and which I consider to be on the path to AGI. 

MM: What does the AI double Nobel Prize in chemistry and physics [this year’s prize for physics went to Geoffrey Hinton and John Hopfield for their work on neural networks, the foundational technology for modern AI systems] say about the technology’s role and impact in science?

DH: It’s interesting. Obviously, nobody knows what the committee was thinking. But it’s hard to escape the idea that maybe it’s a statement the committee is making. It feels like a watershed moment for AI, a recognition that it is now mature enough to actually help with scientific discovery.

AlphaFold is the best example of that. And Geoff and Hopfield’s prizes were for more fundamental, underlying algorithmic work . . . it’s interesting they decided to put those together, almost as a pair of related awards.

For me, I hope we look back in 10 years and AlphaFold will have heralded a new golden era of scientific discovery in all these different domains. I hope that we will be adding to that body of work. I think we’re quite unique as one of the big labs in the world that actually doesn’t just talk about using it for science, but is doing it.

There are so many cool things going on in academia as well. I was talking to someone in astrophysics, actually a Nobel Prize winner, who is using it to scan the skies for atmospheric signals and so on. It’s perfect. It’s being used at Cern. So maybe the committee wanted to recognise that moment. I think it’s pretty cool they’ve done that.

MM: Where is your AlphaFold work going to take us next in terms of new discoveries? Have there been any interesting breakthroughs in other labs you’ve seen that you’re excited about?

DH: I was really impressed with the special issue of Science on the nuclear pore complex, one of the biggest protein complexes in the body, which opens and closes like a gateway to let nutrients in and out of the cell nucleus. Four studies found this structure at the same time. Three out of the four papers found AlphaFold predictions [were] a key part of them being able to solve the overall structure. That was fundamental biology understanding. That was the thing that stuck out to me.

Enzyme design is really interesting. People like [US biochemist and Nobel laureate] Frances Arnold have looked at combining AI with directed [protein] evolution. There are lots of interesting combinations of things. Lots of top labs have been using it for plants, to see if they can make them more resistant to climate change. Wheat has tens of thousands of proteins. No one had investigated that because it would be experimentally too expensive to do. It’s helped in all kinds of areas; it’s been wonderful to see.

MM: I have a conceptual question about scientific endeavour. We originally thought predicting something was the be-all-and-end-all, and spent all this time and effort predicting, say, the structure of a protein. But now we can do that really quickly with machine learning, without understanding the ‘why’. Does that mean we should be pushing ourselves to look for more, as scientists? Does that change how we learn about scientific concepts?

DH: That’s an interesting question. Prediction is partly understanding, in some sense. If you can predict, that can lead to understanding. Now, with these new [AI] systems, they are new artefacts in the world; they don’t fit into the normal classification of objects. They have some intrinsic capability themselves, which makes them a unique class of new tool.

My view on that is, if the output is important enough, for example, a protein structure, then that, in itself, is valuable. If a biologist is working on leishmaniasis, it doesn’t matter where they got protein structures from as long as they are correct for them to do their science work on top. Or, if you cure cancer, you’re not going to say: don’t give me that because we don’t understand it. It would be an amazing thing, without understanding it fully. 

Science has a lot of abstraction. The whole of chemistry is like that, right? It’s built on physics, and then biology emerges out of it. But it can be understood in its own abstract layer, without necessarily understanding all the physics below it. You can talk about atoms and chemicals and compounds, without fully understanding everything about quantum mechanics — which we don’t fully understand yet. It’s an abstraction layer. It already exists in science.

And in biology, we can study life and still not know how life evolved or emerged. We can’t even define it properly. But these are massive fields: biology, chemistry, and physics. So it’s not unusual in a sense — AI is like an abstraction layer. The people building the programs and networks understand this at some physics level but, then, this emergent property comes out of it, in this case, predictions. But you can analyse the predictions on their own at a scientific level.

Having said all of that, I think understanding is very important. Especially as we get closer to AGI. I think it will get a lot better than it is today. AI is an engineering science. That means you have to build the artefact first and then you can study it. It’s different to a natural science, where the phenomenon is already there.

And just because it’s an artificial, engineered artefact doesn’t mean it will be any less complex than the natural phenomena we want to study. So you should expect it to be just as hard to understand and unpack and deconstruct an engineered artefact like a neural network. That’s happening now and we are making some good progress. There is a whole field called mechanistic interpretability, which is all about using neuroscience tools and ideas to analyse these virtual brains. I love this area and have encouraged this at DeepMind.

MM: I looked up a project you mentioned previously about a fruit fly connectome [brain map], made using neural networks. AI helped understand that natural phenomenon. 

DH: Exactly. That’s a perfect example of how these things can be combined, and then we slowly understand more and more about the systems. So, yes, it’s a great question, and I’m very optimistic we will make a lot of progress in the next few years on the understanding of AI systems. And, then, of course, maybe they can also explain themselves. Imagine combining an AlphaFold with a language capability system, and maybe it can explain a little bit about what it’s doing. 

MM: The competitive dynamics in the technology industry have intensified a lot in AI. How do you see that impacting and shaping progress in this field? Are you worried there will be fewer ideas and a focus on transformer-based large language models (LLMs)? 

DH: I think that actually a lot of the leading labs are getting narrower with what they are exploring — scaling transformers. Clearly, they are amazing and going to be a key component of ultimate AGI systems. But we have always been big believers in exploration and innovative research. We have kept our capabilities of doing that — we have by far the broadest and deepest research bench in terms of inventing the next transformer, if that’s what is required. That’s part of our scientific heritage, not just at DeepMind but also Google Brain. We are doubling down on that, as well as obviously matching everyone on engineering and scaling.

One has to do that — partly to see how far that could go, so you know what you need to explore. I’ve always believed in pushing existing ideas to the maximum as well as exploring new ideas. You don’t know what breakthrough you need until you know the absolute limits of the current ideas.

You saw that with long context windows [a measure of how much text can be processed by an LLM at once]. It was a cool new innovation and no one else has been able to replicate that. That’s just one thing — you’ll see a lot more breakthroughs coming into our mainstream work. 

MM: You and others have said AGI is anywhere between five and 20 years away: what does the scientific approach to achieving that goal look like? What happens when we get there?

DH: The scientific approach would mean focusing a lot more time and energy and thought on understanding and analysis tools, benchmarking, and evaluations. There needs to be 10 times more of that, not just from companies but also AI safety institutes. I think from academia and civil society, [too]. 

I think we need to understand what the systems are doing, the limits of those systems, and how to, then, control and guardrail those systems. Understanding is a big part of the scientific method. I think that is missing from pure engineering. Engineering is just seeing — does it work? And, if it doesn’t, you try again. It’s all trial and error.

Science is about what you can understand before all that happens. And, ideally, that understanding means you make fewer mistakes. The reason that’s important for AGI and AI is that it’s such a powerful technology that you want to make as few mis-steps as you can.

Of course, you want to be able to get it perfect, but it’s too new and fast moving. But we can definitely do a better job than, perhaps, we’ve done with past technologies. I think we need to do that with AI. That’s what I’d advocate.

When we get nearer to AGI, maybe a few years out, then a societal question comes, which also could be informed by the scientific method. What values do we want these systems to have? What goals do we want to set them?

So they’re sort of separate things. There’s the technical question of how do you keep the thing on track to the goal that you set? But that doesn’t help you decide what that goal should be, right? But you need both those things to be correct for a safe AGI system.

The second one, I think, may be harder, like, what goals, what values and so on — because that’s more of a UN or geopolitical question. I think we need a broad discussion on that, with governments, with civil society, and academia, all parts of society — and social science and philosophy, even, as well. 

And I try and engage with all those types of people, but I’m a bit unusual in that sense. I am trying to encourage more people to do that or at least act as a role model, act as a conduit to bring those voices around the table. 

I think we should start now because, even if AGI is 10 years away, and some people think it could be a lot sooner, that’s not a lot of time.

This transcript has been edited for brevity and clarity. Parts of this interview were adapted for an article in early October about AI’s role in recent Nobel Prize awards.
