David Deutsch on the development and application of AI

Theme:
Artificial intelligence
27 September 2022

David Deutsch is a pioneer in the field of quantum computing. In 1997, he published The Fabric of Reality, a book setting out his views on quantum mechanics. He is currently a professor at the University of Oxford. In a video call, Peter Werkhoven, chief scientific officer at TNO, spoke with him about the development and application of artificial intelligence (AI).

A conversation with David Deutsch

What do you think the quantum computer will mean for AI?

There is currently a great deal of talk about ever-increasing digitalisation, and especially about AI and the role of that technology within society. In that connection, the subject of quantum computers and quantum algorithms regularly comes up. What do you think the quantum computer will mean for AI?

'There are certain functions that quantum algorithms can perform amazingly more efficiently than any classical algorithm. But at the moment, I think they will only be deployed for special purposes, for example developing new drugs or games. That might be important, but I don't expect quantum algorithms to play a central role in AI or in artificial general intelligence, AGI.'

‘AI must be obedient: it must do what it’s programmed to do. Whereas humans are fundamentally disobedient.’

Why not?

'To explain why not, it’s first of all important to know that AI and AGI don’t only differ from one another but are almost the opposite of one another. The AIs that we are familiar with are AIs that diagnose diseases, play chess, or run huge factories, for example. Those are systems designed to maximise an objective function. For that, the AI has to be obedient. It must do the things that it’s programmed to do. AIs can do that better than humans, because humans are fundamentally disobedient. And that’s particularly true of people who are creative. When a human plays chess, he or she calculates moves completely differently to a chess computer. The computer is able to consider billions of possibilities, whereas the human chess player is limited to just a few hundred. Another difference is that the human is able to explain something. He or she can write a book afterwards, for example about how he or she became world champion. But a computer program that beats the world champion can’t write such a book because it doesn’t know what it’s done. All it did was follow a set of programmed rules.'

'The AGI program must be able to provide unexpected answers.'

What about an AGI? How does it differ from an AI?

'An AGI is expected to behave in a way that can’t be specified beforehand, because if you could specify it, you’d already have the answer. The AGI program must therefore be able to provide unexpected answers, answers to questions we didn't even know to ask.'

Are human brains fast enough?

You mentioned a number of differences between humans and machines. I’d like to take a closer look at that. A study has shown that at the cognitive level, people process information at a rate of about 50 bits per second. That’s not much. Is that a limitation of human thinking? Is it why we play chess totally differently to computers? And why we solve problems totally differently?

'Processes in the human brain run somewhat in parallel, so I don't think the rate is as slow as 50 bits per second. But yes, it’s far less than a billion bits per second. We humans don’t work by looking at all the options and ticking them off. We work from understanding. A chess player looks at the board, tries to understand the situation, and then starts speculating. I’m a follower of the philosopher Karl Popper. He taught us that science – and thinking in general – is a matter of speculation and criticism. And at the moment there’s no computer program that can do that.'

Do you think we can get explainable AI at a level where humans can understand that explanation?

That ties in with the current discussion about 'explainable AI'. Many people think that if we develop AI for serious high-risk applications, then that AI should be able to explain itself. Do you think we can get explainable AI at a level where humans can understand that explanation?

'Yes, but I don't think that’s a step towards AGI. Quite the contrary! In fact, it’s another step in the other direction. Let’s suppose a computer program could say that it had diagnosed cancer, indicating which X-ray it used to do so and what it detected in that image. That can certainly be done in a way that we as humans can understand. But so far, an AI can’t indicate that it thinks there’s a new disease and also explain the development of that disease as a human doctor would. AI can't create something new. It may perhaps arrive at new implications, but it does so on the basis of existing knowledge that’s been put into it.'

When you talk about creating new knowledge: we have examples in which AI has discovered new antibiotics.

'I think that’s done by matching chemical properties with the properties of cells or bacteria. That's not really explanatory knowledge.'

And there are more examples of how AI is able to reduce complex patterns to regular patterns. Can that be called explanatory knowledge?

‘No. Those are situations in which the knowledge has been programmed beforehand, although perhaps not deliberately. It’s in fact very easy for a programmer to enter knowledge, for example about which data are relevant and which aren’t. In that way you’re already telling the program where to look for an answer. But that’s not what physicists do when they discover a new law. They discover the concepts behind the unknown.'

A lot of people say that AI must exhibit moral behaviour. Do you think AI can learn that by interacting with humans?

'The same applies here too: the real content of the moral behaviour that AI exhibits will have been introduced by the programmer. It's a bit like training a dog. A dog can’t create explanatory knowledge either. It’s trained to follow certain moral patterns, and it's actually surprising that dogs can be trained that way. Machines can undoubtedly do this even better than dogs. But that’s not moral behaviour; it’s just being obedient to rules. It can’t make moral judgements itself but only carry out the moral judgements of others, like a slave.’

Do you want these systems to be slaves?

Let’s take the example of self-driving cars, or autonomous responses to cyber attacks, which also rely on intelligent software. Do you therefore want these systems to be slaves? Or do you want them to disobey and do what they wish in those roles?

'In the case of self-driving cars, we could look at guide dogs for the blind. They also have to assess traffic and people. They need to assess certain threats and ignore other signals. And a guide dog does all of that better than a human. In fact, a human is much less dedicated to such a task, so sooner or later he or she will make a mistake. It’s amazing what guide dogs can do. Where self-driving cars are concerned, it’s a trickier problem, but it’s largely the same problem.’

What is humanity's biggest challenge in harnessing the potential of AI and not becoming a victim of it in the years ahead?

'It’s important to realise that AIs aren’t perfect. And also to realise that they aren't completely controlled by rules, simply because we don't know what rules to give them. The first version of an AI won't be as good as the tenth.'

‘The possibilities are endless. Every year I’m amazed by the power of AIs.’

What advice do you have for people who want to create AI applications in a responsible manner?

‘The possibilities are endless. Every year I’m amazed by the power of AIs, and I don't think the risks are apocalyptic. The risks are comparable to the risks of any new technology. The first time a steam locomotive was demonstrated to the public, a Member of Parliament who was present was killed. People weren’t used to a machine that could approach them so quickly, and back then, fast meant 24 kilometres an hour. So yes, we do need to be wary with AI. We need to be aware that some applications look as though they don't need supervision, when in fact it’s very important, especially at first, to supervise them properly and to intervene immediately when errors occur. Once it’s mature, such an AI makes fewer errors than humans and we can start trusting it.'

Download vision paper

Download vision paper ‘Towards Digital Life: A vision of AI in 2032’
