Arnon Grunberg on AI, creativity and morality
Peter Werkhoven, chief scientific officer at TNO, talks to Arnon Grunberg (who is based in New York). Grunberg made his breakthrough in 1994 with his novel Blue Mondays. He has since become one of the Netherlands' best-known authors.
A conversation with Arnon Grunberg
Peter Werkhoven and Arnon Grunberg talked about AI over dinner some years ago. Today, they finally get the chance to continue their conversation. What is Grunberg's view on creativity? Can it be taught to machines? And how do humans morally relate to machines?
How do you think moral behaviour on the part of machines and software should come about? How do we as humans teach or program that moral behaviour into intelligent machines?
'That’s a very difficult question, and I think it leads to the question of whether you can teach a machine to be creative. Consider the game of chess. For me, that’s a creative sport, because you make decisions that are never entirely rational, whereas a chess computer runs through all the possibilities and arrives at a highly rational decision. If you’re talking about morality, there’s always a human component too. That component is unpredictable, and it implies the free will that people have. When we say that a machine should make moral decisions, what we’re actually saying is that the machine should be free. For me, that’s territory we shouldn’t want to enter. Take the example of a self-driving car that has to decide between hitting an elderly man or a mother with a child. Are we going to program AI in such a way that it can make the morally right decision? For me, such a situation is pure chance, and we as humans shouldn’t intervene.'
So no arbitrary decision should be made either?
'For me, chance is fine. That's how accidents already happen. The driver thinks and makes a decision. In my opinion, you can’t say that AI should decide that the mother and child can live and the elderly man can be run over because he has only five years left to live. I realise that you can rationalise it, but I think such a decision has consequences for society and that, in doing so, you’re still undermining a foundation of the rule of law, namely, that everyone has the same rights and that those rights don’t suddenly diminish as soon as you get older, or even when a lot of people would say your life has been 'completed'.'
I understand that you always carry a little dice in your pocket to make arbitrary decisions about very simple matters. Is that true?
'Yes, it is, and it’s become almost a kind of talisman. But it’s more than that, because it also involves an awareness that you’re not fully in command of yourself, and that there are always things at play that you can’t explain in a completely rational way. Humans rarely act 100% rationally. So outsourcing certain decisions to a dice, which I sometimes do, offers a way out of an 'unbearable responsibility'. But it also involves a recognition that you’re at the mercy of chance and that as a human being you’re less omnipotent than you’d like to be. Hubris is always punished, after all. I think you shouldn't have too high expectations of people's moral abilities, and as long as we humans are the ones programming AI, that also always means there's a danger that AI can be misused by humans.’
You can also make the judgements in advance: as a society, think through all possible situations and teach those judgements to the AI, so that decisions no longer have to be made in a moment of panic. Don't you see that as moral progress?
'Suppose we all decide that it's permissible for a self-driving car to run over an elderly person or someone with a criminal record. In my view, that creates second-class citizens, who enter public space in a different way because they know they’ll be the ones who come off worst in the event of an accident. However, I can also envisage that a hundred years from now, it’ll be considered stupid that people used to make certain moral decisions themselves. But the question is also: if AI is programmed by humans, who’s in charge? Is AI really a self-learning entity that can dissociate itself from the way it’s been programmed by humans? Or will it always remain a tool in the hands of fallible humans?'
What will it mean for humanity if AI becomes more creative than we are ourselves?
'I once predicted that in sixty or seventy years' time, a machine would win the Nobel Prize for literature. I don't know whether that time frame is correct, but I can well imagine it happening. It does mean that we as humanity will have been toppled from our throne and that all sorts of ideas about creativity will have turned out to no longer be valid – which will overturn our view of ourselves and of the world and produce a completely different world. For the time being, I think machines can come close to humans, but because language is so illogical, irregular and ambiguous, it remains difficult for machines to understand, grasp, and reproduce it. If machines do attain that level, the main point is that we will have created something that will eventually force us to become immensely humble. If you think machines will eventually be able to perform specifically human acts such as composing a symphony or writing a novel, then you may also think that they’ll become our masters, in the same way as we are the master with pets and livestock.'
AI currently operates without any notion of causality. If AI could explain why it did something, that would be one way of also assigning responsibility to it. But take creativity: that seems to me very difficult to explain to AI. What do you think?
‘I entirely agree. It’s difficult for artists, too, to explain why they’ve made certain decisions, because those decisions are associative, intuitive, and irrational. I also don't think you need to explain creativity; the product speaks for itself. Where moral decisions are concerned, that’s an essential difference, of course. A penalty imposed by a court always has to be explained, and there’s always a human element in the severity of the penalty. So AI can only be a fair judge if it can say that it could have reached a different decision. Can we imagine a machine that is merciful? If we can, that presupposes a kind of freedom. If we've programmed a machine to always be merciful, then that mercy doesn’t amount to anything. It's about sometimes deviating from the rule.’
Merciful machines: can you imagine that?
‘I think we can imagine a merciful machine if the machine says: I’m now faced with such an inhuman choice that I can’t make it. I'd rather destroy myself than make that choice. If a machine can do that, then we can also have a machine that’s merciful. In my view, the capacity to suffer is necessary to arrive at a machine capable of replacing a judge, or people in general, where moral decisions are concerned.
'A society is based on trust, so the moment AI becomes an essential part of society, that presupposes a basic trust in AI. I think you have to assume trust; otherwise you mustn't allocate that power to those machines. I can well imagine that from time to time we’ll carry out a random check of AI and ask it: “Explain what you did. Is our trust justified?” The question remains, of course: do you want to live in a world without chance? I think that a world in which chance no longer exists and everything has become a decision isn't necessarily a better world. When difficult choices need to be made, a machine also needs the ability to sometimes let fate decide. But I think that at some point we’ll find the decisions that, say, a self-driving car makes acceptable, because they remind us of the randomness with which accidents used to happen. Certainly if the number of car accidents decreases, we’ll quickly reconcile ourselves to the odd ethical imperfection. But everything I've said about choice still applies: I don’t think you should want to rule out the element of chance.'
Is there any advice you'd like to give applied scientists regarding the advance of digitalisation and the role of AI?
'That would mainly be the awareness of one's own human shortcomings, and the idea that you shouldn’t want to let AI overcome them. AI can definitely solve certain problems, but I think we need to be very careful when it comes to moral decisions. We need to be certain that we don’t offload our responsibilities onto AI. That would basically be another step back towards magical thinking, in which humans outsource their decisions to the gods, with AI then becoming more or less the scapegoat. “It was AI that did it.” No, for the time being, we are the ones who did it.'
Download vision paper ‘Towards Digital Life: A vision of AI in 2032’
More about 'Towards Digital Life: A vision of AI in 2032'
- David Deutsch on the development and application of AI
- Georgette Fijneman on the promise of AI for health insurers
- Rob de Wijk on the rise of AI in geopolitical context
- Bram Schot on the impact of AI on mobility
- Eppo Bruins on AI in different government domains
- Bas Haring on AI, science and philosophy