Educating AI
Non-intelligent algorithms have already become indispensable in supporting people with digital communication. Think of the recommendations you receive on Netflix, facial recognition on your iPhone, or Google searches that return ever smarter results. Over the next ten years, these will be joined by intelligent algorithms that ‘coach’, ‘accompany’, or otherwise support humans in making complex decisions.
TNO predicts that intelligent algorithms will ‘mature’ quickly. They will, for example, be able to offer medical or legal advice based on large amounts of data, but also operate autonomously in situations where manpower, brainpower, or time is limited. Such AI solutions may be applied in camera surveillance and cybersecurity, and they also power self-driving cars and drones. In all of these applications, it is up to humans to ‘educate’ the AI, that is, to configure the human norms and values to which the AI must conform.
You can read about how AI is educated in Chapter 1 of our vision paper ‘Towards Digital Life: A vision of AI in 2032’ (pdf). How can we make clear to AI which goals we want to pursue as humans? And how can we ensure that intelligent systems always function in service of society?
‘Our children and our grandchildren must be able to see that we did our utmost to have this discussion and that we dealt with these issues in a sensible and ethical manner.’
Download vision paper ‘Towards Digital Life: A vision of AI in 2032’
More about 'Towards Digital Life: A vision of AI in 2032'
- David Deutsch on the development and application of AI
- Georgette Fijneman on the promise of AI for health insurers
- Rob de Wijk on the rise of AI in geopolitical context
- Bram Schot on the impact of AI on mobility
- Eppo Bruins on AI in different government domains
- Bas Haring on AI, science and philosophy
- Arnon Grunberg on AI, creativity and morality