Trustworthy AI in 2030 - a Foresight Exercise

In 2030, when AI-based applications will have permeated every aspect of our lives, it may be too late to raise the question of the trustworthiness of AI systems. That is why we decided to take stock of the current situation and consider what trustworthy AI is and what significance it may have in 2030.

We conducted a broad-ranging analysis of the factors influencing current AI development and of the global trends that can reasonably be foreseen to shape the societal and environmental context of 2030. At the end of March, we invited a hand-picked, interdisciplinary group of AI users from business and science to a foresight workshop. Together with us, these well-known experts formulated some leading questions for the year 2030:

  • What role will Europe play in the global AI market in 2030? Will China and the US continue to be the seemingly unrivalled drivers of AI technology?

  • What state of the art can we expect in machine learning in 2030? How will it address unavoidable questions such as its high energy demand?

  • To what extent will people have adapted to AI systems – in their habits, their behavior, and their language – both in their private and professional lives?

  • What role will the trustworthiness of the technology play, as defined by the European Commission's High Level Expert Group on AI?

The workshop produced 13 hypotheses for the future, based on our analysis of the framework conditions that can be expected in 2030. The top hypotheses include:

  • The market for Trustworthy AI will grow until 2030 because awareness of the power and risk of misuse of the technology will increase in the coming years.

  • The EU AI Act will not catapult Europe into irrelevance, but will create a new starting point for trustworthy AI "Made in Europe".

  • Not all applications will bear the Trustworthy label, because the cost-benefit case does not hold in every industry and field of application.

  • The trustworthiness of the AI solutions used will be an indispensable feature, especially in the fields of media, health, and human resources, but also in public administration.

The workshop took place as part of the Roadmaps for Digital Humanism project, funded by the Vienna Business Agency and the WWTF, on the premises of our project partner Plattform Industrie 4.0. We would like to thank the participants who made an exciting exchange possible (in alphabetical order):

  • Wolfgang Groß, Gradient Zero 

  • Manfred Gruber, Bundeskanzleramt  

  • Wolfgang Kabelka, Bundesrechenzentrum  

  • Kostadinka Lapkova, Raiffeisen Bank Intl. 

  • Christopher Lindinger, Johannes Kepler Universität Linz  

  • Iveta Lohovska, Hewlett Packard Enterprise

  • Norbert Math, alien productions 

  • Lena Müller-Kress, winnovation consulting  

  • Verena Ossmann, ecoplus. Niederösterreichische Wirtschaftsagentur GmbH  

  • Carina Pölzl, Speedinvest Heroes  

  • Viktoria Robertson, Wirtschaftsuniversität Wien 

  • Roland Sommer, Plattform Industrie 4.0  

  • Luzia Strohmayer-Nacif, Austria Presse Agentur 

  • Manfred Tscheligi, USECON

Here you can also find the press release about the workshop.

Blog post image (c): Fabio Lucas, unsplash.com
