How can AI become a trustworthy co-worker?

How trustworthy is “colleague” AI in work environments? How can decision support by artificial intelligence (AI) systems be designed in such a way that humans actually retain control and autonomy? These questions were discussed by an interdisciplinary group at the workshop “Trustworthy colleague AI”, initiated by leiwand.ai.


The workshop “Trustworthy colleague AI” took place virtually on May 5, 2022, as part of the initiative “Trustworthy AI in practice”, launched by the founders of leiwand.ai, Rania Wazir and Gertraud Leimüller. The workshop was organized together with the Vienna Chamber of Commerce, the Vienna Business Agency and the Plattform Industrie 4.0 Österreich, as well as the research project fAIr by design (funded by the FFG).

The 30 participants were carefully selected from a wide variety of stakeholder groups: AI developers, users of AI systems, scientists from different disciplines, stakeholders from companies and public administration, and representatives of workers’ and human rights organisations. In a co-creation setting, this interdisciplinary group was invited to discuss the specific challenges and opportunities, as well as the practical requirements for AI as a trustworthy colleague.

Two speakers introduced the topic. In his keynote “The Human in the Loop: To Be or Not to Be?”, Manfred Tscheligi, Head of the Center for Technology Experience at the Austrian Institute of Technology (AIT), Professor of Human-Computer Interaction at the University of Salzburg and Head of the Department of Artificial Intelligence, discussed how AI systems and interfaces in the work environment can be designed so that humans remain in the driver’s seat, and explained the requirements for such systems in terms of trust and acceptance.

Building on this, Alexander Zeiss, Product Manager and Head of Shared Service Center Artificial Intelligence at ITSV, gave practical insights into the use of AI for automated control of submitted invoices and described the challenges and possible solutions in building trust and acceptance of AI by employees.

In a subsequent plenary session with both speakers, participants discussed intensively what needs to be considered to strike the right balance between users perceiving AI systems as trustworthy colleagues and avoiding over-trust.

Group picture of the online workshop

The discussion was further deepened in small interactive groups that analyzed what needs to be considered in AI-based decision support so that humans remain in control. Intense discussions were held in all groups, and a variety of challenges as well as opportunities were identified. Topics included ethical issues, the precise definition of goals and the contextualization of applications, and opportunities such as increased job security and the monitoring and visualization of discrimination in human decision-making. Download a summary of our findings here: Findings PDF

Major conclusions:

The exchange showed that the discussion on the potential of AI as a trustworthy colleague is still at a relatively early stage in Austria. User requirements in the various application contexts of AI must be researched and taken into account to a much greater extent. It is necessary to raise the awareness of all actors, in order to consider how to structure the cooperation between humans and machines already during the development phase of AI systems. Legal, ethical and sociological aspects must also be taken into account from the start. Only through further multi-stakeholder engagement and discourse can the potential of artificial intelligence as a trustworthy colleague unfold, the numerous opportunities be exploited, and the emerging challenges be overcome.

 

Do you want to know more about trustworthy AI?


In cooperation with
