We Make AI Trustworthy - For All

leiwand.ai helps you make your AI system trustworthy by adhering to high standards of quality, transparency, and fairness.

With our support, you can ensure that your AI system meets the expectations of all stakeholders.


Trustworthy AI Systems: A Must-Have

Artificial intelligence (AI) systems have become part of everyday life: from decision-enhancing algorithms to chatbots and self-driving cars, the range of AI applications is already broad and continues to expand rapidly. While AI has tremendous potential to improve many aspects of life, it can invade privacy, harm our environment, and propagate inequality and discrimination if applied blindly and without control.

“Our vision is that AI systems should serve all humankind & the earth – and not vice versa.”

- Dr. Gertraud Leimüller and Rania Wazir, PhD

The goal of leiwand.ai is to help organisations and businesses develop and deploy trustworthy AI – AI systems that deliver what they promise, fairly. We believe that digital technologies must be shaped in novel ways in order to improve their quality and impact, and earn the trust of citizens and customers.

We Help Make Quality and Fair AI

Trustworthy AI is not a simple label that can be attached to a product after the fact: it is a design choice that must be pursued throughout the AI system's life cycle.

  • We believe that AI systems that interact with people must be shaped by diverse streams of knowledge from different disciplines.

    At leiwand.ai, we are a team of mathematicians, data scientists, NLP experts, social scientists, innovators, philologists and project managers who work together to guide the entire AI development process towards positive impact, fairness and sustainability.

    If you want to use or develop artificial intelligence, we can provide our AI expertise to prepare your systems to conform to quality standards, such as those required by regulations like the EU AI Act.

    Our Services

  • From AI system inception to retirement, we bring societal, human and planetary needs into the equation. leiwand.ai devises strategies to maximize positive impacts and minimize risks throughout the AI system’s life cycle.

    We offer AI development support, strategies and guidance to assess the conformity and impact of your AI system. We can increase your AI system’s functionality for diverse user groups.

    Our aim is not only to help our customers understand, develop and deploy fair and transparent AI systems: through our continuous research and testing, we are also creating technology that can test your AI system's quality.

    In other words: we use AI to test AI.

    Learn more

  •

    The Algorithmic Risk Radar

    We are currently creating our very own in-house technology for pre-assessing bias risks in artificial intelligence systems. This AI-based tool will be the first of its kind.

  • When AI Discriminates

    We investigated how bias becomes ingrained in algorithmic decision systems (ADS), shedding light on the challenges, implications, and potential solutions to bias in ADS.

  • Fair by Design

    How can discrimination against user groups by AI be avoided during the development or application of AI in the market? The Fair by Design project closed the gap between theoretical ethical guidelines and practical application.

  • Roadmap to Trustworthy AI

    Together with our partner, Plattform Industrie 4.0, we are developing a roadmap for making trustworthy AI simple – so that all organisations, big and small, can develop AI systems that we can trust.

  • Algorithmic Contracts

    The European Law Institute initiated a project on algorithmic decision-making systems in the various stages of the contract life cycle, in order to assess and further develop the level of protection that existing EU law affords consumers and other interested parties.

  • Trustworthy AI in Practice

    Trustworthy AI in Practice is an Austrian initiative launched by leiwand.ai to raise awareness and increase engagement with the complex topic of trustworthy artificial intelligence (AI) in Austria.

  • Digital Humanism in Complexity Science

    leiwand.ai and the Complexity Science Hub Vienna have partnered in a consortium for developing a roadmap for digital humanism to build a sustainable community of researchers and business stakeholders in complexity science, digital humanism, and computational social science.

  • Online Hate Barometer

    Amnesty International Italy’s “Barometro dell’Odio” project has been active since 2018. Through this project, the human rights impacts of two social media platforms – Twitter and Facebook – have been investigated.

We Make AI leiwand

  •
    Dr. Gertraud Leimüller

    Co-Founder & MD

  •
    Rania Wazir, PhD

    Co-Founder & CTO

  •
    Janine Vallaster, MSc

    Social Scientist

  • Lene Kunze, MSc

    Social Scientist

  •
    Mira Reisinger, MA

    Data Scientist

  •
    Patrick Kosmider, MA

    Communications Manager

  • Sarah Cepeda, PhD

    Data Scientist

  •
    Silvia Wasserbacher-Schwarzer, MA

    Chief Strategist

  •
    Mag. Thomas Treml

    Data Scientist

Our Partners