AI Tech & Solutions
We use AI to test AI
Our aim is not only to help our customers understand, develop and deploy fair and transparent AI systems: through our continuous research and testing, we are also creating technology that can test the quality of your AI system against EU-wide law.
Would you like to work with us on this journey? We are always looking for new use case partners!
ABRRA
The Algorithmic Bias Risk Radar
We are currently creating our first in-house technology for pre-assessing bias risks in artificial intelligence systems.
This AI-based tool will be the first of its kind.
The Risk Radar will run on a carefully curated expert database filled with thousands of AI incidents.
With this technology, we will be able to identify potential adverse effects of AI systems early in their development, procurement and certification processes.
The technology will facilitate targeted Risk Assessments and Fundamental Rights Impact Assessments, as required by the new EU AI Act for high-risk applications in fields such as human resources, health, finance, education and public administration.
Estimated Time of Arrival: 2026
Project Partner: TU Wien, Prof. Sabine Köszegi
Funding Body: The Austrian Research Promotion Agency (FFG)
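As an illustration only, the sketch below shows in Python how a pre-assessment built on an incident database might look in principle: past incidents from the same application domain are retrieved and summarised into a rough profile of reported harm types. The Incident fields, the pre_assess function and the toy records are assumptions made for this example, not the actual ABRRA design or data.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Incident:
    """One record from a (hypothetical) curated AI-incident database."""
    title: str
    domain: str      # e.g. "human resources", "health", "finance"
    harm_type: str   # e.g. "gender bias", "racial bias", "proxy discrimination"


# Toy stand-in for the expert database described above; invented entries.
INCIDENTS = [
    Incident("CV screening tool downgraded female applicants", "human resources", "gender bias"),
    Incident("Triage model underestimated risk for some patient groups", "health", "racial bias"),
    Incident("Credit scoring model penalised certain postcodes", "finance", "proxy discrimination"),
]


def pre_assess(system_domain: str) -> Counter:
    """Return a rough frequency profile of harm types reported for past
    incidents in the same application domain as the system under review."""
    relevant = [i for i in INCIDENTS if i.domain == system_domain]
    return Counter(i.harm_type for i in relevant)


if __name__ == "__main__":
    # Pre-assessment for a hiring-support system.
    print(pre_assess("human resources"))  # Counter({'gender bias': 1})
```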
We Develop Quality Strategies for AI Systems
At leiwand.ai, we follow a hands-on approach to making AI more trustworthy. Rather than just telling our customers about AI, we help create AI that is transparent, fair and follows quality standards aligned with international regulations. We do so by assessing the impact of your AI system.
To achieve this, we take a novel, more holistic AI development path: from AI system inception to retirement, we bring societal, human and planetary needs into the equation. leiwand.ai devises strategies to maximize positive impacts and minimize risks throughout the AI system’s life cycle. We increase your AI system’s quality by ensuring it functions well in the intended context and respects the diversity of its user groups.
Where We Guide AI Development
What types of bias can arise with the use of algorithmic decision support systems, and when could this amount to discrimination as defined in Article 21 of the Charter of Fundamental Rights of the European Union?
As part of a high-ranking consortium, we investigated two different use cases for the European Union Agency for Fundamental Rights (FRA): predictive policing systems and automated offensive speech detection.
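To give a hedged illustration of the kind of bias such a study looks for (this is not the FRA's actual methodology or data): for offensive speech detection, one common check is whether benign posts written by one speaker group are wrongly flagged more often than those written by another, i.e. a gap in false positive rates between groups.

```python
import pandas as pd

# Toy evaluation data: a classifier's "offensive" flags on posts that are
# all actually benign, written by speakers from two groups. Invented values.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":     [0, 0, 0, 0, 0, 0, 0, 0],   # ground truth: all benign
    "predicted": [0, 1, 0, 0, 1, 1, 0, 1],   # model's offensive-speech flags
})

# False positive rate per group: how often benign posts are wrongly flagged.
fpr = (
    df[df["label"] == 0]
    .groupby("group")["predicted"]
    .mean()
)
print(fpr)                  # group A: 0.25, group B: 0.75
print(fpr["B"] - fpr["A"])  # a large gap suggests one group's language is over-flagged
```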
This project aims to answer the question: How do we make AI fair?
While AI has tremendous potential to improve many aspects of life, it can be privacy-invasive and detrimental to our environment, and it can propagate inequalities and discrimination if applied blindly and without control.
With fAIr by Design, we initiated a consortium of eight partners that aims to:
• develop a new interdisciplinary process model and toolkit for designing fair AI systems
• develop effective solution strategies for fair AI systems in five use cases within the fields of healthcare, media, and HR
For Intact, a provider of software solutions for audits, certification, accreditation and standards, we tested the company’s algorithm for bias.
Intact uses algorithms to detect anomalies in audits. In our collaboration, we focused on food safety audits, implementing the open-source version of their algorithm so that we could test it for bias. To do so, we used data representative of their use case.
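To give a feel for what such a bias test can involve, here is a minimal sketch with invented data and an assumed grouping attribute (not Intact's audit data or our full methodology): one basic check is whether the algorithm flags some groups' audits at a much higher rate than others.

```python
import pandas as pd

# Illustrative stand-in for flags produced by an anomaly-detection algorithm
# on food safety audits; the grouping attribute and values are made up.
audits = pd.DataFrame({
    "producer_size": ["small", "small", "small", "large", "large", "large"],
    "flagged":       [1, 1, 0, 0, 0, 1],
})

# Flag rate per group and the ratio between the lowest and highest rate
# (a simple disparate-impact style check).
rates = audits.groupby("producer_size")["flagged"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")  # values well below 1.0 warrant a closer look
```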
For the medical tech start-up rotable, we reviewed the quality of the AI the company uses to match trainee doctors with available training positions at hospitals.
Provided with access to the code and data of their technology, we inspected the code for possible sources of bias and examined the choices made in building the algorithm.
In the case of rotable, the algorithm is based on a mathematically defined formula, i.e. it is a closed-form algorithm rather than a learned model.
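For illustration, the sketch below shows how a generic closed-form matching rule can be audited for bias: here an optimal assignment over a fixed cost matrix using scipy, followed by a comparison of how well the match serves different applicant groups. The numbers, the group labels and the assignment method are assumptions for this example; this is not rotable's actual formula.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# A closed-form matching: a fixed cost matrix (e.g. derived from ranked
# preferences) and a deterministic optimal assignment, no learned parameters.
cost = np.array([
    [1, 2, 3],   # applicant 0's cost for positions 0..2 (lower is better)
    [2, 1, 3],   # applicant 1
    [3, 2, 1],   # applicant 2
])
groups = ["A", "B", "B"]  # protected attribute per applicant (invented)

rows, cols = linear_sum_assignment(cost)  # deterministic optimal assignment

# Audit step: compare how well each group's preferences are satisfied.
satisfaction: dict[str, list[int]] = {g: [] for g in set(groups)}
for applicant, position in zip(rows, cols):
    satisfaction[groups[applicant]].append(int(cost[applicant, position]))

for g, costs in satisfaction.items():
    # Systematically higher average costs for one group would be a red flag.
    print(g, np.mean(costs))
```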