Hero image of the leiwand.ai AI Technology & Solutions page: a laptop, half open in a dark room, its screen exuding blue and orange light.

AI Tech & Solutions

Our in-house Technology

We use AI to test AI

Our aim is not only to help our customers understand, develop, and deploy fair and transparent AI systems: through continuous research and testing, we are also creating technology that can test your AI system against EU-wide law.

The market for trustworthy AI is growing steadily, and we aim to stay ahead of the curve: we are already developing tests that our customers can use to check the quality of their AI systems.

We develop strategies for AI systems

At leiwand.ai, we take a hands-on approach to making AI more trustworthy. Rather than just advising our customers on AI, we help create AI that is transparent, fair, and meets quality standards aligned with international regulations. We do so by assessing the impact of your AI system.

To achieve this, we take a novel, more holistic path to AI development: from AI system inception to retirement, we bring societal, human, and planetary needs into the equation. leiwand.ai devises strategies to maximize positive impacts and minimize risks throughout the AI system’s life cycle. We increase your AI system’s quality by ensuring it functions well in its intended context, taking into account the diversity of user groups.

Where we guide AI development

What types of bias can arise with the use of algorithmic decision support systems, and when could this amount to discrimination as defined in Article 21 of the Charter of Fundamental Rights of the European Union? 

As part of a high-profile consortium, we investigated two use cases, predictive policing systems and automated offensive speech detection, for the European Union Agency for Fundamental Rights (FRA).

This project aims to answer the question: How do we make AI fair?

While AI has tremendous potential to improve many aspects of life, it can be privacy-invasive, detrimental to our environment, and can propagate inequalities and discrimination if applied blindly and without control.

With fAIr by Design, we initiated a consortium of eight partners that aims to:

• develop a new interdisciplinary process model and toolkit for designing fair AI systems

• develop effective solution strategies for fair AI systems in five use cases within the fields of healthcare, media, and HR

For Intact, a provider of software solutions for audits, certification, accreditation, and standards, we tested the company’s algorithm for bias.

Intact uses algorithms to detect anomalies in audits. In our collaboration, we looked at food safety audits, implementing the open-source version of their algorithm so that we could test it for bias. To do so, we used data representative of their use case.
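
As a minimal sketch of what such a bias test can look like (the data fields, values, and function names below are hypothetical illustrations, not Intact’s actual code), one can compare the algorithm’s anomaly-flag rates across subgroups of the representative data:

import pandas as pd

def flag_rate_by_group(df: pd.DataFrame, group_col: str, flag_col: str) -> pd.Series:
    # Share of audits flagged as anomalous, per subgroup.
    return df.groupby(group_col)[flag_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    # Ratio of lowest to highest flag rate; 1.0 means parity.
    # A common rule of thumb treats values below 0.8 as a warning sign.
    return rates.min() / rates.max()

# Hypothetical representative audit data: one row per audit,
# 'region' a subgroup attribute, 'flagged' the algorithm's output.
audits = pd.DataFrame({
    "region":  ["A", "A", "B", "B", "B", "C", "C"],
    "flagged": [1, 0, 1, 1, 0, 0, 0],
})

rates = flag_rate_by_group(audits, "region", "flagged")
print(rates)
print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))

A large gap between the lowest and highest flag rates would indicate that the algorithm singles out some subgroups disproportionately, which is exactly the kind of pattern a representative test dataset makes visible.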

For the medical-tech start-up rotable, we reviewed the quality of the AI the company utilises to match trainee doctors with available training positions at hospitals.

With access to the code and data behind their technology, we inspected the code for possible sources of bias and examined the choices made in building the algorithm.

In the case of rotable, the algorithm is based on a mathematically defined formula, that is, a closed-form algorithm rather than a model learned from data.
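
A closed-form matcher can still encode bias, for example through the weights in its scoring formula or through the criteria it ranks on. As an illustrative sketch under assumptions of ours (this is not rotable’s actual formula; the weights, inputs, and group attribute are invented), one can compute an optimal assignment from a fixed scoring formula and then check how the realized matches distribute across applicant groups:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_score(preference_rank, exam_score):
    # Hypothetical closed-form score: a fixed weighted sum of criteria,
    # not a learned model. The weights themselves are a potential bias source.
    return 0.7 * exam_score[:, None] - 0.3 * preference_rank

rng = np.random.default_rng(0)
n = 6  # doctors and positions
pref_rank = rng.integers(1, n + 1, size=(n, n))  # 1 = most preferred
exam = rng.uniform(0.0, 1.0, size=n)
group = np.array(["x", "x", "x", "y", "y", "y"])  # protected attribute

scores = match_score(pref_rank, exam)
doctors, positions = linear_sum_assignment(scores, maximize=True)  # optimal matching

# Audit step: do realized match scores differ systematically by group?
for g in np.unique(group):
    mask = group[doctors] == g
    mean_score = scores[doctors[mask], positions[mask]].mean()
    print(f"group {g}: mean realized match score = {mean_score:.3f}")

Because the formula is deterministic, such an audit focuses on design choices, the weights and input criteria, rather than on training data, which mirrors the code inspection described above.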