ABRRA

The Algorithmic Bias Risk Radar

An AI-based risk assessment tool that will be the first of its kind.

We are currently developing our own in-house technology for pre-assessing bias risks in artificial intelligence systems.

With this technology, we will be able to identify potential adverse effects of AI systems early in their development, procurement, and certification processes.

The Risk Radar will run on a carefully curated expert database containing thousands of AI incidents.

The technology will facilitate targeted risk assessments and fundamental rights impact assessments, as required by the new EU AI Act for high-risk applications.

These applications are encountered in fields like human resources, health, finance, education and public administration.
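The exact architecture of the Risk Radar has not been published. As a rough, hypothetical illustration of the idea described above, the sketch below assumes an invented record format for an AI-incident database and shows how such a database could be filtered to surface past incidents relevant to a planned use case; all field names and data are illustrative, not ABRRA's actual design.

```python
# Illustrative sketch only: a hypothetical structure for an AI-incident database
# and a simple lookup of incidents relevant to a planned use case.
# Field names and entries are invented for this example, not ABRRA's actual schema.
from dataclasses import dataclass


@dataclass
class Incident:
    title: str
    domain: str          # e.g. "human resources", "health", "finance"
    harm: str            # e.g. "gender discrimination"
    affected_group: str  # e.g. "women"


INCIDENTS = [
    Incident("Recruiting tool downranked female applicants", "human resources",
             "gender discrimination", "women"),
    Incident("Credit limits differed by gender", "finance",
             "gender discrimination", "women"),
    Incident("Diagnosis model underperformed on darker skin", "health",
             "unequal accuracy", "people with darker skin tones"),
]


def relevant_incidents(domain: str) -> list[Incident]:
    """Return past incidents recorded for the given application domain."""
    return [i for i in INCIDENTS if i.domain == domain]


if __name__ == "__main__":
    # Pre-assessment for a planned HR recommendation system:
    for incident in relevant_incidents("human resources"):
        print(f"- {incident.title} ({incident.harm}, affected: {incident.affected_group})")
```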

Client

Partner

Estimated Time of Arrival

2026

Why We Need to Quality Control AI

Artificial intelligence systems learn from vast datasets, typically sourced from the internet, which contain significant amounts of unfiltered human-created content. As a result, algorithms trained on this data inherit existing biases related to gender, religion, ethnicity, and disability.

Left unchecked, AI systems therefore carry risks of their own, especially when they provide recommendations for decisions that could influence people’s lives.

In practice, many companies and public-sector bodies already use algorithmic decision systems without truly knowing what data they run on or what their actual impact will be; as a result, these systems often exhibit unfairness and inaccuracies.

As an example: AI-based recommendation systems are used in human resources in hopes of accelerating recruitment processes.

However, there have been instances of sexist automated recommendations in job and applicant selection processes: women predominantly received recommendations for part-time jobs in nursing or cleaning, whereas men were offered more technical occupations. The same goes for AI-supported applicant screening, where, for IT jobs, an AI system was far more likely to propose men than women to recruiters from the same applicant pool.
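To make the kind of disparity described above measurable, audits often compare selection rates between groups. The snippet below is a minimal sketch of such a check; the counts are invented for illustration and do not come from a real audit, and the four-fifths threshold is one common rule of thumb rather than a legal requirement of the AI Act.

```python
# Minimal illustration of a group fairness check on hypothetical data.
# The counts are invented for this example and do not come from a real audit.

def selection_rate(selected: int, total: int) -> float:
    """Share of applicants in a group that the system recommended."""
    return selected / total


# Hypothetical outcome of an AI screening tool for an IT job:
men_rate = selection_rate(selected=45, total=100)    # 0.45
women_rate = selection_rate(selected=15, total=100)  # 0.15

# Demographic parity difference: 0 means equal selection rates.
parity_gap = men_rate - women_rate
print(f"Selection rates: men {men_rate:.0%}, women {women_rate:.0%}")
print(f"Demographic parity difference: {parity_gap:.2f}")

# Many audit guidelines treat large gaps (e.g. the "four-fifths rule",
# women_rate / men_rate < 0.8) as a signal that the system needs review.
if women_rate / men_rate < 0.8:
    print("Disparity exceeds the four-fifths threshold: flag for review.")
```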

Compliance is Key

To safeguard our human rights, health, and safety when interacting with AI, the European Union has introduced the Artificial Intelligence Act.

This legal framework dictates that AI systems deployed in the European Union (EU) are now subject to certain quality standards, with requirements that apply to actors along the AI value chain, in particular AI system providers and deployers.

One example would be the fundamental rights impact assessment (FRIA) for deployers using high-risk AI systems.

High-risk systems, as the name suggests, pose a potential threat to the health, safety, or fundamental rights of individuals. You’ll find them in contexts like law enforcement, biometrics, education, and employment, to name a few.

Deployers of AI systems in public administration, private businesses offering services to the public, and banks and insurers are now obliged to assess what impact their systems may have on fundamental rights.

