The Promise and Peril of AI in Human Resources
Introduction
In the past, human resources (HR) management was largely an interpersonal realm, where professionals leaned on their “people skills” to choose the best candidates for a job and to manage employees – whom to promote, transfer or lay off was mostly a question of human-to-human interaction.
These days, AI is increasingly “getting a say” in HR matters, providing recommendations to HR professionals and C-level executives that are meant to optimize workflows.
From automated CV screening to AI-led performance reviews, schedule creation and predictive attrition analytics, the digital transformation of HR is already in full swing.
Yet while AI promises efficiency, objectivity, and optimization, it can introduce profound risks to our well-being - particularly when it comes to fairness, transparency, and accountability. At the same time, Europe is stepping up regulation with the EU AI Act, classifying HR-related AI applications as high-risk. This creates new obligations around transparency, accountability, and fairness - raising the stakes for businesses and workers alike.
What, then, is the true impact of AI systems on human resource management today - and possibly in the future? Does artificial intelligence truly “improve” HR management and employment, or is it simply a fancy-looking hoax that creates more problems than it solves? And how will the EU AI Act affect the field of employment now that AI is involved in HR management?
An event hosted by the AI Service Office and the Vienna Chamber of Labor tackled these very questions, exploring how AI is reshaping human resource management - and what we must do to ensure it doesn’t reshape it for the worse. In this article, you can read up on the key takeaways from the experts who attended and presented.
Artificial Intelligence as a Societal Force
“While we often overestimate short-term effects and ignore early developments, long-term consequences can reshape entire industries before we've had a chance to respond.”
Titus Udrea of the Economic Chamber Vienna set the stage with a sweeping overview of AI’s dual promise and peril. While AI can boost productivity and improve life in sectors like healthcare and mobility, it also introduces systemic challenges: from environmental tolls to algorithmic bias and deepening power imbalances.
Big tech dominates AI infrastructure and data, concentrating power and raising entry barriers. This deepens societal dependencies and limits alternative, public-interest-driven solutions. As AI spreads across sectors like industry, education, and healthcare, it introduces risks of bias, reduced transparency, and diminished worker agency.
AI's massive energy and resource needs - from electricity to rare minerals - amplify environmental and geopolitical tensions. These demands must be addressed alongside rising mental stress and climate anxiety in the workforce.
Ultimately, AI is not just a tool but a societal force, Udrea argued. Its development must be guided by transparent regulation, ethical standards, and open ecosystems to ensure that its benefits are equitably shared - and that its risks don’t define our future.
AI Trends in the Field
The Broadcasting and Telecommunications Regulatory Authority (RTR) conducted interviews with around 7,060 companies in Austria to assess how AI is being integrated into human resource management. The study focused on the classification of AI systems used in HR under the EU Artificial Intelligence Act, particularly identifying applications in employment and self-employment as potentially high-risk. This classification comes with a range of obligations for operators, including transparency, supervision, monitoring, and information sharing.
Robert Kiraly & Thomas Schreiber from RTR explained the findings gathered from the interviews.
Key Findings:
About two-thirds of companies use AI in HR, either internally or via external providers.
Company size and age had little influence on AI adoption.
The top three uses of AI in HR are:
Handling routine administrative tasks
Supporting recruitment
Managing internal knowledge
In recruitment, the most common AI application was writing and targeting job advertisements.
Most companies limit AI use to a few specific areas - only about 1% utilize AI across 7 to 10 HR functions.
Motivations and Barriers:
The primary driver for adopting AI was cost savings, followed by improved quality.
Privacy and data protection concerns were the most cited barriers, whereas fears of discrimination or high costs were less commonly mentioned.
Governance and Data Use:
Around a third of companies have training, documentation, or internal policies regarding AI use.
9% are creating formal works agreements on AI.
Texts and employee data are the main data sources used by AI in HR.
Over two-thirds of companies have a data protection officer, but only 16% have an AI officer.
Roughly half of the AI tools used are open-source.
Half of the companies plan to increase their use of AI in HR in the next year, while very few intend to reduce it. Notably, only 9% of companies are working on formal AI agreements with staff - a sign that governance is lagging behind adoption.
Their takeaway: regulation, like the EU AI Act, will soon require companies to step up. Transparency, documentation, and accountability aren’t optional - they’re becoming legal obligations.
Transparency and the Right to Explanation
Madeleine Müller (Research Institute) zeroed in on the legal and ethical implications of AI in the workplace - particularly when algorithms influence critical HR decisions like hiring, performance evaluation, or shift scheduling.
Toward Responsible AI in Employment:
Her message was clear: transparency isn’t optional. Under both the EU AI Act and the GDPR, employees have the right to know when and why AI affects them. Both legal frameworks aim for explainable AI: decisions must be traceable, comprehensible, and challengeable. This reinforces employee rights, works council involvement, and the need for ethical design and deployment of workplace AI.
While the AI Act’s implementation details remain under development, the direction is clear: transparency is not optional - it’s a cornerstone of fair digital labor practices.
What Should Be Explained?
Müller argued that workplaces deploying AI should provide the following explanations:
Why the decision was made to use AI - key inputs, logic, and impact.
How the AI system was used - its role and weight in the process.
What rights the individual has - especially how to object or seek redress.
Observing AI to Understand AI
David Walker presented the concept of an AI observatory for Austria - an initiative that seeks to systematically track and understand AI’s real-time impact on labor markets.
Instead of speculating about the future, this observatory would focus on how AI is already reshaping jobs, introducing hybrid human-AI roles, and changing how tasks are distributed and evaluated. It would serve as a knowledge hub for evidence-based policymaking, workforce training strategies, and inclusive tech design.
Why an AI observatory is necessary: The labor market is changing
There is a polarization of employment: Technical, analytical activities are increasing, while simple, repetitive tasks are becoming increasingly automated.
Job profiles are changing – hybrid roles are emerging at the interface between humans and AI, especially in areas with a high level of interaction.
Algorithmic control is increasing: AI influences timing, task allocation, and evaluation, which can create pressure and loss of control.
Lack of transparency in AI decisions creates uncertainty and trust deficits.
Emotional and cognitive stress is increasing due to monitoring and pressure to adapt.
Motivation and meaningfulness are ambivalently affected – AI can relieve stress, but it can also limit freedom of action.
Certain prerequisites are necessary for companies to use AI successfully
Companies face numerous challenges in this regard, ranging from a shortage of skilled workers and regulatory hurdles to fragmented data silos. At the same time, AI systems offer great opportunities: increased efficiency, innovation potential, new business models, and scalability.
To use AI successfully, companies need:
Access to high-quality data, reliable infrastructure, and well-thought-out governance structures.
AI skills development through training and targeted role definition.
Interdisciplinary collaboration between IT and specialist departments.
Trust building through transparency, communication, and employee participation.
AI is no longer just a pilot project, but part of systematic corporate strategies. The key areas of application include:
Process optimization through data-based analysis
Personalization at customer interfaces
Product and service development with AI-based features
Generative AI for content creation
Decision support through pattern recognition
Current developments show:
A shift from individual projects to systematic integration.
The development of internal AI expertise in companies.
Greater use of AI in non-technical areas such as marketing and HR.
The growing importance of technical and organizational infrastructure for the sustainable use of AI.
When Algorithms Manage
Wolfie Christl of Cracked Labs delivered a sobering analysis of algorithmic management, where software systems increasingly monitor, control, and assess workers in real time.
While pitched as tools for efficiency, these systems often become surveillance machines that erode autonomy, increase stress, and lock employees into rigid, opaque performance metrics. The promise of objectivity is a myth, said Christl - bias is embedded in data and amplified by automation.
Broader Consequences
Systems that are designed to optimize output can inadvertently dehumanize work, stripping it of meaning, context, and care. Processes that once relied on human judgment are now rigidly encoded, removing the room for negotiation and empathy.
The use of algorithmic control can intensify work, undermine skills, and create an atmosphere of mistrust and stress. Employees may feel constantly watched and judged, which affects:
Their mental health
Their motivation and engagement
The quality of work
And even the employer’s reputation
There is also a dangerous development in how data is used. It is collected for one purpose, but gradually applied to others or even shared with third parties, especially via cloud-based tools.
This shift doesn’t just impact individuals. It reshapes workplace culture, erodes trust, and transfers risk from employers to employees. His call to action? Question the belief that tech is always the answer and fight for human-centered design.
The AI Act and Human Resources Management
The EU AI Act aims to make artificial intelligence safer through risk-based requirements on governance, testing, and transparency. This is especially important in the context of HR because of high-risk AI - systems considered by the EU to be significantly risky to human health and rights in certain application environments, such as human resource management.
Notably, the AI Act already banned certain uses outright, such as subliminal manipulation, social scoring, emotion recognition, and biometric categorization, particularly when these infringe on fundamental rights or workplace safety.
The AI regulation will significantly impact employment contexts by addressing how AI is used to monitor, evaluate, and manage workers. AI systems are increasingly employed in both office and production environments - for example in automation, support, and decision-making, said Johannes Warter of the University of Salzburg.
He explained that AI in HR raises serious concerns around algorithmic control, surveillance, and power imbalances, especially where AI decisions lack transparency or appeal mechanisms. While the AI Act focuses on product safety, it often overlooks labour-specific safeguards, creating legal uncertainty.
Warter concluded that the AI Act is overwhelmingly complex, requiring in-depth knowledge of both risk management and labor law, and that it still leaves considerable legal uncertainty. But he also emphasized that the AI Act provides added value for workers: developers of AI tools face documentation and registration duties, and enforcement of the regulation is backed by penalties - all to ensure that AI systems work for the benefit of humans and do not harm them, even in high-risk situations.
However, core issues like work intensification and the fair distribution of risks remain unresolved under the AI Act, he added. Protection of fundamental rights is central - but translating that into everyday workplace realities is still a challenge.
Voices of the Panel - AI at Work
Rania Wazir (yours truly, leiwand.ai)
Rania focused on the limits of bias mitigation in AI systems, especially in HR. She questioned whether certain tasks should be done by AI at all and emphasized the importance of risk assessment before deployment. She was skeptical of claims that bias can be removed entirely, referencing the “lipstick on a pig” critique of superficial solutions. Instead, she proposed a structured approach: identify risks, evaluate the system’s weaknesses, and only proceed if no better non-tech solution exists. She also called for less reliance on resource-heavy models like LLMs and more emphasis on explainable, task-appropriate systems.
Eva Angerler (GPA – Austrian Trade Union Federation)
Eva emphasized that since the advent of ChatGPT, interest in AI within companies has surged, marking a technological leap. However, she pointed out the need to clarify terminology and focus areas - whether we’re talking about AI in HR, algorithmic management, or other applications. A recent GPA survey showed that AI use in HR is already widespread, including tools like knowledge databases, chatbots, and algorithmic decision-making. The rapid pace of development calls for clear strategies and support for works councils, with the involvement of domain experts to ensure proper implementation and worker protection.
Florian Schäfer (WKÖ – Austrian Economic Chamber)
Florian highlighted that only about 20% of Austrian companies currently use AI, with much of it being generative AI, not in-house developed systems. He noted that while AI has long been present in industry, many businesses are hesitant due to legal uncertainty and a lack of skilled professionals. He stressed that framing AI competencies positively, rather than confrontationally, can encourage companies to invest in training and effective tool usage. He also raised concerns about the reliance on imported AI models, particularly from the U.S., and called for stronger European development.
Sabine Köszegi (TU Wien, AI Advisory Board)
Sabine critiqued the "technochauvinism" - the belief that tech is always the best solution. She warned against outsourcing strategic decisions like hiring to external AI systems trained on past data, which can lead to poor decisions and loss of contextual judgment. AI, she argued, can’t handle the complexity of unexpected or human-centered decisions. She expressed concern about de-skilling and cognitive offloading, especially in middle-education jobs, where overreliance on GenAI may result in reduced critical thinking and lost competencies.
Conclusion: Rethink HR, Don’t Just Automate It
AI is reshaping HR - that much is clear. While it brings speed and efficiency, it also risks deepening bias, surveillance, and worker stress. The EU AI Act raises the bar for fairness and accountability, but real impact depends on how it's applied.
The path forward? Use AI to support humans, not replace them. That means ethical design, transparency, and putting people - not just performance - at the center of HR innovation.
AI in HR isn’t just a tech question. It’s a human one. Let’s try to be more human.
Photo credit, title picture: Ludwig Schedl