This blog focuses on the qualification of AI systems used in human resources and the obligations that apply to their providers and deployers under the EU Regulation on Artificial Intelligence (AI Act). The AI Act follows a risk-based approach, defining four categories of risk while also distinguishing between AI systems and AI models. Because of this approach, the risk qualification of AI systems mainly relates to their (possible) purpose and/or effect. For more in-depth information, see our "Kienhuis Legal blog: Introduction AI Act".
System or model
For AI compliance, providers and deployers must qualify the AI technology used in human resources as either an AI system or an AI model in order to assess the applicable compliance criteria. To determine the qualification, let us first set out the criteria for technology to qualify as an AI model. The AI Act delineates a (general-purpose) AI model as AI technology that (1) is trained with a large amount of data using self-supervision at scale, (2) displays significant generality, (3) is capable of competently performing a wide range of distinct tasks, and (4) can be integrated into a variety of downstream systems or applications. Excluded from this definition are AI models used for research, development or prototyping before they are placed on the market.
An AI system, on the other hand, entails a machine-based technology that (1) is designed to operate with varying levels of autonomy, meaning with some degree of independence from human intervention, (2) may exhibit adaptiveness after deployment, and (3) infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions.
Unacceptable risk AI systems
The second step concerns the risk qualification of the AI technology. Some forms of AI used for human resources purposes may fall under the prohibited category, as they pose an unacceptable risk to (future) employees. This category includes emotion recognition systems intended to identify or infer emotions, unless the AI system is intended to be placed on the market or put into service for medical or safety reasons. For an elaborate list of unacceptable AI systems, see our blog "Kienhuis Legal Unacceptable Risk AI Systems". If AI systems used in human resources are considered to pose an unacceptable risk and do not fall within the exception, the use of said systems is prohibited. When the systems do not fall under this category, further qualification is required.
High risk AI systems
There is a great chance that the use of AI systems in human resources poses a high risk to (future) employees, since the AI Act states that AI systems are generally considered to pose a high risk when used in the area of employment. Two specifications are given in this regard. Firstly, high-risk AI systems entail systems used for recruitment and work-related selection, such as filtering job applications, placing targeted job advertisements and evaluating candidates. Secondly, AI systems pose a high risk when they are used for decisions that affect work-related relationships such as promotions, for the allocation of tasks based on personal traits, or for evaluating the performance of employees. If AI systems are used for such purposes, their providers and deployers must comply with extensive obligations. For more information, see our blog "High-risk AI-systems and requirements".
Excluded from the high-risk qualification are AI systems that do not pose a significant risk to the health, safety or fundamental rights of people because they do not have a material impact on the outcome of decision-making. This is the case when AI systems have the purpose of (1) performing narrow procedural tasks, (2) improving the results of previously completed human activity, (3) detecting decision-making patterns, whereby the system is not meant to influence or replace previous human assessments, or (4) performing preparatory tasks for assessments.
If AI systems used for human resources purposes fall under this exception, their providers and deployers do not have to comply with the extensive high-risk obligations. In that case, they only have to comply with the obligations for AI systems with limited risk, which are far less stringent and extensive than those for high-risk AI systems.
In practice, high-risk AI systems create compliance duties for both providers and deployers, but the bulk of these duties rests with providers. Providers, being the parties placing the system on the market, are responsible for designing the system to be compliant from the start. This includes:
- setting up risk-management processes;
- ensuring sound data governance and mitigating bias;
- preparing technical documentation and logs;
- building in human-oversight features;
- ensuring transparency towards deployers about the system's capabilities and limitations;
- meeting standards for accuracy, robustness and cybersecurity;
- running a quality management system;
- undergoing conformity assessments;
- registering the system; and
- carrying out post-market monitoring and incident reporting.
Deployers, being the parties actually using the AI systems, have narrower, operational duties. They must:
- use the system according to instructions;
- ensure human oversight is actually exercised;
- monitor performance;
- keep logs where relevant;
- report serious incidents; and
- in some public-sector contexts, conduct fundamental-rights impact assessments.
See our blog "High-risk AI-systems and requirements" for more information on this matter. For further questions, please contact one of the lawyers of the AI team of Kienhuis Legal.