Article

AI Systems in Research and Development

Nika Nazarian, Legal Officer

The EU Regulation on Artificial Intelligence (AI Act) applies to AI systems and models. The AI Act uses a risk-based approach, whereby different obligations correspond to different levels of risk. In total, there are five risk categories: unacceptable-risk AI systems, high-risk AI systems, limited-risk AI systems, general-purpose AI models with systemic risk, and general-purpose AI models without such risk. For more in-depth information, see our blog "Kienhuis Legal blog: Introduction AI Act".

AI systems and models used solely for scientific research and development are excluded from the obligations in the AI Act. Moreover, the AI Act does not apply to product-oriented research, testing, or development activities relating to AI systems or models, provided that they are not (yet) placed on the market or put into service. This exception follows a market-based logic: it gives developers the freedom to experiment and refine technologies, so they can ensure these meet safety and ethical standards before public release, even using techniques that might otherwise be prohibited. Be aware, however, that such systems and models must comply with the obligations of the AI Act as soon as they are placed on the market or put into service following that research and development.

Take, for example, a university research team that develops a specialized AI model in a controlled laboratory setting to study cognitive and behavioral responses to AI-driven stimuli. Because this technology is developed and used for the sole purpose of scientific research and is not yet available for general distribution or professional use, it generally falls within the R&D exception and is not subject to the Act's requirements.

However, this exception does not extend to research in real-world conditions. Real-world testing refers to testing AI systems or models in environments that closely resemble, or are part of, the actual environments in which they will be deployed. Such testing generally takes place within regulatory sandboxes: controlled settings established to allow safe experimentation with innovative technologies under the supervision of the relevant authorities. When research and development is conducted within a formal AI regulatory sandbox or under real-world conditions, Articles 57–61 of the AI Act must be adhered to.

Take, for example, a public authority that decides to test an AI-driven facial recognition system on public streets during a major festival to identify volunteers and verify the system's performance in a non-simulated environment. Because this constitutes "testing in real-world conditions", it is explicitly excluded from the R&D exception. Another example is testing an AI research tool in a real classroom with actual students and teachers; because these are "actual users" performing their normal work in their real environment, the exception does not apply.

For further questions, please contact one of the lawyers in the AI team at Kienhuis Legal.

Do you have any questions?
Please contact us