
AI-Technology in Healthcare

Nika Nazarian

Legal Officer

This blog focuses on the qualification of AI systems used in healthcare and on the obligations of providers and deployers of such systems under the EU Regulation on Artificial Intelligence (AI Act). The AI Act follows a risk-based approach, defining four categories of risk while also distinguishing between AI systems and AI models. The risk qualification of an AI system therefore mainly relates to its (possible) purpose and/or effect. For more in-depth information on the structure, qualifications and definitions of the AI Act, see our blog “Kienhuis Legal blog: Introduction AI Act”.

Qualification: AI system or AI model

When ensuring that the AI policies of healthcare institutions comply with the AI Act, the first key question is whether the AI technology used in healthcare qualifies as an AI system or as an AI model. To determine the qualification, let us first set out the criteria for technology to qualify as an AI model. The AI Act delineates a (general-purpose) AI model as AI technology that (1) is trained on a large amount of data using self-supervision at scale, (2) displays significant generality, (3) is capable of competently performing a wide range of distinct tasks, and (4) can be integrated into a variety of downstream systems or applications. Excluded from this definition are AI models used for research, development or prototyping activities before they are placed on the market.

An AI system, on the other hand, is, broadly speaking, a machine-based technology that (1) is designed to operate with varying levels of autonomy, meaning with some degree of independence from human intervention, (2) may exhibit adaptiveness after deployment, and (3) infers how to generate outputs, such as predictions, content, recommendations or decisions, from the input it receives. Overall, AI systems are only capable of performing the specific purpose for which they are trained, whilst AI models may effectively be used for a wide range of distinct tasks.
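Purely as an illustration, and emphatically not as a legal test, the cumulative criteria above can be sketched as a simple checklist. All field names and example values below are hypothetical simplifications of the statutory definitions:

```python
from dataclasses import dataclass

@dataclass
class AITechnology:
    """Simplified, hypothetical profile of an AI technology under assessment."""
    trained_with_self_supervision_at_scale: bool
    displays_significant_generality: bool
    performs_wide_range_of_tasks: bool
    integrable_in_downstream_systems: bool
    operates_with_autonomy: bool
    infers_outputs_from_input: bool

def qualifies_as_ai_model(t: AITechnology) -> bool:
    # All four cumulative criteria summarised above must be met.
    return (t.trained_with_self_supervision_at_scale
            and t.displays_significant_generality
            and t.performs_wide_range_of_tasks
            and t.integrable_in_downstream_systems)

def qualifies_as_ai_system(t: AITechnology) -> bool:
    # Broadly: machine-based, some autonomy, and inference of outputs
    # from input (adaptiveness after deployment is optional: "may exhibit").
    return t.operates_with_autonomy and t.infers_outputs_from_input

# Example profile resembling the cancer-detection tool discussed below:
cancer_detection = AITechnology(
    trained_with_self_supervision_at_scale=True,
    displays_significant_generality=False,
    performs_wide_range_of_tasks=False,
    integrable_in_downstream_systems=False,
    operates_with_autonomy=True,
    infers_outputs_from_input=True,
)
print(qualifies_as_ai_model(cancer_detection))   # False
print(qualifies_as_ai_system(cancer_detection))  # True
```

In practice the assessment is, of course, a legal and factual analysis of the concrete technology, not a mechanical tick-box exercise.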

To illustrate the qualification, we will examine different AI technologies in healthcare. Let us take the following two examples: (1) technology that provides AI-powered cancer detection in (scanned and digitized) body tissue, whereby AI may enhance accuracy and enable (earlier) detection of malignant tumors, and (2) AI technology that transcribes and summarises conversations between doctors and their patients, regarding e.g. medical backgrounds, intended treatments, complaints and procedures.

The first example, the AI technology used for cancer detection, is an AI system. The system in question is specialised in aiding a specific diagnostic process, whereby it enhances accuracy and enables earlier detection. Its providers have designed it to carry out its medical function with a high degree of precision. This AI technology would consequently not fulfil the criteria for AI models, since it does not display significant generality, is not capable of competently performing a wide range of distinct tasks, and most likely cannot be integrated into a variety of downstream systems or applications. Moreover, after the provider has trained the system, it likely generates its output without any human intervention. Therefore, such AI technology qualifies as an AI system under the AI Act.

Let us turn to the second example of AI-based medical technology, namely the tool used to transcribe and summarise conversations between physicians and patients. This AI tool shares autonomy and output-generation characteristics with the AI-powered cancer detection technology; it transcribes and generates texts without the need for human intervention. However, the qualification of AI tools also depends on the manner and extent to which the technology can be utilized by its developers. Although general AI technology may be used for the same specific purposes, one must take into account that such technology may be specifically trained to transcribe and summarise medical information. Indications that a tool has been specifically developed for use in the medical field are features such as the automated integration of generated summaries into patient records, or the attachment of standardized medical codes to the summaries based on the relevant complaints and conditions identified by the AI system. If an AI tool exhibits such features, it would likely not be suitable for integration into a variety of downstream systems or applications. Consequently, and as with the first example, this technology would not qualify as an AI model, but as an AI system.

Qualification: Risk

The second step is determining the risk qualification of the respective AI technology. Neither of the above-mentioned examples of medical AI systems falls under the prohibited category of AI systems that pose an unacceptable risk. For an elaborate list of prohibited AI systems, see our blog ‘Prohibited AI-systems’.

Both AI systems seem to qualify as high-risk systems. Generally speaking, an AI system is considered high-risk if it is listed by the European Commission in one of eight specified areas (with potential for expansion), if it is used as a safety component in regulated products, or if it is itself a product that is already subject to certain product safety requirements under existing EU legislation. For more information, see our blog “High-risk AI-systems and requirements”.
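For readability only, the three alternative routes to a high-risk qualification described above can be sketched as follows; the parameter names are illustrative simplifications, not statutory terms:

```python
def is_high_risk(listed_in_specified_area: bool,
                 safety_component_of_regulated_product: bool,
                 product_under_eu_safety_legislation: bool) -> bool:
    """Any one of the three routes suffices for a high-risk qualification."""
    return (listed_in_specified_area
            or safety_component_of_regulated_product
            or product_under_eu_safety_legislation)

# A product already covered by existing EU product safety legislation
# (such as a medical device) takes the third route:
print(is_high_risk(False, False, True))  # True
```

Whether a given route actually applies is, again, a matter of legal analysis of the concrete system and the legislation covering it.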

The EU legislation under which AI systems are subject to product safety requirements includes the Medical Device Regulation (Regulation (EU) 2017/745, the “MD-Regulation”). This regulation applies to medical devices, including instruments, appliances, software, implants and other articles intended for use in the diagnosis, prevention, monitoring, treatment or alleviation of diseases or conditions. Software that performs a medical function, such as diagnostic support or therapy planning, is in principle also covered by the MD-Regulation.

Considering the foregoing, the AI-powered cancer detection system is clearly a medical device under this regulation and consequently qualifies as a high-risk AI system under the AI Act. The qualification may not be as clear with regard to the AI system used to transcribe and summarise conversations between physicians and patients. After all, software that only performs general or administrative functions, such as performing simple searches (e.g. library functions that retrieve files based on metadata), is not considered a medical device. Tasks such as sending emails, web or voice messages, data parsing, word processing and making backups are also not considered software with a medical purpose in themselves.

However, software that processes, analyses, creates or modifies medical information may still fall under the MD-Regulation, provided these tasks are performed with a medical purpose. Illustrative of such software is software that modifies the display of data in a medical way, for instance by searching for abnormalities in medical images to support a diagnosis or by locally enhancing contrasts in an image to support decision-making. Considering the above, the transcribing and summarising AI system would also fall within the scope of the MD-Regulation and be considered a high-risk AI system.

Practically, high-risk AI systems create compliance duties for both their providers and deployers, but the bulk sits with the providers. Providers are responsible for designing the system to be compliant from the start. This includes:

  • setting up risk-management processes;
  • ensuring good data governance and low bias;
  • preparing technical documentation and logs;
  • building in human-oversight features;
  • ensuring transparency of the system's capabilities and limitations for deployers;
  • meeting standards for accuracy and cybersecurity;
  • running a quality management system;
  • undergoing conformity assessments;
  • registering the system; and
  • carrying out post-market monitoring and incident reporting.

Deployers have narrower, operational duties. They must:

  • use the system according to instructions;
  • ensure human oversight is actually exercised;
  • monitor performance;
  • keep logs where relevant;
  • report serious incidents; and
  • in some public-sector contexts, conduct fundamental-rights impact assessments.

See our blog “High-risk AI-systems and requirements” for more information on this matter. For further questions, please contact the lawyers of the AI team of Kienhuis Legal.

Do you have any questions?
Please contact us