The EU AI Act and its application to medtech
AI has potential applications across the life sciences sector, from drug discovery and clinical trials through to marketing and pharmacovigilance. It also plays an increasing role in medical devices, such as sophisticated diagnostic support tools and responsive systems that gather and take account of individual user data.
Given its increasing importance to all parts of the economy, AI is attracting the attention of regulators, whether on a sector-by-sector basis or through a multi-sector, cross-cutting approach. The EU AI Act is one of the leading pieces of AI-focused legislation, taking a risk-based approach to regulating AI across all sectors. Now that the AI Act has been finalised, we consider its likely impact on the medtech industry.
The AI Act’s stated goal is to address the potential risks presented by AI systems and ensure that they respect the fundamental rights and values recognised in the EU, while also supporting AI innovation. While cross-sector legislation makes sense for policymakers as a universal approach to addressing risk, it can present difficulties for producers. The EU already has in place a comprehensive suite of detailed regulations for different classes of products, and the AI Act overlays this with additional requirements where AI is used in those products.
The heavily regulated “high risk” category in the AI Act includes the use cases set out in Annex III, such as systems used in critical infrastructure, remote biometric identification, and education and vocational training. It also includes any product which is an AI system, or which uses an AI system as a safety component, and which is subject to third-party conformity assessment under specified EU product legislation. That list of product legislation includes both the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR). These products will therefore be subject to regulation under both the product-specific framework and the AI Act.
The AI Act recognises that this layering of requirements could lead to inconsistency and duplication. The expectation is that the AI components of a product will be assessed as an addition to the existing conformity assessment process for that product. Article 8 requires providers to comply with the relevant product-specific legislation, such as the MDR and the IVDR, in addition to the AI Act. However, it states that “providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation”.
The post-market monitoring requirements in the AI Act also recognise this potential duplication. Article 72 notes that, in order to “ensure consistency, avoid duplications and minimise additional burdens”, providers shall have a choice of integrating, as appropriate, the elements required under the AI Act into systems and plans already existing under product-specific legislation, provided that this achieves an equivalent level of protection.
The European trade association for the medical technology industry, MedTech Europe, has highlighted concerns over how this layering of regimes will operate. While recognising the progress made in improving the consistency and clarity of the legislation, MedTech Europe notes that there remains scope for uncertainty and duplication. It calls on the European Commission to produce guidelines for the medtech sector swiftly and in consultation with industry, and to ensure that these are consistent with the existing legislation (the MDR and the IVDR).
While some elements of the AI Act take effect from 2 February 2025, the date of application to medtech is 2 August 2027. This should allow time for guidelines and processes to be developed, although the work will need to progress quickly to give manufacturers time to comply.
Based on statements in the King’s Speech and the announcement that the UK has signed the first international treaty addressing the risks of artificial intelligence, we anticipate further regulation in the UK in this area, aligned with the overarching aim of protecting individuals from potential harm arising from AI-specific risks.