This Nature Digital Medicine article provides an overview of the implications of the European Union (EU) AI Act for the healthcare sector.
The AI Act, adopted in 2024, is the first comprehensive legal framework on artificial intelligence (AI) with a focus on promoting trustworthy AI while protecting health, safety, and fundamental rights.
The Act introduces a risk-based approach, distinguishing categories such as prohibited AI practices, high-risk AI systems, and general-purpose AI (GPAI) models.
It specifically affects medical AI, whose AI-specific risks were not explicitly addressed by existing regulations such as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Device Regulation (IVDR).
The AI Act applies to AI systems within the EU and those outside the EU whose output is used within the region.
Key aspects of the AI Act include:
- Prohibited AI practices, such as manipulative or exploitative techniques and certain forms of biometric recognition, although some medical uses are exempt.
- High-risk AI systems like those used in medical diagnostics, which face stringent requirements for risk management, technical documentation, and human oversight.
- General-purpose AI (GPAI) models, which must meet transparency requirements and, where they pose systemic risk, additional cybersecurity and risk-mitigation obligations, even if not specifically intended for medical use.
- Innovation promotion through exemptions for AI systems used solely for scientific research or purely personal activities, and through regulatory sandboxes that support AI development.