Author:
Sarah de Heer is a doctoral candidate at the Faculty of Law at Lund University. In her doctoral research, she examines the impact of AI-driven medical devices in precision medicine on the right to health in Sweden. More specifically, Sarah scrutinises the extent to which the conformity assessment procedure, which allows medical devices to be placed on the internal market of the European Union, can safeguard the quality pillar under the right to health. Sarah’s doctoral research is funded by Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS).
Imagine the following: you visit your General Practitioner because you are experiencing unexplained weight loss and belly pain that spreads to your back. Your General Practitioner decides to test your blood using an AI-driven medical device in precision medicine that predicts the likelihood of pancreatic cancer. As your blood sample demonstrates a high likelihood of pancreatic cancer, the multidisciplinary team suggests starting chemotherapy. Since multiple combinations of medicinal drugs can be included in the chemotherapy treatment plan, the tumour in your pancreas is tested with another AI-driven medical device in precision medicine to determine its sensitivity to the various medicinal products. Based on these results, you are given the medicinal products that have shown high sensitivity and are expected to give you the best results. After treatment, the tumour appears to be in remission.
The above scenario appears promising: quicker and more accurate diagnosis and treatment thanks to the combination of precision medicine and AI. Precision medicine tailors diagnosis and treatment to the data of an individual belonging to a specific (sub)group of the general population, but this tailored approach to medicine is time-consuming and costly. Adding AI allows precision medicine to become faster and more scalable. However, the addition of AI – while undoubtedly revolutionary for the healthcare sector – has its own disadvantages stemming from AI’s inherent characteristics, which include inaccuracy. The question then becomes: what if the AI-driven medical device in precision medicine is inaccurately trained and produces incorrect output? Going back to the scenario: imagine that the AI-driven medical devices incorrectly predict that you do not have cancer, or recommend a cancer treatment that does not suit you best and actually results in severe and irreversible side effects.
Unfortunately, this is not an unlikely scenario. Training an AI-driven medical device in precision medicine to identify, for instance, pancreatic cancer requires the training, validation, and testing datasets to be complete and accurate. While this is a particularly demanding task, it is a vital one to ensure the health of all individuals in the population. If the datasets on which the AI-driven medical device in precision medicine is trained only include data from certain segments of the population, the device would enhance the health of individuals who resemble those datasets while leaving individuals belonging to other demographics behind. Consequently, the overall health of some individuals may improve while simultaneously deteriorating for others. As such, the right to health may either be enhanced or impeded, depending on whether the individual resembles the dataset used.
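This risk can be made concrete with a minimal, purely illustrative sketch in Python. It trains a toy classifier on synthetic data dominated by one demographic group; every feature, group, and number below is an invented assumption for the sake of illustration and does not depict any actual device or dataset.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Synthetic 'biomarker' readings; `shift` mimics a demographic
        # difference in how the underlying signal presents.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
        y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > 5 * shift).astype(int)
        return X, y

    # Training data dominated by group A; group B is underrepresented.
    Xa, ya = make_group(900, shift=0.0)
    Xb, yb = make_group(100, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                     np.concatenate([ya, yb]))

    # Evaluated separately, headline performance masks a stark gap:
    # the model scores well on group A but near chance on group B.
    for name, (X, y) in [("group A", make_group(500, 0.0)),
                         ("group B", make_group(500, 1.5))]:
        print(name, "accuracy:", round(accuracy_score(y, model.predict(X)), 3))

The precise numbers are beside the point; the pattern is what matters: a device can appear accurate on average while systematically failing the very individuals its training data did not capture.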
The right to health, a fundamental right, rests upon four pillars, namely 1) availability, 2) accessibility, 3) acceptability, and 4) quality.[1] While AI-driven medical devices in precision medicine may both positively and negatively affect the pillars of availability, accessibility and acceptability, this doctoral project specifically scrutinises the quality pillar of the right to health. However, quality is not a fixed standard with a clear interpretation or methodology. Rather, given the ever-changing nature of medical science and legal requirements[2], the notion of ‘quality’ is a moving target that continuously evolves.[3] Moreover, as the interpretation of quality hinges on the context in which it is used, there is no single global interpretation. Zooming in on the European Union, the provision of health care should be – amongst other things – safe and effective.[4]
In short, the safety aspect of quality requires that healthcare materials, including AI-driven medical devices in precision medicine, should not cause harm or inflict avoidable injuries.[5] The effectiveness requirement under the quality pillar entails that the provision of health care should be based on evidence[6] and aimed at improving an individual’s health condition[7]. Both the safety and the effectiveness of AI-driven medical devices in precision medicine hinge heavily upon the datasets used for training, testing, and validation.
To ensure the safety and effectiveness of medical devices placed on the internal market of the European Union, the manufacturer needs to successfully complete the conformity assessment procedure under the Medical Devices Regulation[8]. Where the medical device includes an AI component, the Artificial Intelligence Act complements this conformity assessment procedure.[9] Thus, the regulatory framework governing the conformity assessment of AI-driven medical devices in precision medicine comprises both the Medical Devices Regulation and the Artificial Intelligence Act. Under the conformity assessment procedure, the manufacturer needs to submit evidence demonstrating the safety and effectiveness of their AI-driven medical device in precision medicine to a third party, namely a Notified Body.[10]
During the conformity assessment procedure, this so-called ‘conformity assessment body’ reviews the evidence submitted by the manufacturer. Notified Bodies issue certificates indicating conformity with the requirements laid down in the Medical Devices Regulation and the Artificial Intelligence Act.[11] As such, Notified Bodies are indispensable in verifying the evidence attesting to the safety and effectiveness of AI-driven medical devices in precision medicine. After having successfully completed the conformity assessment procedure, the manufacturer may draw up the EU Declaration of Conformity[12] and affix the CE marking to the AI-driven medical device in precision medicine[13]. Both demonstrate compliance with EU law, including the provisions on safety and effectiveness. Thus, the EU Declaration of Conformity – and consequently the CE marking – carry a significant legal presumption.[14]
However, the question is to what extent the manufacturer can provide evidence attesting to the safety and effectiveness of an AI-driven medical device in precision medicine. Let alone to what extent the Notified Body can accurately examine the evidence submitted by the manufacturer. Given the inaccuracy surrounding AI, the manufacturer and the Notified Body face an immense task: ensuring that AI-driven medical devices in precision medicine uphold the quality pillar under the right to health, thereby enhancing the health of individuals across all demographics of the population.
In short, the use of AI-driven medical devices in precision medicine may lead to quicker and more accurate health care. Their use, however, brings challenges, especially where AI-driven medical devices in precision medicine are trained on improper datasets that may, for instance, be incomplete or unrepresentative. This may enhance the quality pillar under the right to health for those individuals who are represented in the datasets, while simultaneously leading to a decline in quality for those who are not. Moreover, the four pillars – availability, accessibility, acceptability, and quality – are interlinked, which means that enhancing quality may also positively affect the other pillars. The other side of the coin, however, is that a decline in quality may negatively affect availability, accessibility, and acceptability. As such, individuals matching the datasets may experience an overall strengthening of their right to health, while individuals not mirroring the datasets may see their overall right to health decline.
Thus, there is a need for caution when implementing AI-driven medical devices in precision medicine in the healthcare sector. Specifically, it is vital that the regulatory regime is fit to address the problems associated with faulty datasets. Furthermore, Notified Bodies – in their capacity of overseeing the conformity assessment procedure – ought to be given the tools to ensure that quality, and thus the overall right to health, is enhanced not only for a select group of the population but for everyone.
[1] Office of the High Commissioner for Human Rights, CESCR General Comment No. 14: The Right to the Highest Attainable Standard of Health (Art. 12) (Document E/C.12/2000/4), para. 12.
[2] Santa Slokenberga, ‘The standard of care and implications for paediatric decision-making. The Swedish viewpoint’ in Clayton Ó Néill and others (eds), Routledge Handbook of Global Health Rights (Taylor & Francis Group 2021) 122, 128.
[3] Alicia Ely Yamin, ‘The right to health’ in Jackie Dugard, Bruce Porter and Daniela Ikawa, Research Handbook on Economic, Social and Cultural Rights As Human Rights (Edward Elgar Publishing Limited 2020) 159, 162-163.
[4] Additionally, the provision of healthcare should also be 1) patient-centred, 2) timely, 3) efficient, and 4) equitable. Institute of Medicine, Crossing the Quality Chasm. A New Health System for the 21st Century (National Academy Press 2001), 39-40.
[5] European Commission, Future EU agenda on quality of health care with a special emphasis on patient safety (Publications Office 2014), 25-26.
[6] Helen Hughes, ‘Patient Safety and Human Rights’ in Clayton Ó Néill and others (eds), Routledge Handbook of Global Health Rights (Taylor & Francis Group 2021) 259, 261.
[7] European Commission, Future EU agenda on quality of health care with a special emphasis on patient safety (Publications Office 2014), 25-26.
[8] For more information about the conformity assessment procedure of medical devices, please see Article 52 Medical Devices Regulation.
[9] For more information about the conformity assessment procedure for AI systems, please see Article 43 Artificial Intelligence Act.
[10] Article 53 Medical Devices Regulation and Article 43 Artificial Intelligence Act.
[11] Article 56 Medical Devices Regulation and Points 3.2 and 4.6, subpara. 2 Annex VII Artificial Intelligence Act.
[12] Article 10(6) Medical Devices Regulation.
[13] Article 20(1) Medical Devices Regulation.
[14] Article 19(1) Medical Devices Regulation.
Author: Wendy Kwaku Yeboah is a PhD candidate in EU law at the University of Bologna, with a particular focus on EU health law and its digitalisation. Her research explores the regulation of cross-border telemedicine within the framework of the EU internal market, examining both the constitutional and operational dimensions of EU health governance. She investigates how EU law shapes access to healthcare across borders, the challenges of ensuring patient safety and data protection in digital health services, and the evolving role of the Court of Justice of the European Union in this field.
This piece builds on my doctoral research and reflects the presentation I gave at an EUHealthGov panel during the UACES 2025 conference. I am grateful to the EUHealthGov network for providing financial support for my participation, and to all panel participants for their valuable contributions to the discussion. Any errors are solely my responsibility.
Telemedicine moved from niche to necessary during COVID-19. The EU has since built important pieces of a digital health architecture – but cross-border care still runs into legal grey zones and uneven infrastructure. Here’s what works, what doesn’t, and what to fix next.
Why telemedicine matters now
The pandemic turbo-charged digital care and exposed long-standing weaknesses in health systems. Backed by unprecedented EU recovery funding, telemedicine has become a mainstream complement to in-person services – linking patients and clinicians across distance and time. But when care crosses borders, telemedicine stops being just a technical solution and becomes a legal, ethical, and governance stress test for the internal market.
What counts as telemedicine (and why the definition matters)
Telemedicine is not one thing. It spans teleconsultation, telediagnosis, remote monitoring, and tele-expertise, often threaded with AI-enabled tools. This multifunctionality puts it at the crossroads of health policy, the internal market, and fundamental rights. Member States retain control over how they organise and finance care; the EU leans on internal-market powers to harmonise the digital rails (data protection, digital identity, AI). The result? An innovation that is inherently cross-border is governed by rules that are still mostly national.
The EU’s legal framework for telemedicine rests on solid principles but remains weak in practice. The Court of Justice has long confirmed that healthcare falls under the free movement of services, and Directive 2011/24/EU established the foundations for patient mobility, supported by mechanisms such as National Contact Points and the eHealth Network. Yet the directive was not designed with digital care in mind and leaves unresolved key issues like quality standards, liability, interoperability, cybersecurity, and AI-assisted decision-making. As a result, much is left to national discretion, creating a fragmented landscape that complicates life for both providers and patients.
EHDS: building the backbone, not the whole body
The European Health Data Space (EHDS) Regulation is a potential game-changer for telemedicine’s plumbing. By standardising formats and enabling secure access to electronic health records (and, progressively, imaging, labs, and discharge reports), it tackles one of the biggest blockers: data that won’t travel.
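What “data that travels” means is easiest to see in miniature. The sketch below is a hypothetical, heavily simplified record modelled loosely on HL7 FHIR – a widely used standard for structured health data – rather than on any official EHDS schema.

    # Illustrative sketch only: a simplified, FHIR-style Observation.
    # The shared structure and code system are what make the record
    # machine-readable across systems; this is not an EHDS artefact.
    import json

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {  # a shared code system tells any receiver what this is
            "coding": [{"system": "http://loinc.org", "code": "718-7",
                        "display": "Hemoglobin [Mass/volume] in Blood"}]
        },
        "valueQuantity": {"value": 13.2, "unit": "g/dL"},
    }

    print(json.dumps(observation, indent=2))

Because both the structure and the codes are shared, a clinic in another Member State can parse and display the result without bespoke, bilateral mappings – exactly the plumbing the EHDS standardises.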
But the EHDS is infrastructure, not a telemedicine code. It doesn’t settle who is responsible when care goes wrong, how AI recommendations fit with clinical judgment, or how remote-only practice should be accredited. One telling sign: a clause that would have actively promoted cross-border telemedicine fell out during negotiations.
Data and ethics: where the rubber meets the road
Telemedicine runs on sensitive data. GDPR provides a common floor, but national overlays (medical secrecy, access rules, secondary-use controls) differ widely. In practice, providers face divergent consent models, storage requirements, and audit expectations.
Layer in the AI Act and the stakes rise. High-risk health AI must meet transparency, traceability, and human-oversight requirements. That is good for trust, but unresolved in cross-border settings are basic questions: Which authority supervises? How is accountability shared between clinician, institution, and vendor when an algorithm errs? How do we preserve clinical judgment without neutering useful automation?
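What traceability and human oversight might look like in practice is easier to grasp with a small sketch. The record below is a hypothetical design – the fields and workflow are my assumptions, not a format prescribed by the AI Act – but it captures the evidence a cross-border dispute would turn on: what the algorithm suggested, who decided, and why.

    # Illustrative sketch only: one way a telemedicine platform might log
    # AI-assisted decisions so that human oversight can be demonstrated
    # afterwards. Field names and workflow are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        model_version: str    # which model produced the suggestion
        ai_suggestion: str    # what the algorithm recommended
        clinician_id: str     # who exercised human oversight
        final_decision: str   # what was actually decided
        override: bool        # did the clinician depart from the AI?
        rationale: str        # why, in the clinician's own words
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    record = DecisionRecord(
        model_version="triage-model-2.1",
        ai_suggestion="refer for urgent in-person assessment",
        clinician_id="dr-0042",
        final_decision="teleconsultation follow-up in 48 hours",
        override=True,
        rationale="Patient history makes urgent referral disproportionate.",
    )
    print(record)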
Ethically, telemedicine can strain informed consent and continuity of care, and—without inclusive design—amplify digital divides for older adults, rural communities, and people with disabilities.
Cross-border telemedicine still faces stubborn obstacles. Liability and jurisdiction remain unclear when adverse outcomes arise, often relying on ad hoc contracts that cannot scale. Professional qualifications are mutually recognised in theory, yet remote-only practice is frequently caught in national grey zones or subject to extra conditions. Reimbursement rules, designed for physical travel, rarely fit virtual care, leaving both patients and providers in doubt. On top of this, uneven digital infrastructure—ranging from electronic health record maturity to coding standards and connectivity—makes even routine cross-border consultations technically complex and unequal.
A realistic way forward (that could start tomorrow)
No grand telemedicine regulation is likely to be adopted overnight. But the EU and Member States can take pragmatic steps that add up. My research suggests we need a three-pronged approach.
First, targeted telemedicine-specific reforms. We can’t keep expecting a directive that merely mentions telemedicine to govern its complex cross-border realities. We need clear liability frameworks for cross-border care, harmonised professional recognition for digital practice, and coherent reimbursement standards specifically designed for virtual care.
Second, rights-based innovation. The solution isn’t choosing between innovation and patient protection – it’s designing systems that deliver both. We need telemedicine frameworks that enhance rather than replace clinical judgment, data governance that enables sharing while protecting privacy, and digital tools that reduce rather than increase health inequalities.
Third, coordinated but targeted implementation. Yes, the European Health Data Space will help with infrastructure, but we cannot mistake better data sharing for comprehensive telemedicine governance. We need telemedicine-specific interpretations of the GDPR and AI Act to avoid regulatory confusion.
Here’s my central argument: The EU stands at a crossroads. We can either continue with our current fragmented approach and watch telemedicine’s transformative potential slip away, or we can seize this moment to build a truly integrated Digital Health Union.
The bottom line
The pandemic taught us that health crises don’t respect borders. Our coordinated European response showed us that cooperation saves lives. Now, as we build back better, we must ensure our legal frameworks are as innovative as the technologies they govern.
Telemedicine is now a central pillar of European healthcare – not a pilot project. The EU has assembled critical pieces (GDPR, AI Act, EHDS), but they amount to an incomplete kit for cross-border care. To unlock the internal market’s advantages without compromising patient rights, the EU needs a tighter weave between infrastructure and rules: clear liability and jurisdictional defaults, workable accreditation for remote practice, interoperable data that clinicians can actually use, and reimbursement that follows patients – not borders.
Get those elements right, and telemedicine can deliver what it promised in the pandemic’s crucible: resilient, inclusive, and truly European care.