First guidance on safe use of large language models in oncology practice

By Priscilla Lynch - 24th Nov 2025


The ESMO Congress 2025 saw the launch of the ESMO Guidance on the Use of Large Language Models in Clinical Practice (ELCAP), the first structured set of recommendations to bring artificial intelligence (AI) language models into oncology safely and effectively. The publication of ELCAP in ESMO’s peer-reviewed journal Annals of Oncology coincided with a session on ChatGPT and cancer care at the ESMO Congress, underscoring the growing role of AI in oncology.

“ESMO’s priority is to ensure that innovation translates into measurable benefit for patients and workable solutions for clinicians. With ELCAP, ESMO provides a pragmatic, oncology‑specific framework that embraces AI, while upholding clinical responsibility, transparency, and robust data protection,” said Prof Fabrice André, ESMO President.

As the use of large language models (LLMs) accelerates across oncology, ELCAP recognises that opportunities and risks vary depending on the user – whether patients, clinicians, or institutions – and therefore anchors its recommendations in a three‑type structure, translating high‑level principles into 23 consensus statements for day‑to‑day practice.

The first category (Type 1) addresses patient‑facing applications such as chatbots for education and symptom support, which should complement clinical care and operate within supervised pathways with explicit escalation and robust data protection.

The second category (Type 2) covers healthcare professional‑facing tools such as decision support, documentation and translation, which require formal validation, transparent limitations, and explicit human accountability for clinical decisions.

The third category (Type 3) concerns background institutional systems integrated with electronic health records for tasks like data extraction, automated summaries, and clinical‑trial matching; these systems require pre‑deployment testing, continuous monitoring for bias and performance change, institutional governance, and re‑validation when processes or data sources change.

Clinicians should also be aware when such systems are operating within their environment, as their impact depends on interoperability and privacy‑by‑design measures.

ELCAP focuses on assistive LLMs that operate under human oversight, supporting clinicians by providing information or drafting content rather than taking independent actions. “These systems are designed to enhance – and not replace – clinical workflows and decision-making,” added Jakob N Kather, Deputy Chair of the ESMO Real World Data and Digital Health Task Force and co-author of the study. “At the same time, the guidance acknowledges the rapid emergence of autonomous, or ‘agentic’, AI models capable of initiating actions without direct prompts, which raises distinct safety, regulatory, and ethical challenges and will require dedicated future guidance.”

The ESMO President emphasised that shared standards are as critical as algorithms to ensure trust in AI-driven cancer care.
