AI: Making it usable, useful, and safe for patients and clinicians

By Prof Gozie Offiah, Chair, Medical Protection Society Foundation - 19th May 2025


A White Paper supported by the Medical Protection Society Foundation has made recommendations on the safe and effective use of artificial intelligence in healthcare, writes Prof Gozie Offiah

Earlier this year a citizens’ jury set out 25 recommendations for policymakers on the safe, ethical and inclusive use of artificial intelligence (AI) in Ireland’s healthcare system, including a call for a national roadmap to guide the integration of AI in healthcare over the next five years.

Healthcare is one of the biggest areas of AI investment globally, and AI is playing an increasingly important role in many nations’ public policy, including health policy. Ireland is no exception: in the Programme for Government, the Government committed to developing a strategy for AI in healthcare. With the potential for AI to be deployed in multiple ways – from managing waiting lists and analysing x-rays right through to robotic surgery – we are at a turning point, and it is an exciting time to be part of the medical profession.

With opportunity comes some risk, however, and this was crystallised in a White Paper published in March – a collaboration between the Medical Protection Society (MPS) Foundation, the Centre for Assuring Autonomy at the University of York and the Improvement Academy hosted at the Bradford Institute for Health Research.

The paper called on governments, AI developers, and regulators to ensure that AI tools are integrated into healthcare delivery in a way that is usable, useful, and safe for both patients and the clinicians using them. It said the greatest threat to AI uptake in healthcare is the ‘off switch’: frontline clinicians will simply turn the technology off if they see it as burdensome or unfit for purpose, or are wary of how it may affect their decision-making, their patients, and their licences.

Put simply, if AI works well for clinicians, they are more likely to embrace and interact with it, and this will play a significant role in unlocking the potential benefits to patients.

The White Paper builds on results from the Shared Care AI Role Evaluation research project, which ran in partnership with the Centre for Assuring Autonomy. The Medical Protection Society established the MPS Foundation to support cross-disciplinary research of exactly this kind, and the project was funded as part of the Foundation’s first annual grant programme.

The project team, which brought together researchers with expertise in medicine, AI, human-computer interaction, law, ethics, and safety science, evaluated different ways in which clinicians could use AI technology – ranging from tools that simply provide information, through those that liaise directly with patients outside the consultation room, to those that proffer recommendations to the clinician.

The project elicited a rich set of findings, which are outlined in the White Paper, highlighting the need for a thoughtful approach to integrating AI decision-support tools into real-world clinical settings – ensuring they genuinely support the clinicians using them while preserving the important human touch in patient care.

I believe we could usefully reflect on these findings here in Ireland as AI technologies and their use in our healthcare system rapidly evolve.

Firstly, the paper recommends that for AI tools to work for users they need to be designed with users. In healthcare contexts, which are safety-critical and fast-paced, engaging clinicians in the design of all aspects of an AI tool – from the interface to the details of its implementation – can help to ensure that these technologies deliver more benefits than burdens.

A participatory approach involving different domain experts, clinicians, and patients could ultimately help to ensure that AI decision-support tools are usable, useful, and safe.

Involving clinicians in the design and development of decision-support AI tools can also help in discovering the ‘sweet spot’ between the provision of too much and too little information. Too much information, and the time it takes to review it, detracts from building a rapport with a patient. Too little and clinicians may not trust the technology.

Achieving the right balance requires more than just designing the interface and defining the tool’s functionality. AI companies and developers need to understand the broader context in which the AI tool will be used, including its impact on workflows and patient-clinician relationships. As such, engaging with and collaborating with clinicians during design and development is crucial.


Secondly, AI tools should not be considered akin to senior colleagues in clinician-machine teams. Clinicians should not always be expected to agree with or defer to an AI output, whether that output is a direct recommendation, a classification, or an analysis of the data. It should be made explicit in new healthcare AI policy guidance and in guidance from healthcare organisations how clinicians should approach conflicts of opinion with the AI. It should also be made clear that, in cases of disagreement, a clinician should not be expected to defer to an AI output.

Clinicians should regard AI as an adjunct, a tool. They should not think of it as a replacement for – or improvement on – either their own clinical judgement or the judgement of a trusted human colleague. Clinicians should feel empowered to disagree with AI recommendations, particularly when the recommendation is suboptimal and does not align with their own clinical judgement.

The White Paper condenses its recommendations into some practical advice for clinicians:

1. Clinicians should ask for training on the AI tools they are expected to use. This will help them to navigate their AI tool use more skilfully and know when confidence in an AI’s outputs would be justified, supporting their autonomy. This training should cover the AI tool’s scope, limitations, and decision thresholds, as well as how the model was trained and how it reaches its outputs.

2. Clinicians should only use AI tools within areas of their existing expertise, never outside it. If there are specific cases where a clinician’s knowledge is limited, they should seek the advice of a human colleague who understands the area well and can oversee the AI tool, rather than rely on the AI tool to fill the knowledge gap.

3. Clinicians should feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient. They should resist any temptation to defer to an AI’s output to avoid or reduce the likelihood of being held responsible for negative outcomes.

4. Clinicians should regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than the most important input into the decision-making process. They should be aware that AI tools can be fallible, and those which perform well for an ‘average’ patient may not perform well for the individual in front of them.

5. Clinicians should feel empowered to trust their instincts and judgement about appropriate disclosure of the use of an AI tool, as part of a holistic, shared decision-making process with individual patients. However, they should also be aware that in some critical situations, patients should be made aware of the use of an AI tool. Clinicians should ask their healthcare organisations for explicit guidance on this issue.

6. Clinicians should engage with healthcare AI developers, when asked and where possible, to ensure that AI tools are user-focused and fit for purpose in their intended contexts.

Looking to the future, generating greater confidence in AI among clinicians is vital if the potential benefits of AI are to be unlocked for patients. The White Paper addresses this issue boldly and is a timely contribution to the wider AI debate. 
