I’m underwhelmed by AI’s ability to demonstrate narrative-based medical competence
Is anyone else out there getting nervous about artificial intelligence (AI) and professionalism? Maybe it’s just me, but there seems to be a growing perception that, given time, AI will manage the soft skills of medicine as well as any doctor.
Let me be clear: I have no problem acknowledging the benefits of AI in medicine. AI has already had a big impact in radiology, increasing both the speed and the accuracy of reading X-rays and scans. For example, the Mater Hospital in Dublin uses AI to review scans in real time and alert radiologists to potentially serious findings.
And quite a few Irish doctors are using AI systems to record and transcribe patient consultations. GPs can use AI to generate referral letters, while consultants can automatically generate GP letters from an AI summary of a consultation. Among other advances, AI has assisted robotic surgery and sped up patient administration and workflow. No one could balk at any of these AI-driven medical advances.
However, when it comes to the soft skills side of medicine, not everyone shares my AI pessimism. In her recently published book, Dr Bot: Why Doctors Can Fail Us – and How AI Could Save Lives (Yale University Press), Charlotte Blease cites evidence that patients reveal more about their medical problems when interacting with a chatbot than they do in a live personal consultation with a doctor. One review concluded that computer-based interviews generated between 35 and 56 per cent more detail than face-to-face interactions. Blease is also bullish about AI’s ability to demonstrate empathy. One University of Washington study found that AI-powered guidance significantly strengthened human bedside manner.
Using a mental health support site called TalkLife, researchers randomly allocated participants to receive responses written either wholly by humans or by humans collaborating with an AI chatbot. The study found that users of TalkLife more often favoured ‘human + AI’ responses over ‘human only’ responses. When the researchers assessed conversational empathy, ‘human + AI’ responses scored 20 per cent higher than ‘human only’ answers. And 70 per cent of the mental health professionals who participated felt that AI boosted their ability to be empathetic.
Blease’s own research found that ChatGPT-written clinical notes were a richer source of empathy cues than those authored by a practising GP. Her team prompted ChatGPT-4 to rewrite the GP notes in “an understanding and empathetic manner”. They then compared the results with the original doctor notes, looking for evidence of affective empathy (feeling what others feel), cognitive empathy (recognising another person’s emotions), and compassion (caring about someone’s wellbeing). “The findings were stark. In the original GP notes we could find no signatures of empathy whatsoever. In contrast, the ChatGPT-written notes were rich in empathy,” she says.
It seems the bot was actually better at conveying cognitive empathy and compassion, though not affective empathy. However, a recent meeting of the EU Narrative Medicine Society questioned the value of OpenAI’s ChatGPT.
Olivia Booth from the University of North Carolina carried out research to see how four different AI systems responded to illness narratives involving Alzheimer’s disease, cancer, and depression.
She found ChatGPT focused on editing and publishing rather than providing empathetic support. Gemini, the Google-backed AI system, fared little better. Copilot, a Microsoft product, was better focused on understanding the patient’s perspective, but it did not deepen the engagement. The response of DeepSeek (a China-based open-source system) to authentic patient illness narratives was the best of the four in Booth’s view. “It engaged deeply with the stories, asking thoughtful follow-up questions that helped uncover emotional and personal dimensions, and providing meaningful support for those struggling with illness,” was her assessment. She rated it best for active listening and assessing resilience in real-person narratives.
Although ChatGPT is the most used AI system in the world, it looks the least promising when it comes to replicating soft medical skills. And its much-hyped latest iteration, GPT-5, is “not going to get much better”, according to a candid comment by OpenAI’s chief executive, Sam Altman.
I have to say I’m underwhelmed by AI’s ability to demonstrate narrative-based medical competence. When it comes to soft skills in medicine, such as breaking bad news, I cannot see a Dr Bot being able to replicate a human interaction, at least in the short- to medium-term.
I’m afraid there is still a big question mark over AI’s ability to function as a sentient being.