In August 2013, the UK Parliament voted to reject possible military action against the Syrian regime to deter the use of chemical weapons. In September 2013, the then US President Barack Obama said: “If we fail to act, the al-Assad regime will see no reason to stop using chemical weapons… What kind of world will we live in if the United States of America sees a dictator brazenly violate international law with poison gas, and we choose to look the other way?” (My emphases.)
Obama, like the UK Parliament, failed to act and chose to look the other way.
Between the onset of the Syrian conflict in 2011 and Obama’s address, the world had indeed seen “a dictator brazenly violate international law with poison gas”, resulting in 1,414 verified fatalities. This figure comes from the Syrian American Medical Society, which, in February 2016, published a report titled A New Normal: Ongoing Chemical Weapons Attacks in Syria.
The report records 161 documented chemical attacks in Syria (a further 133 chemical attacks could not be fully substantiated), resulting in 1,491 deaths and 14,581 injuries. It further notes that of the 161 documented attacks, “77 per cent have occurred after the passage of United Nations Security Council Resolution 2118 in September 2013, which created a framework for the destruction of Syria’s declared chemical weapons stockpiles.”
And what did we, the international community, do? As Dr Mohammed Tennari of Syria’s Idlib governorate observed in A New Normal: “In response to chemical attacks in Syria, the international community sends us more antidotes. This means that the world knows that chemical weapons will be used against us again and again. What we need most,” Dr Tennari continued, “is not antidotes — what we need is protection, and to prevent another family from slowly suffocating together after being gassed in their home.”
Having read this, I opened my copy of Tongues of Conscience: War and the Scientists’ Dilemma (1970) by Robert William Reid. In a chapter titled ‘The Shape of War to Come’, Reid considers chemical weapons, noting: “Unlike the atomic bomb, they have yet to be put to use in war… ”
It’s unlikely that Saddam Hussein and al-Assad read Reid’s words and thought, “now there’s an idea”. But whatever brought them to share the tawdry distinction of being the first leaders since the end of the Second World War to use chemical weapons against civilians, they wouldn’t have been troubled by the possibility that the so-called ‘international community’ might have roused itself to intervene beyond words of censure.
One of the recommendations by the authors of A New Normal is that “… financial support from States must occur alongside an active effort by all States to end chemical attacks and other international humanitarian and human rights law violations and hold perpetrators accountable for violations”. (My emphases.)
So it was a small step, which I welcomed, when last month the US, supported by the UK and France, responded to al-Assad’s deployment of chemical weapons against civilians by launching missile strikes at selected Syrian targets. However, it was a forlorn hope that this ‘active effort’ by three States might attract a groundswell of international support. In my view, the reason for such inertia was that many asked themselves why they should stand up for a point of principle when it was being asserted by a President of questionable moral rigour.
And coming on the 15th anniversary of the invasion of Iraq, am I alone in detecting echoes of the strident assertions of many in the anti-war movement back then? Their prime motivation appeared to be strident anti-Americanism and a loathing for a President who couldn’t pronounce correctly the word “nuclear”. Far from embodying left-leaning values, here were conservatives uniting to preserve a cruel regime — governmental rape squads and a Baath Party inspired by many aspects of Nazism — while upholding a system of international law that was unable to implement basic ethical tenets of civilised behaviour.
But at least there were some who recognised that irrespective of what international law might say, humanitarian intervention is a moral right that can, and should, be asserted. One of these was France’s former health minister Dr Bernard Kouchner, who co-founded Doctors Without Borders/Médecins Sans Frontières. Another was Nobel Peace Laureate José Ramos-Horta, who points out in his piece in the Wall Street Journal (13 May 2004), titled ‘Sometimes a War Saves People’, that he supported Vietnam’s 1978 invasion of Cambodia to oust Pol Pot, and Tanzania’s invasion of Uganda in 1979 to eject Idi Amin, both without United Nations or international approval; applauded the French who deposed “Emperor” Jean Bokassa in the then Central African Empire; endorsed the NATO intervention in Kosovo without a UN mandate; and rejoiced “once more in 2001 after the US-led overthrow of the Taliban liberated Afghanistan… ”
It is time to get used to the fact that Trump is in the White House; to shake free of the cultural relativism that prizes national sovereignty over human rights; and to pay attention to the Dr Mohammed Tennaris of this world.
In October 2017, 20 doctors at Warsaw’s Paediatric Hospital went on hunger strike, demanding greater health expenditure from the government; and in 2015 two doctors staged a 24-hour hunger strike at London’s Parliament Square in protest at the treatment of NHS whistleblowers.
In the Polish strike, doctors were monitored by colleagues, and those whose health raised concerns were replaced by healthy volunteers; the London protest was self-limiting. Thus, hunger strikes undertaken to press for political change cause, at worst, discomfort to their participants. But hunger strikes undertaken by the incarcerated can sometimes be lethal, posing legal and ethical challenges for medical personnel.
Two recent studies by Gulati et al of the University of Limerick and University College Cork consider these issues in the Irish Journal of Psychological Medicine, with one addressing ‘Hunger strikes in prison: A legal perspective for psychiatrists’. They highlight guidelines from the World Medical Association, noting that autonomy is favoured over beneficence; that the neutrality of physicians is to be upheld; and that “force-feeding of an individual with capacity who refuses the same is not acceptable”.
Which makes perfect sense. The only problem is that lodged in our skulls, psychiatrists included, is an often impenetrable barrier separating the ice-block of legal reasoning from a cauldron of disparate beliefs, political opinions and views. My inference is that this separation is acknowledged in Gulati et al’s second study, ‘Hunger strikes in prisons: A narrative systematic review of ethical considerations from a physician’s perspective’, where they state: “Whilst there seems to be an overall consensus favouring autonomy over beneficence, tensions along this fine balance are magnified in jurisdictions where legislation leads to a dual loyalty conflict for the physician.”
The extent to which “dual loyalty conflicts” can arise can be gauged from the titles of two chapters in the British Medical Association’s Medicine Betrayed: The participation of doctors in human rights abuses (1992). With chapter four entitled ‘Medical involvement in torture’ and chapter five entitled ‘Abuse of psychiatry for political purposes’, here are reminders that one’s professional status does not confer immunity from uncivilised behaviour.
A more recent reminder is supplied by Bringedal et al in the Journal of Medical Ethics (2018, 44: 239‒43) in their paper ‘Between professional values, social regulations and patient preferences: Medical doctors’ perceptions of ethical dilemmas’. This Norwegian survey of over 1,200 doctors asked them to judge whether each of a range of supplied scenarios constituted a medical dilemma. One was the ethics of force-feeding a person on hunger strike, which 76 per cent judged to qualify as a medical dilemma. To my surprise, however, only 42 per cent said they would not force-feed a hunger striker, and 39 per cent admitted they did not know what they would do. In the context of a supposedly enlightened Scandinavian model of social attitudes, these data are concerning.
Equally concerning is an apparent unquestioning attitude to the widely accepted concept of autonomy, with Garasic and Foster, writing in Medicine and Law (2012, 31: 589-98), reminding us of “a huge and dangerous political elephant in the room”. In ‘When Autonomy Kills: The case of Sami Mbarka Ben Garci’, we read that: “Autonomy rights (and therefore the right to die) are often accorded to hunger strikers who come from classes perceived to be undesirable, but withheld (or trumped by other considerations) in the case of strikers from more desirable classes.”
When the Muslim Tunisian Sami Mbarka Ben Garci was imprisoned in Pavia, Italy, charged with rape, he went on hunger strike, asserting his innocence. Assessed as competent, his autonomy was respected and he died on 5 September 2009.
Garasic and Foster note the silence of the Roman Catholic Church in this case: “It is not usually slow to comment about cases where the sanctity of life is at stake. Was this because, as well as not being seen as part of the body politic, [Ben Garci] was not part of the Catholic body ecclesiastical?”
This approach to autonomy, illustrated by the Ben Garci case, prompts the thought that treatment decisions are seldom made in a political vacuum, in which case the concept of autonomy in the context of hunger strikes may not always be a clear-cut one.
If it is acceptable not to force-feed a mentally competent hunger striker who wishes to die, is it acceptable to force medication on a mentally incompetent individual who doesn’t wish to die, to make that person sane so that they can be executed? Writing in Medicine, Health Care and Philosophy (2013, 16: 795-806), Garasic considers ‘The Singleton case: Enforcing medical treatment to put a person to death’. In October 2003 the United States Supreme Court declined to intervene, leaving in place a ruling that Arkansas prison officials could force schizophrenic convicted murderer Charles Singleton to take drugs that would render him sufficiently sane to receive a lethal injection. He was executed on 6 January 2004, having undergone a series of therapeutic interventions aimed at achieving a decidedly non-therapeutic outcome: Death.
The conflicts between patient autonomy, medical ethics and political jurisdictions ought to be capable of resolution. Who would have thought that ‘ought’ could tax us humans so much?
“Whassa pointa that?” I moaned to Smicker during our English Lit class. “The ‘pointa that’, Winter,” thundered my teacher — who could hear a participle drop at 40 yards — “is that I say there is a point to learning poetry by heart!” And it was one that only took me decades to grasp: That a readily-accessible verse or poetic fragment can be a life-enhancing experience.
But life enhancement of a different nature was sought urgently many years ago when my wife somehow survived an initial assault from a grade 5 sub-arachnoid haemorrhage. She was taken from intensive care to a neuro ward prior to surgery the next day. Without an operation (which proved successful) to repair the aneurysm, she would die, so there was no questioning its necessity. What I did question — albeit silently — was the procedure for obtaining her consent. And that’s when, inexplicably, schoolboy memories of Keats’s On First Looking into Chapman’s Homer (1816) arose to provide a split perspective.
I sat at her bedside; she lay semi-conscious, the effects of the sedative slowly ebbing away. Then a junior doctor — let’s call him ‘Chapman’ — arrived, together with a demeanour whose unspoken declaration was that this ward was ‘ruled as his demesne’. I wasn’t sure what was happening “till I heard Chapman speak out loud and bold”. Focused as a spotlight, he was armed both with paperwork and a determination to ensure that when he left, all box-shaped elements of this exercise in obtaining consent for tomorrow’s operation would be well and truly ticked.
I felt less “like some watcher of the skies when a new planet swims into his ken” and more that we had been noisily interrupted. My wife, stirred by the rumpus, succeeded in opening her eyes and tried to focus on the document now brandished in front of her — a futile attempt, as her contact lenses had been removed days before. But now a new challenge arose: She — we — had to assimilate and consider some of the many risks the operation courted, and which Chapman was reciting like a cantor on Benzedrine; so, if she could just sign… here? On his way out, Chapman pirouetted and asked, “Any questions?” We “look’d at each other with a wild surmise” and said “Duhhh”, much as Homer — the Simpson, not the Greek — might have uttered when faced with choosing between a crate of beer and a box of donuts. Chapman interpreted my wife’s head moving slowly in bewilderment as a ‘no, you’ve made it all perfectly clear’, said thanks, and left.
With time comes perspective, however, together with a more charitable recollection. In retrospect, it may well have been that a stressed-out Chapman had been up all night, all by himself, tending to patients and that he just wanted to complete this administrative chore before going home to fall into a deep sleep. So, I was intrigued to come across ‘All by myself: Interns’ reports of their experiences taking consent in Irish hospitals’, published recently in the Irish Journal of Medical Science. Heaney et al’s aim was to evaluate a 12-point questionnaire returned by each of 60 interns at three Irish teaching hospitals to determine their roles in the surgical consent process and to identify their concerns.
Noting that 44 interns (73.3 per cent) had never been supervised by a senior doctor when obtaining consent, and that of the 58 interns who had obtained consent, “six interns (10.3 per cent) reported knowledge of ‘all’ the steps of the procedure [and]… only five interns (8.6 per cent) reported that they were aware of all the risks of the procedures”, the authors concluded that most of the “interns reported that they had taken consent for a procedure without full knowledge of the procedure and its complications”.
Citing the Medical Council’s view that “an intern is deemed to be an unsuitable delegate”, the authors speculate “whether [interns] should be involved in the consent process at all”. It seems that more-senior doctors need to be involved in this sometimes tricky undertaking.
I would make a further suggestion, having looked at Consent to Medical Treatment in Ireland (2015) by the Medical Protection Society, which states (page three) that there are three components to valid consent: Capacity, information and voluntariness. Bearing these three in mind, and as a non-medic with a narrow but sharp experience of the consent process on a patient sample of one, I suggest that consenting would be a more meaningful process if the patient were fully conscious at the time consent is sought.
But there are other aspects of the process that merit further consideration. For example, Donovan-Kicken et al considered “sources of patient uncertainty when reviewing medical disclosure and consent documentation” in Patient Education and Counseling (2013, 90: 254-260), finding four distinct areas of uncertainty: Language; risks and hazards; the nature of the procedure; and the composition and format of documentation. The most important of these is language, and interns trying to impart lucid explanations in what might not be their mother tongue — or their patient’s — may struggle.
If only the consenting process, like poetry, could be learned by heart.
The Irish Hospice Foundation Strategic Plan 2016-2019 reminds us that 57 per cent of people in Ireland say there is insufficient discussion about death and dying. I agree… or rather, that’s what I would have said, had I not taken another look at the front cover of the Strategic Plan, which includes a photograph of two women standing in front of a poster with the caption: ‘Think Ahead: Speak for Yourself’. This prompted the thought: What about those who are neither able to think ahead nor speak for themselves? And I remembered the case of Charlie Gard.
Born in August 2016, Charlie was admitted to London’s Great Ormond Street Hospital (GOSH), where he was diagnosed with the rare infantile-onset encephalomyopathic mitochondrial DNA depletion syndrome. Charlie was deaf, paralysed, needed ventilatory support and had a poor prognosis. In January 2017, Charlie’s parents identified an experimental nucleoside treatment in the US and launched a fund-raising effort.
But when the infant developed seizures, GOSH doctors objected to what they considered futile treatment; they applied to the High Court for permission to withdraw life support. This was granted and subsequently upheld, with a ruling in June 2017 from the European Court of Human Rights exhausting all legal options. The GOSH team made plans to withdraw medical treatment.
Which brings me to the point about whether there is enough discussion about death and dying. In the case of Charlie Gard, there seemed to be discussion aplenty, if only to the extent that here was an opportunity for the public at large to deploy social media to offer anyone who cared to read or listen the benefit of their insights. Unfortunately, these insights were gleaned largely from a clamouring press whose support for Charlie receiving treatment in the US, I suggest, was based more on emotional rabble-rousing than a dispassionate consideration of the issues involved. Some of the roused rabble even threatened medical staff at GOSH.
Apparently unable to resist throwing themselves into the media maelstrom too were the Pope and the President of the US. As far as Trump is concerned, I’ll opt for Wittgenstein’s ‘Whereof one cannot speak, thereof one must be silent’ and leave it at that. The Pope — as Patrick Greenford reported in The Guardian (3 July 2017) — prayed that Charlie’s parents’ “wish to accompany and treat their child until the end isn’t neglected”. Yet following the infant’s death on 28 July 2017, Elise Harris, writing in Catholic Online (17 November 2017, http://www.catholic.org/news/hf/faith/story.php?id=76383) quoted the Pope as stressing that medical options “must avoid the temptation either to euthanise a patient or to pursue disproportionate treatments which do not serve the integral good of the person” (my emphasis).
So would it have been better to allow Charlie to travel to the US for treatment as a last resort? I don’t know. Even when Wilkinson and Savulescu considered ‘Hard lessons: Learning from the Charlie Gard case’ in the Journal of Medical Ethics (published online 2 August 2017), they conceded that they disagreed with each other about what the right course of action ought to have been. They did agree, however, on the importance of considering whether we should have lower thresholds for undertaking experimental treatments when no other option exists; how limited resources should be allocated; and whether resolution of treatment disputes can be achieved without legal recourse.
Ethical decisions are based on facts and values, and while few would dispute the facts of a case, values pose a trickier challenge. For example, when Sprung et al, writing in Intensive Care Medicine (2007, 33: 1,732-39), considered ‘The importance of religious affiliation and culture on end-of-life decisions in European intensive care units’ among patients, doctors and families, they concluded: “Significant differences associated with religious affiliation and culture were observed for the type of end-of-life decision, the times to therapy limitation and death, and discussion of decisions with patient families.”
The notion that ethics is intelligible only in the context of religion is risible; it recruits false reasoning to the interpretation of divine commands. But even a fully-rounded set of ethical precepts cannot be usefully and uniformly applied to the same situation by individuals motivated by different values. Perhaps, as Wilkinson et al argue in Bioethics (2016, 30: 109-118), we should be — as the title of their paper asserts — “in favour of medical dissensus: Why we should agree to disagree about end-of-life decisions”.
Agreeing to disagree doesn’t alleviate the stress of wrestling with dilemmas such as those posed by the Charlie Gard case, but perhaps we can feel our way by disputation and discussion towards something approaching resolution. But prolonged reasoning is ridiculed today. Our screen-based zeitgeist dictates that mass values determine what passes for discussion, with 280 characters deemed sufficient for a point to be made. Our coarsened, consumption-obsessed society is neither an educated nor a moral one, despite having facts — literally — at our fingertips. Perhaps Eliot anticipated this when he asked in The Rock (1934): “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”