Krista M. Lim-Hing, MD
Director, Neurosciences Intensive Care Unit at South Shore University Hospital
Assistant Professor, Zucker School of Medicine at Hofstra/Northwell Health
Michelle Schober, MD
Professor of Pediatrics, Pediatric Critical Care Medicine
University of Utah School of Medicine
Circulation of clinical guidelines and the release of novel therapeutics, diagnostic technologies, and other practice-changing scientific advances all too often precede the development of the ethical frameworks needed for appropriate implementation. Four years after the release of the 2018 Practice Guideline on Disorders of Consciousness, the American Academy of Neurology issued an ethical framework to assist in translating it into practice (Peterson A, Young MJ, Fins JJ. "Ethics and the 2018 Practice Guideline on Disorders of Consciousness: A Framework for Responsible Implementation." Neurology, 2022). Peterson et al. state that their ethics-based guide for the provider would ideally have been published as a companion document to the original guideline. The authors provide principles to help the provider weigh the benefits, harms, costs, and feasibility of a decision when a lack of data and/or time clouds prognostication.
The rapidly evolving nature of clinical practice and scientific discoveries demands a more proactive ethical analysis. As neurocritical care providers, we are challenged to reflect on future ethical conundrums that may arise as new technologies and practices are developed.
The medical community has traditionally responded to clinical ethics reactively rather than proactively. This delayed action occurs on two fronts of clinical practice: technological advancement and guideline implementation. Rapid growth in research and technology compels us to recognize ethical implications before clinical practice changes. Scrutinizing study design early in development allows unintended ethical consequences to be discussed through anticipation of clinical scenarios. The four pillars of bioethics (autonomy, beneficence, non-maleficence, and justice) are pertinent in the developing stages of medical technologies. Two such areas of advancement are xenotransplantation and artificial intelligence (AI).
The dream of xenotransplantation dates back to antiquity, as demonstrated in the myth of Daedalus and Icarus, but only in the last half-century has this fantasy become reality. This year, researchers performed the first pig-to-human heart transplant. One important next step will be defining eligibility for a pig organ transplant. Simply being on a transplant list cannot yet justify such a highly experimental and possibly risky procedure. However, would it be justified if the alternative were death? How would we choose which individuals receive a human versus a pig heart? Xenotransplantation could provide immense benefits for the organ transplant recipient but is fraught with ethical concerns. Is the good of the recipient at odds with the good of society (social justice)? What animal welfare issues arise? Could these genetic technologies lead down a slippery slope toward unethical purposes?
Advances in AI will also potentially transform patient care, but what ethical quandaries might such progress uncover? As we incorporate AI into our clinical practice, concerns are emerging regarding risks to patient privacy, perpetuation of bias, and tradeoffs between competing ethical goals. As previously mentioned, different ethical considerations may arise at the various stages of research and development, such as conceptualization, development, and calibration.
The World Health Organization (WHO) released a document in 2021, "Ethics and governance of artificial intelligence for health: WHO guidance", providing proactive ethical guidance for using AI in healthcare. This document outlines six key principles for the ethical use of artificial intelligence in health: protecting autonomy, promoting human safety, ensuring transparency, fostering accountability, ensuring equity, and promoting tools that are responsive and sustainable. As stated by WHO Director-General Dr. Tedros Adhanom Ghebreyesus, "This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls." The guide demonstrates an anticipatory approach to the ethical implementation of AI. It acknowledges AI's potential to improve healthcare while cautioning against the ethical challenges and risks involving informed consent, patient autonomy, and threats to privacy and confidentiality. How can we police the unethical collection and use of health data? Will AI replace physicians? Through open discussion, we can digest these risks and adopt implementation frameworks that adhere to principles of ethical clinical care. This type of proactive assessment creates an open forum for further discussion in the medical community.
With the evolution of clinical practice, new guidelines and technologies will continue to emerge. We as neurocritical care practitioners must devise or support the creation of frameworks for ethical implementation, or placement of guardrails if appropriate, so that scientific advances do not evolve into reality without adequate preparation. As we read about nascent research and evolving models of care, we must always carefully consider their future ethical implications.
#LeadingInsights