
Artificial intelligence paternalism


In response to Ferrario et al's work entitled 'Ethics of the algorithmic prediction of goal of care preferences: from theory to practice', we would like to point out an area of concern: the risk of artificial intelligence (AI) paternalism in their proposed framework. Accordingly, in this commentary, we underscore the importance of implementing safeguards for AI algorithms before they are deployed in clinical practice.

The goal of documenting a living will and advance directives is to convey personal preferences regarding the acceptance of therapies, including life support, for future use in case one loses decision-making capacity. This is standard practice in the care of incapacitated critically ill patients, as it is considered to extend the individual's autonomy. Notably, most of the documents that intensivists encounter in clinical practice are written in a generic fashion and lack context. This problem usually leads to reliance on family members or friends to act as surrogate decision-makers. Surrogates should aid decision-making by relaying the patient's wishes based on their understanding of the patient's preferences, drawn from prior conversations or shared experiences. Nevertheless, surrogates often lack that knowledge, express their own preferences instead, or choose to prolong life support inappropriately to avoid making difficult decisions. This can lead to goal-discordant care, a dreadful medical error in which incapacitated patients receive treatments that are incompatible with their wishes. An example of goal-discordant care is a patient with a 'do not intubate' advance directive who is nonetheless intubated and then receives a tracheostomy.

We worry that both clinicians and surrogates may be incentivised to shift the burden of difficult decision-making to a machine (AI) if given the opportunity. Notably, the emotional distress that surrogates endure during goals-of-care decision-making is mentioned several times by the authors. We would therefore like to highlight that it is the patient's interests that must come first.

We agree that there is a great need for an upgraded system to address personal preferences in seriously ill patients at risk of becoming incapacitated. The application of AI technology that can adapt to the clinical context to assist these patients' decision-making is captivating. The authors propose a multifaceted sociotechnical framework for the successful implementation of AI-assisted goals-of-care decision-making. Some of the proposed measures, however, introduce new challenges and considerable complexity, and they demand a thorough understanding of the process by patients and surrogates. Because AI systems are at least somewhat autonomous, implementing AI algorithms would introduce an additional agent into these conversations. If the AI-generated response were given priority over the other agents (surrogates and clinicians), the result would be AI paternalism.

Decision-making in critical care practice has shifted away from medical paternalism. Medical paternalism is the practice in which physicians make medical care decisions at their own discretion, without the patient's input; it has traditionally been justified by the ethical principle of beneficence. The alternative to medical paternalism is to prioritise the patient's autonomy. AI paternalism, or machine paternalism, is a newly described term that refers to independent decision-making by AI with no, or minimal, patient participation. AI paternalism has previously been described in the development of health apps and in AI-assisted clinical diagnosis. The stakes, however, are much higher in goals-of-care decision-making for critically ill patients. We must therefore question whether AI algorithms should be allowed to operate without human supervision. At the very least, we strongly recommend that appropriate safeguards be added to the proposed framework before these systems are implemented. To that end, we propose the following safeguards: informed consent, predetermined conflict-resolution pathways, clinical validation of AI algorithms and continued quality control.

Keywords: artificial intelligence paternalism; artificial intelligence; paternalism; goals of care; clinical practice

Journal Title: Journal of Medical Ethics
Year Published: 2023


