
Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives

Background: There is growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race).

Objectives: Our objectives are to canvass the range of strategies stakeholders endorse for mitigating algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias.

Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers.

Results: Findings reveal considerable divergence of views on three key issues. First, views on whether bias is a problem in healthcare AI varied: most participants agreed that bias is a problem (the bias-critical view), a small number believed the opposite (the bias-denial view), and some argued that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about the strategies needed to mitigate bias and about who is responsible for those strategies. Finally, views diverged on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias.

Conclusion/significance: Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias, and strategies to modify dominant approaches in AI development, such as the use of participatory methods and increased diversity and inclusion in research teams and in research participant recruitment and selection.

Keywords: bias; artificial intelligence; algorithmic bias; study; healthcare

Journal Title: Journal of Medical Ethics
Year Published: 2023
