Abstract This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law, aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when, as in medicine and many other contexts, the use of personal data has high social value.