Elizabeth Hildt’s (2023) notion of human-like artificial consciousness (AC) is vulnerable to several objections.

First, she ties it to traits such as subjectivity and to capacities for rationality, intelligence, self-awareness, suffering, and sensation. But her notion of a human-like moral status displays no such traits: she simply asserts that such a status would follow from a human-like AC. These capacities are not in themselves moral qualities, and Hildt cannot show how they would necessarily confer moral status. It is the quality of being a human, not the quality of resembling one, that persuades political communities to confer moral status on individuals.

Second, Hildt grants AC traits that would seem to allow it to construct its own morality and to confer moral status on itself. Yet nothing she says indicates that humans would then be obliged to recognize that status, even if they acknowledged the AC’s rationality, intelligence, self-awareness, capacity for suffering, and sensation. In this way, among others, Hildt cannot eliminate the fundamental role of human agency in the possible moral status of AC.

Third, moral status is a social construct, not a feature of the natural universe. To be sure, humans could always reconceptualize the phenomenon of moral agency to incorporate a completely new understanding of technology (Verbeek 2014). Or they could decide to confer moral status on AC, perhaps by analogy to a state conferring legal personhood on corporations (Gregg 2021). That would render the moral status of AC a metaphor for human moral status. Like human persons, a corporation can bear responsibility, even while freeing its human members from corporate responsibilities. But only humans have the moral capacity to give themselves laws, primarily through legislatures, and even to author their own human rights (Gregg 2012). They can give corporations legal rights, yet corporations cannot give themselves rights.
They cannot legislate or interpret legislation in legally authoritative ways. Whatever obligations corporations may have toward humans are not self-imposed but imposed by humans. Corporate personhood is instrumental, oriented toward the most efficient means to a given end. Whereas instrumental behavior has no capacity to evaluate the moral status of either the chosen means or the given end, normative behavior is always value-committed: it evaluates the normative acceptability of any given goal. Even as a legal person, a corporation does not pursue the value-rationality that can orient moral agency. A future AC would be no different. And as long as artifactual moral agency cannot be analogized to human moral autonomy (Johnson and Noorman 2013), it makes more sense to attribute moral responsibility to the humans who construct an AC with agency and consciousness than to the AC itself.

Fourth, AC consciousness is unlikely to resemble human consciousness. Human consciousness includes values, interests, and motivations (such as an interest in not being harmed by AC) that AC need not share. More likely, humans would construct AC at most as a “moral patient,” out of a desire, say, to prevent AC’s possible suffering. Even then, humans can account for the moral significance of AC without having to attribute moral agency to it (Kroes 2012). To be sure, if AC ever became like animals in the sense of being capable of experiencing pain or suffering, at that point communities might invest AC with legal