In their interactions with chatbots, consumers often encounter technology failures that evoke negative emotions, such as anger and frustration. To clarify the effects of such encounters, this article addresses how service failures involving artificial intelligence–based chatbots affect customers’ emotions, attributions of responsibility, and coping strategies. In addition to comparing the outcomes of a service failure involving a human agent versus a chatbot (Study 1), the research framework integrates the potential influences of anthropomorphic visual cues and intentionality (Studies 2 and 3). Across three experiments, the study reveals that when interacting with chatbots, customers blame the company more for the negative outcome and experience frustration in particular, compared with when they interact with a human agent. Because the chatbot is perceived as lacking intentions and control over its actions, it is not held responsible; the company therefore bears greater responsibility for the poor service performance. However, the authors suggest that anthropomorphic visual cues might help mitigate negative attributions to the company. The attribution of humanlike characteristics also promotes both problem-focused coping, which helps consumers actively handle the service failure, and emotion-focused coping, which helps restore the emotional balance disrupted by the negative event.