With the future availability of highly automated vehicles (AVs), vulnerable road users (VRUs) will encounter vehicles without human operators. To compensate for the lack of eye contact, communication via external human-machine interfaces (eHMIs) is planned. Whether such interfaces are adequate for people with intellectual disabilities (IDs), however, is still unknown. This work compares eHMI concepts by their perceived user experience (UX) for people with and without ID to evaluate the inclusiveness of current eHMI concepts. We analyzed related work and derived two representative concepts, one visual and one auditory eHMI. Subsequently, we conducted a survey with N=120 participants (64 with ID, 56 without), comparing the perceived UX of the selected eHMI concepts in visual, auditory, and combined modalities, along with a baseline without an eHMI, using videos of simulations. Participants assessed the concepts using the modified User Experience Questionnaire - Short (UEQ-S). We found that auditory eHMIs performed worse than visual or multi-modal ones, and that multi-modal concepts performed worse for people with ID in terms of pragmatic quality and crossing decisions. Both industry and academia can build on these insights to make AVs more inclusive.