detailed data gathering, the number of participants is necessarily very limited, many orders of magnitude fewer than what would be considered necessary to train today's state-of-the-art recommendation algorithms. Instead, the authors engaged in a sensitive and detailed analysis of user requirements and environmental characteristics to provide the best representation of the needs of their user base.

The evaluation, necessarily conducted offline at this preliminary stage, reflects another truism of modern recommender systems research: the need to use multiple evaluation metrics to capture a multidimensional view of system performance. In this case, we see that the individualized treatment of features proposed by the authors leads to improved results for the ASD subjects, as anticipated. Thus, the algorithm is a good candidate for further development and eventual deployment to these users. Although the results for the (presumed) neurotypical group are more mixed, they are not significantly worse than the baseline.

In the end, the authors demonstrate the value of their synthesis of AI and user modeling techniques in tackling a challenging and practical problem for the benefit of a disadvantaged and understudied group. (The authors note that most HCI research in the autism area focuses on children.) This effort is one step toward a larger and ongoing goal of creating an app providing geographic information and support to ASD users. While machine learning fairness research often concentrates on ensuring fair outcomes for a system's user base considered as a whole, this work is a reminder that real inclusivity and equity may require designs tailored to the needs of specific groups.