Volunteer moderators have the power to shape society through their influence on online discourse. However, the growing scale of online interactions presents significant hurdles to meaningful moderation, and only limited tools are available to assist volunteers in their work. Our work explores the potential of AI-driven, automated moderation tools to assist volunteer moderators on social media. A key question is the degree to which such tools must become personalizable and context-sensitive: not merely deleting unsavory content and banning trolls, but adapting to the millions of online communities on social media mega-platforms that rely on volunteer moderation. In this study, we conduct semi-structured interviews with 26 Facebook Group moderators to better understand their moderation tasks and associated challenges. Through qualitative analysis of the interview data, we identify the most pressing themes in the challenges they face daily. Drawing on these insights, we conceptualize three tools with automated features that address their most challenging tasks and problems. We then evaluate the tools for usability and acceptance in a survey, grounded in the technology acceptance literature, with 22 of the same moderators. Qualitative and descriptive analyses of the survey data show that context-sensitive, agency-maintaining tools, together with hands-on trial experience, are key to building trust in the validity of moderation technology and to its broad adoption by volunteer moderators.