
CLEFT: Contextualised Unified Learning of User Engagement in Video Lectures With Feedback


Predicting contextualised engagement in videos is a long-standing problem that has commonly been attempted by exploiting the number of views or likes using different computational methods. The past decade has seen a boom in online learning resources, and during the pandemic there was an exponential rise in online teaching videos with little quality control. As a result, we face two key challenges: first, how to decide which lecture videos are engaging enough to hold the listener's attention and increase productivity, and second, how to automatically provide constructive feedback that content creators can use to improve their material. In parallel, there has been a steep rise in computational methods for predicting a user engagement score. In this paper, we propose a new unified model, CLEFT ("Contextualised unified Learning of user Engagement in video lectures with Feedback"), which learns from features extracted from freely available public online teaching videos and provides feedback on the video along with a user engagement score. Given the complexity of the task, our unified framework employs different pre-trained models working together as an ensemble of classifiers. The model exploits a range of multi-modal features to capture the complexity of language, context-agnostic information, the textual emotion of the delivered content, animation, the speaker's pitch, and speech emotions. Our results support the hypothesis that the proposed model can detect engagement reliably, and that the feedback component gives useful insights that help content creators further improve their material.
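The abstract describes an ensemble of pre-trained, modality-specific models whose outputs are fused into a single engagement score plus feedback for the creator. As a rough illustration of that idea only, the Python sketch below fuses hypothetical per-modality scores with a weighted average and derives simple rule-based feedback; the feature names, weights, thresholds, and rules are assumptions for illustration and are not the CLEFT implementation described in the paper.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class LectureFeatures:
    """Per-modality scores in [0, 1], assumed to come from pre-trained models."""
    language_complexity: float   # e.g. from a readability or language model
    textual_emotion: float       # e.g. from a text emotion classifier
    speech_emotion: float        # e.g. from an audio emotion classifier
    pitch_variation: float       # normalised variability of the speaker's pitch
    animation_score: float       # visual/animation activity in the lecture video


def predict_engagement(f: LectureFeatures, weights: Dict[str, float]) -> float:
    """Weighted ensemble of modality scores -> engagement score in [0, 1]."""
    contributions = {
        # Assumption for illustration: simpler language reads as more engaging.
        "language_complexity": 1.0 - f.language_complexity,
        "textual_emotion": f.textual_emotion,
        "speech_emotion": f.speech_emotion,
        "pitch_variation": f.pitch_variation,
        "animation_score": f.animation_score,
    }
    total_weight = sum(weights.values())
    return sum(weights[k] * v for k, v in contributions.items()) / total_weight


def feedback(f: LectureFeatures, threshold: float = 0.4) -> List[str]:
    """Rule-based feedback on the weakest modalities (illustrative rules only)."""
    tips = []
    if f.pitch_variation < threshold:
        tips.append("Vary speaking pitch to keep listeners attentive.")
    if f.textual_emotion < threshold and f.speech_emotion < threshold:
        tips.append("Delivery and slide text both read as flat; add emphasis.")
    if f.animation_score < threshold:
        tips.append("Consider adding visual aids or animations.")
    return tips


if __name__ == "__main__":
    features = LectureFeatures(0.7, 0.3, 0.5, 0.2, 0.4)
    w = {k: 1.0 for k in ("language_complexity", "textual_emotion",
                          "speech_emotion", "pitch_variation", "animation_score")}
    print(f"Engagement score: {predict_engagement(features, w):.2f}")
    for tip in feedback(features):
        print("-", tip)

In practice the fusion weights would be learned rather than fixed, and each placeholder score would be produced by the corresponding pre-trained model; the sketch only shows how modality outputs could be combined into a score with accompanying feedback.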

Keywords: unified learning; learning user; user engagement; video; content; contextualised unified

Journal Title: IEEE Access
Year Published: 2023


