Cyberbullying is a disturbing and troubling form of online misconduct. It appears in various forms and is usually textual on most social networks. Intelligent systems are necessary for the automated detection of these incidents. Some recent experiments have tackled this issue with traditional machine learning models, most of which have been applied to one social network at a time. The latest research has seen models based on deep learning algorithms make an impact on cyberbullying detection. Some of these detection mechanisms identify incidents efficiently, while others suffer the limitations of standard identification approaches. This paper performs an empirical analysis to determine the effectiveness and performance of deep learning algorithms in detecting insults in social commentary. Four deep learning models were evaluated: Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Recurrent Neural Network (RNN). Data pre-processing steps included text cleaning, tokenization, stemming, lemmatization, and removal of stop words. After pre-processing, the clean textual data were passed to the deep learning algorithms for prediction. The results show that the BLSTM model achieved higher accuracy and F1-measure scores than RNN, LSTM, and GRU. Our in-depth results show which deep learning models are most effective against cyberbullying when directly compared with others, and pave the way for future hybrid technologies that may be employed to combat this serious online issue.
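The pre-processing steps named in the abstract (text cleaning, tokenization, stemming/lemmatization, stop-word removal) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the stop-word list and the suffix-stripping stemmer are toy stand-ins for what a real system would take from a library such as NLTK, and the example comment is invented.

```python
import re

# Hypothetical stop-word list; a real pipeline would use a full one (e.g. NLTK's).
STOP_WORDS = {"a", "an", "the", "is", "are", "and", "or", "to", "of", "you"}

def clean(text):
    """Text cleaning: lower-case, strip URLs, @mentions, and non-letters."""
    text = text.lower()
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    text = re.sub(r"[^a-z\s]", " ", text)
    return text

def tokenize(text):
    """Tokenization: whitespace split is enough after cleaning."""
    return text.split()

def stem(token):
    """Toy stemmer: strip a few common English suffixes.

    Stands in for proper stemming/lemmatization (e.g. Porter stemmer,
    WordNet lemmatizer); only for illustration.
    """
    for suffix in ("ing", "ed", "ly", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    """Full pipeline: clean -> tokenize -> drop stop words -> stem."""
    return [stem(t) for t in tokenize(clean(text)) if t not in STOP_WORDS]

comment = "You are SO annoying!!! stop posting @user http://spam.example"
print(preprocess(comment))  # -> ['so', 'annoy', 'stop', 'post']
```

The resulting token lists would then be mapped to integer indices and padded to a fixed length before being fed to the RNN-family models the paper compares.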