Event prediction is essential in social network (SN) analysis for studying the SN's evolutionary patterns (communities). Machine learning (ML) models are often used to predict events in SN communities, but ML algorithms can produce biased results on the same dataset. Hence, an ML model's performance must be validated on unseen data to avoid bias in learning. Researchers generally use the generative adversarial network (GAN) model to generate realistic sample data and enhance event prediction. In a conventional GAN, however, it is challenging for the discriminator to learn features using a single layer with similar weights. Therefore, this article proposes an improved version of the GAN model, named the generative adversarial network classifier (GAN‐C). The proposed GAN‐C model contains an additional layer, called the classifier, that generates different feature maps in which weights are adjusted dynamically, based on conditions, to predict events. To be precise, conditioning the weights in the classifier layer using the entropy values of the features is a simple and effective way to minimize the classifier loss function (categorical cross‐entropy). The GAN‐C model incurs a 16% loss for the first 10 batches; after that, the loss becomes negligible, and the model can generate non‐overlapping events. The gaps between such non‐overlapping events are analyzed using the Jensen–Shannon divergence technique. The experimental results show that the existing single‐GAN and multi‐GAN methods predict events with 79.34% and 82.17% accuracy, respectively, whereas the proposed GAN‐C achieves an improved accuracy of 88.56% on the same dataset. Based on root mean square error and inception score comparisons, the data generated by the GAN‐C model are also approximately 55.85% and 80.59% more realistic than those of multi‐GAN and single‐GAN, respectively. GAN‐C is also approximately 68.01% faster than the other GAN models. Thus, the article's theoretical and experimental analysis shows that GAN‐C is well suited to predicting events in massive datasets.
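The abstract does not include implementation details, but it names two concrete mechanisms: conditioning classifier-layer weights on feature entropies before computing the categorical cross-entropy loss, and measuring the gap between event distributions with the Jensen–Shannon divergence. The NumPy/SciPy sketch below only illustrates these two ideas under stated assumptions; the toy data, variable names (features, conditioned_weights, etc.), and the specific way entropies are applied to the weights are hypothetical, not the authors' implementation.

```python
# Hedged sketch, not the GAN-C implementation: (1) per-feature entropy used to
# condition classifier weights before a categorical cross-entropy loss, and
# (2) Jensen-Shannon divergence between two event distributions.
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)

# Toy stand-ins for a classifier layer: 8 samples x 5 features, a 5x3 weight
# matrix (3 event classes), and integer class labels.
features = rng.random((8, 5))
weights = rng.normal(size=(5, 3))
labels = rng.integers(0, 3, size=8)

# (1) Entropy-based conditioning (assumed form): treat each feature column as
# a discrete distribution, take its Shannon entropy, normalize, and rescale
# the corresponding rows of the weight matrix.
col_probs = features / features.sum(axis=0, keepdims=True)
feature_entropy = entropy(col_probs, base=2, axis=0)      # one value per feature
condition = feature_entropy / feature_entropy.sum()
conditioned_weights = weights * condition[:, None]

# Categorical cross-entropy of the conditioned classifier output (softmax).
logits = features @ conditioned_weights
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
cce = -np.log(probs[np.arange(len(labels)), labels]).mean()
print(f"categorical cross-entropy: {cce:.4f}")

# (2) Jensen-Shannon divergence between two event-frequency distributions,
# e.g. real vs. generated events over three event classes (toy values).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
js_div = jensenshannon(p, q, base=2) ** 2   # jensenshannon returns the distance
print(f"Jensen-Shannon divergence: {js_div:.4f}")
```

A smaller Jensen–Shannon divergence between the real and generated event distributions would indicate more realistic generated data, which is consistent with how the abstract uses the measure to analyze gaps between non-overlapping events.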