
Text Semantic Classification of Long Discourses Based on Neural Networks with Improved Focal Loss



Semantic classification of Chinese long discourses is an important and challenging task. Discourse text is high-dimensional and sparse, and when the dataset contains a large number of classes, the class distribution becomes severely imbalanced. To address these problems, we propose a novel end-to-end model called CRAFL, built on a convolutional layer with an attention mechanism, recurrent neural networks, and an improved focal loss function. First, a residual network (ResNet) extracts phrase-level semantic representations from word embedding vectors and reduces the dimensionality of the input matrix. Then, the attention mechanism weights the output of the ResNet, and a long short-term memory (LSTM) layer learns the sequential features. Last, and most significantly, we apply an improved focal loss function to mitigate the class-imbalance problem. Compared with other state-of-the-art models on a long discourse dataset, CRAFL proves more effective for this task.
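The abstract does not detail the paper's specific improvement to focal loss, but the baseline it modifies is the standard focal loss of Lin et al., which down-weights easy examples so that training focuses on hard, minority-class samples. A minimal NumPy sketch of that baseline (function name and signature are illustrative, not from the paper):

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Standard multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).

    probs:   (N, C) array of predicted class probabilities (rows sum to 1)
    targets: (N,)   array of integer class labels
    alpha:   class-balancing weight
    gamma:   focusing parameter; gamma = 0 recovers (alpha-scaled) cross-entropy
    """
    # Probability assigned to the true class of each sample.
    p_t = probs[np.arange(len(targets)), targets]
    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # samples, shifting gradient mass toward hard examples.
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With gamma > 0, confidently classified samples contribute far less loss than under plain cross-entropy, which is what makes the loss suited to the imbalanced class distributions the paper targets.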

Keywords: neural networks; improved focal; focal loss; long discourses; semantic classification

Journal Title: Computational Intelligence and Neuroscience
Year Published: 2021



