Text classification is a fundamental task in natural language processing and underpins applications such as sentiment analysis and question classification. Different NLP tasks require different linguistic features: text classification relies more on semantic features, whereas tasks such as dependency parsing depend more on syntactic features. Most existing methods improve performance by mixing and calibrating features without distinguishing feature types or their respective effects. In this paper, we propose SRCLA, a stacked residual recurrent neural network with a cross-layer attention model that filters out more semantic features for text classification. We first build a stacked network structure to separate different types of linguistic features, and then propose a novel cross-layer attention mechanism in which higher-level features supervise lower-level features to refine the filtering process, so that more semantic features are selected for classification. We conduct experiments on eight text classification tasks, covering sentiment analysis, question classification, and subjectivity classification, and compare against a broad range of baselines. Experimental results show that the proposed approach achieves state-of-the-art results on 5 of the 8 tasks.
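To make the described architecture concrete, below is a minimal PyTorch sketch of a stacked residual RNN with cross-layer attention. The abstract does not give the paper's exact equations, so the choice of LSTM cells, the layer sizes, the mean-pooled query, and the attention formulation are all assumptions for illustration, not the authors' actual implementation.

    # Hedged sketch of a stacked residual RNN with cross-layer attention.
    # All hyperparameters and the attention form are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SRCLASketch(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                     num_layers=3, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Stacked bidirectional LSTM layers; each layer refines the
            # output of the layer below via a residual connection.
            self.layers = nn.ModuleList([
                nn.LSTM(embed_dim if i == 0 else hidden_dim,
                        hidden_dim // 2, bidirectional=True, batch_first=True)
                for i in range(num_layers)
            ])
            self.attn_proj = nn.Linear(hidden_dim, hidden_dim)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, tokens):                  # tokens: (batch, seq)
            x = self.embed(tokens)                  # (batch, seq, embed_dim)
            outputs = []
            for i, rnn in enumerate(self.layers):
                h, _ = rnn(x)                       # (batch, seq, hidden_dim)
                if i > 0:
                    h = h + x                       # residual connection
                outputs.append(h)
                x = h
            # Cross-layer attention: a query derived from the top (most
            # semantic) layer scores each position of every lower layer,
            # so higher-level features supervise lower-level selection.
            top = outputs[-1]                       # (batch, seq, hidden_dim)
            query = self.attn_proj(top.mean(dim=1)) # (batch, hidden_dim)
            pooled = []
            for h in outputs[:-1]:
                scores = torch.bmm(h, query.unsqueeze(2)).squeeze(2)
                weights = F.softmax(scores, dim=1)  # (batch, seq)
                pooled.append(torch.bmm(weights.unsqueeze(1), h).squeeze(1))
            fused = top.mean(dim=1) + sum(pooled)   # fuse selected features
            return self.classifier(fused)

    # Usage:
    # model = SRCLASketch(vocab_size=10000)
    # logits = model(torch.randint(0, 10000, (4, 20)))  # (4, num_classes)

One design note: skipping the residual at the first layer avoids a dimension mismatch when the embedding size differs from the hidden size; how the paper handles this case is not stated in the abstract.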