In this paper, we propose ACNN, a text representation and classification model based on bi-attention, multi-layer attention, and a convolutional neural network. The bi-attention uses two attention mechanisms to learn two context vectors: a forward RNN with attention learns the forward context vector $\overrightarrow{\mathbf{c}}$, and a backward RNN with attention learns the backward context vector $\overleftarrow{\mathbf{c}}$; the two are then concatenated to obtain the context vector $\mathbf{c}$. The multi-layer attention is a stack of bi-attention layers. In ACNN, the context vector $\mathbf{c}$ produced by the bi-attention is passed through a convolution operation, and max-pooling reduces its dimensionality, converting the text into a low-dimensional sentence vector $\mathbf{m}$. Finally, a softmax classifier performs the text classification. We evaluate our model on 8 benchmark text classification datasets, where it achieves performance better than or comparable to state-of-the-art methods.
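The pipeline described above (forward/backward attention contexts, concatenation, convolution, max-pooling, softmax) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hidden states stand in for real RNN outputs, the shared attention vector `w`, the filter sizes, and the classifier weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_context(H, w):
    """Attention over hidden states H of shape (T, d):
    scores -> weights -> weighted sum (context vector of shape (d,))."""
    alpha = softmax(H @ w)
    return alpha @ H

T, d = 6, 4                              # toy sequence length and hidden size
H_fwd = rng.standard_normal((T, d))      # stand-in for forward-RNN hidden states
H_bwd = rng.standard_normal((T, d))      # stand-in for backward-RNN hidden states
w = rng.standard_normal(d)               # assumed shared attention parameter

c_fwd = attention_context(H_fwd, w)      # forward context vector
c_bwd = attention_context(H_bwd, w)      # backward context vector
c = np.concatenate([c_fwd, c_bwd])       # bi-attention context, shape (2d,)

# 1-D convolution over c with window size k, then max-pooling per filter
k, n_filters = 3, 5
filters = rng.standard_normal((n_filters, k))
conv = np.array([[f @ c[i:i + k] for i in range(len(c) - k + 1)]
                 for f in filters])
m = conv.max(axis=1)                     # max-pooled sentence vector, shape (n_filters,)

# toy 2-class softmax classifier on top of m
logits = rng.standard_normal((2, n_filters)) @ m
probs = softmax(logits)
```

The sketch keeps every stage one line long so the data flow of the abstract is visible end to end; in the actual model the attention, convolution, and classifier parameters would all be learned jointly.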