This paper presents a character-level sequence-to-sequence learning method, RNNembed. The method allows the system to read raw characters, instead of words generated by preprocessing steps, into a single, purely neural network model under an end-to-end framework. Specifically, we embed a recurrent neural network into an encoder–decoder framework and generate character-level sequence representations as input. This significantly reduces the dimension of the input feature space and avoids the need to handle unknown or rare words in sequences. In the language model, we improve the basic structure of the gated recurrent unit by adding an output gate, which filters out unimportant information involved in the attention scheme of the alignment model. Our proposed method was evaluated on a large-scale dataset for an English-to-Chinese translation task. Experimental results demonstrate that the proposed approach achieves translation performance comparable, or close, to conventional word-based and phrase-based systems.
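To illustrate the kind of modification the abstract describes, the sketch below shows a GRU cell augmented with an extra output gate whose filtered state would be passed to the attention/alignment model. This is a minimal illustrative sketch: the weight names, the exact gating equations, and the choice of where the gate is applied are assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): a GRU cell with an
# added output gate that filters the hidden state before it is exposed to
# the attention mechanism. All parameter names and equations are assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OutputGatedGRUCell:
    """Standard GRU update plus a hypothetical output gate o_t."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        def mat(rows, cols):
            return rng.normal(0.0, 0.1, size=(rows, cols))
        # update gate, reset gate, candidate state, and the added output gate
        self.Wz, self.Uz = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)
        self.Wr, self.Ur = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)
        self.Wh, self.Uh = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)
        self.Wo, self.Uo = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)

    def step(self, x, h_prev):
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev)          # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev))
        h = (1.0 - z) * h_prev + z * h_tilde                 # new hidden state
        o = sigmoid(self.Wo @ x + self.Uo @ h_prev)          # added output gate
        s = o * h   # filtered state exposed to the attention/alignment model
        return h, s

# Toy usage: one step on a random character-level embedding.
cell = OutputGatedGRUCell(input_dim=32, hidden_dim=64)
x_t = np.random.default_rng(1).normal(size=32)
h0 = np.zeros(64)
h1, attended = cell.step(x_t, h0)
print(attended.shape)  # (64,)
```

In this sketch the output gate plays a role analogous to the LSTM output gate: the recurrent state h is carried forward unchanged, while the attention scheme only sees the gated version s.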
               