
Learning Chinese Word Embeddings With Words and Subcharacter N-Grams



Co-occurrence information between words is the basis of training word embeddings. In addition, Chinese characters are composed of subcharacters, and words made up of the same characters or subcharacters usually have similar semantics, yet this internal substructure information is usually neglected in popular models. In this paper, we propose a novel method for learning Chinese word embeddings that makes full use of both external co-occurrence context information and internal substructure information. We represent each word as a bag of subcharacter n-grams, and our model learns vector representations for the word and for its subcharacter n-grams. The final word embedding is the sum of these two kinds of vector representations, which allows the learned embeddings to capture both internal structure information and external co-occurrence information. Experiments show that our method outperforms state-of-the-art methods on benchmarks.
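The composition step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `DECOMP` table, the vector dimensionality, and the random initialization are all assumptions for demonstration (a real system would use a full character-to-component database and vectors trained on co-occurrence data).

```python
import numpy as np

# Hypothetical subcharacter decomposition table. Real systems draw on a
# radical/component database; these two entries are illustrative only.
DECOMP = {
    "智": ["矢", "口", "日"],
    "能": ["厶", "月", "匕", "匕"],
}

def subchar_ngrams(word, n_min=1, n_max=2):
    """Flatten a word into its subcharacter sequence, then collect n-grams."""
    seq = [s for ch in word for s in DECOMP.get(ch, [ch])]
    grams = []
    for n in range(n_min, n_max + 1):
        grams += ["".join(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    return grams

DIM = 4                     # toy dimensionality for the sketch
rng = np.random.default_rng(0)
word_vec = {}               # one vector per word (external co-occurrence side)
gram_vec = {}               # one vector per subcharacter n-gram (internal side)

def embed(word):
    """Final embedding = word vector + sum of its subcharacter n-gram vectors."""
    total = word_vec.setdefault(word, rng.standard_normal(DIM)).copy()
    for g in subchar_ngrams(word):
        total += gram_vec.setdefault(g, rng.standard_normal(DIM))
    return total
```

In training, both the word vectors and the shared n-gram vectors would be updated from co-occurrence contexts, so words sharing subcharacter n-grams end up with correlated embeddings.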

Keywords: Chinese word embeddings; learning Chinese; subcharacter n-grams

Journal Title: IEEE Access
Year Published: 2019



