Abstract Existing studies learn sentiment-specific word representations to boost the performance of Twitter sentiment classification by encoding both n-gram and distantly supervised tweet sentiment information in the learning process. Pioneering efforts explicitly or implicitly assume that all words within a tweet have the same sentiment polarity as the whole tweet, which ignores each word's own sentiment polarity. To alleviate this problem, we propose to learn sentiment-specific word embeddings by exploiting both lexicon resources and distantly supervised information. In particular, we develop a multi-level sentiment-enriched word embedding learning method, which employs a parallel asymmetric neural network to model n-gram, word-level sentiment, and tweet-level sentiment in the learning process. Extensive experiments on standard benchmarks demonstrate that our approach outperforms state-of-the-art methods.
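The multi-level idea described above can be illustrated with a toy sketch. The snippet below is not the authors' model: it is a hypothetical NumPy illustration of a training objective that jointly weights three terms, (i) an n-gram (context) term, (ii) a word-level sentiment term from a lexicon, and (iii) a tweet-level sentiment term from distant supervision. All names (`multi_level_loss`, the weight vectors, the loss weights `alpha`/`beta`/`gamma`) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: vocabulary of 5 words, 4-dimensional embeddings.
V, D = 5, 4
E = rng.normal(scale=0.1, size=(V, D))    # word embedding matrix (assumed)
w_ngram = rng.normal(scale=0.1, size=D)   # scorer for the n-gram term
w_word = rng.normal(scale=0.1, size=D)    # scorer for word-level sentiment
w_tweet = rng.normal(scale=0.1, size=D)   # scorer for tweet-level sentiment

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_level_loss(tweet_ids, word_labels, tweet_label,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of three objectives, sketching the multi-level idea:
    an n-gram (context) term, a word-level sentiment term from a lexicon,
    and a tweet-level sentiment term from distant supervision."""
    vecs = E[tweet_ids]
    # n-gram term: each observed word should score higher than a noise word
    # (a ranking-style contrastive loss, assumed for illustration).
    noise = E[rng.integers(0, V, size=len(tweet_ids))]
    ngram = -np.log(sigmoid(vecs @ w_ngram - noise @ w_ngram)).mean()
    # Word-level term: logistic loss per word against lexicon polarity labels.
    p_word = sigmoid(vecs @ w_word)
    word = -np.mean(word_labels * np.log(p_word)
                    + (1 - word_labels) * np.log(1 - p_word))
    # Tweet-level term: logistic loss on the averaged tweet representation,
    # with the distantly supervised tweet label.
    p_tweet = sigmoid(vecs.mean(axis=0) @ w_tweet)
    tweet = -(tweet_label * np.log(p_tweet)
              + (1 - tweet_label) * np.log(1 - p_tweet))
    return alpha * ngram + beta * word + gamma * tweet

# Toy tweet of three word ids, with lexicon labels per word and a
# distantly supervised positive tweet label.
loss = multi_level_loss(np.array([0, 2, 3]), np.array([1.0, 0.0, 1.0]), 1.0)
print(loss)
```

In a real model the three terms would share the embedding matrix and be optimized jointly (e.g., by gradient descent), so that each word vector is shaped simultaneously by its contexts, its lexicon polarity, and the sentiment of the tweets it appears in.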