Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work has typically addressed this question using limited human data restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information (PPMI); global vectors (GloVe); and three variations each of Skip-gram and continuous bag of words (CBOW), using word, context, and mean embeddings) on a theoretically motivated, rich set of semantic relations involving words from multiple syntactic classes and spanning the abstract-concrete continuum (19 sets of ratings). We found that, overall, the DSMs capture overall semantic similarity best; they can also capture verb-noun thematic role relations and noun-noun event-based relations that play important roles in sentence comprehension. Interestingly, Skip-gram and CBOW performed best at capturing similarity, whereas GloVe dominated on the thematic role and event-based relations. We discuss the theoretical and practical implications of our results, make recommendations for users of these models, and demonstrate significant differences in model performance on event-based relations.
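As a point of reference for readers unfamiliar with the simplest of the tested models, the PPMI model can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the toy co-occurrence counts and the cosine-similarity comparison are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical toy co-occurrence counts (rows = target words, columns = context
# words); real DSMs build such a matrix from corpus windows.
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 3.0],
              [0.0, 3.0, 5.0]])

def ppmi(counts):
    """Positive pointwise mutual information from a co-occurrence count matrix."""
    total = counts.sum()
    p_wc = counts / total                    # joint probability P(w, c)
    p_w = p_wc.sum(axis=1, keepdims=True)    # marginal P(w) over targets
    p_c = p_wc.sum(axis=0, keepdims=True)    # marginal P(c) over contexts
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0             # zero counts get PPMI of 0
    return np.maximum(pmi, 0.0)              # clip negative PMI to 0

def cosine(u, v):
    """Cosine similarity, the standard measure of relatedness between rows."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

M = ppmi(C)
sim = cosine(M[1], M[2])  # similarity of the second and third target words
```

Each row of `M` is then a sparse vector representation of a target word, and row-wise cosine similarity is one common way such vectors are compared against human relatedness ratings.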