Where Does Bias in Common Sense Knowledge Models Come From?

Common sense knowledge bases and models have been shown to embed bias. We investigate the source of such bias in a knowledge model called the common sense transformer (COMET) by training it on various combinations of language models and knowledge bases. We experiment with three language models of different sizes and architectures, and two knowledge bases with different modeling principles. We use sentiment and regard as proxy measures of bias and analyze bias using three methods: overgeneralization and disparity, keyword outliers, and relational dimensions. Our results show that larger models tend to be more nuanced in their biases but are more biased than smaller models in certain categories (e.g., utility of religions), which can be attributed to the larger knowledge accumulated during pretraining. We also observe that training on a larger set of common sense knowledge typically leads to more bias, and that models generally have stronger negative regard than positive.
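The "overgeneralization and disparity" analysis can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the group names, completions, and the toy lexicon-based scorer below are all hypothetical stand-ins (the authors use trained sentiment and regard classifiers on COMET-generated tuples). The sketch only shows the shape of the comparison: score generated completions per demographic group, then compare group means.

```python
# Illustrative sketch only (assumed names, not the paper's code):
# a toy lexicon stands in for a trained sentiment/regard classifier.
POSITIVE = {"helpful", "kind", "smart"}
NEGATIVE = {"dangerous", "lazy", "poor"}

def sentiment(completion: str) -> int:
    """Crude lexicon proxy: +1 per positive word, -1 per negative word."""
    words = completion.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def disparity(completions_by_group: dict[str, list[str]]) -> dict[str, float]:
    """Mean sentiment per demographic group; large gaps suggest bias."""
    return {
        group: sum(map(sentiment, comps)) / len(comps)
        for group, comps in completions_by_group.items()
    }

# Hypothetical completions a knowledge model might emit for a template
# like "PersonX, who is <group>, is seen as ...":
generated = {
    "group_a": ["seen as helpful and kind", "seen as smart"],
    "group_b": ["seen as lazy", "seen as dangerous and poor"],
}
scores = disparity(generated)  # {'group_a': 1.5, 'group_b': -1.5}
```

A systematic gap in mean scores between groups, as in this toy output, is the kind of disparity signal the abstract's first analysis method refers to; the real study additionally uses regard, which measures attitude toward the person rather than general polarity.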

Keywords: common sense knowledge; knowledge bases; bias

Journal Title: IEEE Internet Computing
Year Published: 2022
