Articles with "trained language" as a keyword




Injecting User Identity Into Pretrained Language Models for Document-Level Sentiment Classification

Published in "IEEE Access", 2022

DOI: 10.1109/access.2022.3158975

Abstract: This paper mainly studies the combination of pre-trained language models and user identity information for document-level sentiment classification. In recent years, pre-trained language models (PLMs) such as BERT have achieved state-of-the-art results on many NLP…

Keywords: identity; user identity; trained language; language models …
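The abstract above describes conditioning a PLM on user identity. One common way to do this (a minimal sketch only, not necessarily the paper's method; all sizes and names here are hypothetical) is to learn one embedding vector per user and add it to the token embeddings before the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: vocabulary, number of users, PLM hidden dimension.
VOCAB, USERS, DIM = 100, 10, 16

token_emb = rng.normal(size=(VOCAB, DIM))  # stands in for the PLM's token embeddings
user_emb = rng.normal(size=(USERS, DIM))   # one trainable vector per user identity

def inject_user_identity(token_ids, user_id):
    """Add the user's identity vector to every token embedding,
    so the downstream encoder sees user-conditioned inputs."""
    x = token_emb[token_ids]        # (seq_len, DIM)
    return x + user_emb[user_id]    # broadcasts over the sequence axis

doc = [3, 17, 42, 7]                # a toy document as token ids
h = inject_user_identity(doc, user_id=2)
print(h.shape)  # (4, 16)
```

In a real model the addition would happen inside the embedding layer so the user vectors are trained jointly with the sentiment classifier.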

Tree-KGQA: An Unsupervised Approach for Question Answering Over Knowledge Graphs

Published in "IEEE Access", 2022

DOI: 10.1109/access.2022.3173355

Abstract: Most Knowledge Graph-based Question Answering (KGQA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, in this paper, we…

Keywords: tree kgqa; question; kgqa; kgqa unsupervised …
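A core step in unsupervised KGQA is linking question mentions to knowledge-graph entities without any labeled data. As a rough illustration of the idea (the toy graph, entity ids, and matching rule below are assumptions, not Tree-KGQA's actual algorithm), one can match KG labels against the question string, preferring longer, more specific labels:

```python
# Hypothetical toy knowledge graph: entity id -> label.
kg_labels = {
    "Q1": "Albert Einstein",
    "Q2": "Einstein field equations",
    "Q3": "Germany",
}

def link_entities(question):
    """Training-free entity linking: keep KG labels that occur verbatim
    in the question, sorted so longer (more specific) labels come first."""
    q = question.lower()
    hits = [(eid, label) for eid, label in kg_labels.items()
            if label.lower() in q]
    return sorted(hits, key=lambda h: -len(h[1]))

print(link_entities("Where was Albert Einstein born?"))
# [('Q1', 'Albert Einstein')]
```

Real systems replace the substring test with fuzzy or index-based matching, but the training-free character is the same.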

On the Effectiveness of Pre-Trained Language Models for Legal Natural Language Processing: An Empirical Study

Published in "IEEE Access", 2022

DOI: 10.1109/access.2022.3190408

Abstract: We present the first comprehensive empirical evaluation of pre-trained language models (PLMs) for legal natural language processing (NLP) in order to examine their effectiveness in this domain. Our study covers eight representative and challenging legal…

Keywords: language; plm based; domain; trained language …
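An empirical study like the one above typically scores one model across many tasks and reports per-task and averaged metrics. A minimal harness sketch (the task names and label arrays are invented for illustration; the paper's actual tasks and metrics may differ):

```python
# Hypothetical predictions from a fine-tuned PLM on two toy legal tasks:
# task name -> (gold labels, predicted labels).
results = {
    "case_outcome": ([1, 0, 1, 1], [1, 0, 0, 1]),
    "clause_type":  ([2, 2, 0],    [2, 1, 0]),
}

def accuracy(gold, pred):
    """Fraction of examples where the prediction matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

per_task = {task: accuracy(g, p) for task, (g, p) in results.items()}
macro = sum(per_task.values()) / len(per_task)  # unweighted task average
print(per_task, round(macro, 3))
```

Macro-averaging weights every task equally regardless of dataset size, which is the usual choice when tasks differ widely in scale.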