Published in 2022 in "IEEE Access"
DOI: 10.1109/access.2022.3158975
Abstract: This paper mainly studies the combination of pre-trained language models and user identity information for document-level sentiment classification. In recent years, pre-trained language models (PLMs) such as BERT have achieved state-of-the-art results on many NLP…
Keywords: identity; user identity; trained language; language models …
Published in 2022 in "IEEE Access"
DOI: 10.1109/access.2022.3173355
Abstract: Most Knowledge Graph-based Question Answering (KGQA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, in this paper, we…
Keywords: tree kgqa; question; kgqa; kgqa unsupervised …
Published in 2022 in "IEEE Access"
DOI: 10.1109/access.2022.3190408
Abstract: We present the first comprehensive empirical evaluation of pre-trained language models (PLMs) for legal natural language processing (NLP) in order to examine their effectiveness in this domain. Our study covers eight representative and challenging legal…
Keywords: language; plm based; domain; trained language …