
Exploring Transformer-Based Learning for Negation Detection in Biomedical Texts


NLP techniques have been widely adopted in the biomedical domain to perform various text-analytics tasks, such as searching biomedical literature and extracting and deriving new knowledge from biomedical data. One type of biomedical data is clinical texts (e.g., clinical cases and medical records), which typically contain physicians' notes about a patient's health, including previous medical history (symptoms, diseases, lab exams, treatments, etc.), as every hospital visit adds information to the patient's record. Another type is biological articles, which typically discuss and explore a certain phenomenon, such as the behavior of biological entities (e.g., genetic relations and interactions among them) or the roles of specific biological processes in causing diseases (e.g., how genetic amplification can cause tumorous diseases). For both types of biomedical data, negation detection is an essential analytics task for identifying negated contexts in biomedical text (e.g., detecting a statement establishing that a patient does not have a certain clinical condition, or statements indicating the nonexistence of certain relations among biological entities). Prior work has addressed this task with a variety of approaches, such as rule-based systems, conventional machine-learning classifiers, and deep-learning methods. In this work, we propose applying transformer-based learning for negation detection in biomedical texts. We use pre-trained BERT and similar models (such as ALBERT, XLNet, and ELECTRA) to address two negation-detection subtasks: negation sentence identification and negation scope recognition. We evaluate our approach on the BioScope corpus using measures such as accuracy, precision, recall, F1, and percentage of correct scopes (PCS). Our findings show the potential of transformer-based learning for negation detection, reaching an accuracy of 99% for negation identification and a PCS of 95% for negation scope recognition.
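To make the scope-level metric concrete, the following is a minimal sketch of how percentage of correct scopes (PCS) can be computed: a predicted scope counts as correct only when it exactly matches the gold scope. The function name and the token-index representation are illustrative assumptions, not taken from the paper.

```python
def pcs(gold_scopes, predicted_scopes):
    """Percentage of correct scopes: exact match over token-index sets.

    Each scope is given as the list of token indices that fall inside
    the negation scope of one sentence (an illustrative encoding).
    """
    if not gold_scopes:
        return 0.0
    correct = sum(
        1 for gold, pred in zip(gold_scopes, predicted_scopes)
        if set(gold) == set(pred)
    )
    return 100.0 * correct / len(gold_scopes)

# Three gold scopes; the second prediction includes one extra token,
# so only 2 of 3 scopes match exactly.
gold = [[2, 3, 4], [1, 2], [5, 6, 7, 8]]
pred = [[2, 3, 4], [1, 2, 3], [5, 6, 7, 8]]
print(round(pcs(gold, pred), 1))  # -> 66.7
```

Because PCS gives no credit for partially correct scopes, it is a stricter measure than token-level precision/recall/F1, which is why the abstract reports it separately.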

Keywords: negation detection; negation scope recognition; transformer-based learning; biomedical texts

Journal Title: IEEE Access
Year Published: 2022



