
Contrastive Meta-Learner for Automatic Text Labeling and Semantic Textual Similarity


Generating large labeled datasets is a common barrier in machine learning efforts, with frequent challenges both in labeling and in building useful models for these datasets. We introduce a new approach to the automatic text labeling and semantic textual similarity tasks that uses an encoder layer fine-tuned with triplet loss. This approach, contrastive meta-learning (CML), is specifically designed to create a naturally separable embedding space from minimal a priori examples. We find that CML achieves performance comparable to state-of-the-art few-shot automatic labeling methods. For the semantic textual similarity task, CML closely approximates a model trained on the full dataset with as few as eight training examples, whereas other common approaches require outside datasets.
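The paper's CML architecture is not reproduced here, but the triplet loss it fine-tunes the encoder with is standard: for an anchor embedding, a positive (same-class) embedding, and a negative (different-class) embedding, the loss penalizes the anchor–positive distance unless it is smaller than the anchor–negative distance by at least a margin. A minimal sketch (function names and toy vectors are illustrative, not from the paper):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: push the positive closer to the
    anchor than the negative by at least `margin`."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings: the positive lies near the anchor,
# the negative lies far away, so the margin is satisfied.
a = [1.0, 0.0]
p = [0.9, 0.1]
n = [-1.0, 0.0]
print(triplet_loss(a, p, n))  # 0.0 — triplet already satisfies the margin
```

Minimizing this loss over many such triplets pulls same-class texts together and pushes different-class texts apart, which is what yields the separable embedding space the abstract describes.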

Keywords: semantic textual; automatic text; labeling semantic; textual similarity; text labeling

Journal Title: IEEE Access
Year Published: 2024



