In recent years, many scholars have used word lexicons to incorporate word-level information into character-based models to improve the performance of Chinese relation extraction (RE). For example, Li et al. proposed the MG-Lattice model in 2019 and achieved state-of-the-art (SOTA) results. However, the structure of MG-Lattice causes information loss, which limits the performance of Chinese RE. This paper proposes an adaptive method that incorporates word information at the embedding layer, using a word lexicon to merge all words matching each character into a character-based model, thereby addressing the information loss problem of MG-Lattice. The method can be combined with other general neural network architectures and is transferable. Experiments on two benchmark Chinese RE datasets show that our method achieves inference speeds up to 12.9 times faster than the SOTA model while also improving performance. The results further show that combining this method with the BERT pretrained model effectively supplements the information obtained from pretraining, yielding additional gains for Chinese RE.
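The core idea described above, merging all lexicon words that match each character into the embedding layer of a character-based model, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (`match_words`, `LexiconEmbedding`) are hypothetical, and plain mean pooling stands in for whatever adaptive weighting the paper actually uses.

```python
# Minimal sketch (not the paper's code) of lexicon-enhanced character
# embeddings: for each character, all lexicon words containing it are
# embedded and pooled, then concatenated to the character embedding.

import torch
import torch.nn as nn


def match_words(sentence: str, lexicon: set, max_len: int = 4):
    """For each character position, collect the lexicon words covering it."""
    matches = [[] for _ in sentence]
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + max_len, len(sentence)) + 1):
            word = sentence[i:j]
            if word in lexicon:
                for k in range(i, j):  # word covers positions i..j-1
                    matches[k].append(word)
    return matches


class LexiconEmbedding(nn.Module):
    """Concatenate each character vector with the pooled vectors of its matched words."""

    def __init__(self, char_vocab, word_vocab, char_dim=50, word_dim=50):
        super().__init__()
        self.char_ids = {c: i for i, c in enumerate(char_vocab)}
        self.word_ids = {w: i for i, w in enumerate(word_vocab)}
        self.char_embed = nn.Embedding(len(char_vocab), char_dim)
        self.word_embed = nn.Embedding(len(word_vocab), word_dim)
        self.word_dim = word_dim

    def forward(self, sentence: str, lexicon: set) -> torch.Tensor:
        matches = match_words(sentence, lexicon)
        rows = []
        for ch, words in zip(sentence, matches):
            c = self.char_embed(torch.tensor(self.char_ids[ch]))
            if words:
                # Mean-pool all matched word vectors; the paper presumably
                # replaces this with an adaptive (learned or weighted) scheme.
                ids = torch.tensor([self.word_ids[w] for w in words])
                w = self.word_embed(ids).mean(dim=0)
            else:
                # No lexicon match: pad the word slot with zeros.
                w = torch.zeros(self.word_dim)
            rows.append(torch.cat([c, w]))
        return torch.stack(rows)  # shape: (seq_len, char_dim + word_dim)
```

Because the lexicon fusion happens entirely at the embedding layer, the output tensor can feed any downstream encoder (BiLSTM, Transformer, or a BERT-style model), which is what makes this kind of method transferable across architectures, as the abstract claims.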