Linking event triggers with their respective arguments is an essential component of an event extraction system. Linking event triggers with their corresponding arguments is challenging when a sentence contains multiple event triggers and arguments. The task becomes even more challenging in a low-resource setup due to the unavailability of natural language processing resources and tools. In this paper, we study the event-argument linking task based on a disaster event ontology in a low-resource setup. We use BERT-based and non-BERT-based deep learning models in both monolingual and cross-lingual event-argument linking tasks. We also perform an ablation study of features such as position embeddings (PE), position indicator (PI), and segment ID (SI) to understand their contribution to performance improvement in non-BERT-based models. Using three languages, namely Hindi, Bengali, and Marathi, we compare these results with multilingual BERT-based deep neural models in both monolingual and cross-lingual scenarios. We observe that the multilingual BERT-based model outperforms the best-performing non-BERT-based model in cross-lingual settings. In monolingual settings, however, its performance is similar on the Hindi and Bengali datasets and slightly better on the Marathi dataset. We choose the disaster domain due to its social implications. Our experiments can help mine important information about disaster events from news articles and build event knowledge graphs in low-resource languages.
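The abstract does not detail how the linking task is framed for the multilingual BERT-based model, so the following is only a minimal sketch of one common formulation: binary classification over a candidate trigger-argument pair marked in the sentence with special tokens (a form of position indicator). The marker tokens ([TRG], [ARG]), the classification head, and the helper function below are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: event trigger-argument linking as binary pair classification with
# multilingual BERT (assumed formulation; markers and head are illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # multilingual BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Special marker tokens act as a position indicator (PI) around the candidate
# event trigger and argument spans (assumed marker scheme).
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[TRG]", "[/TRG]", "[ARG]", "[/ARG]"]}
)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.resize_token_embeddings(len(tokenizer))  # account for added markers


def link_score(sentence_with_markers: str) -> float:
    """Return the (untrained) model's probability that the marked trigger
    and argument belong to the same event."""
    inputs = tokenizer(sentence_with_markers, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()


# Hypothetical disaster-domain example in Hindi, with markers around a
# candidate trigger ("earthquake") and argument ("Nepal"):
score = link_score("[TRG] भूकंप [/TRG] से [ARG] नेपाल [/ARG] में भारी नुकसान हुआ।")
print(score)
```

Because the same pretrained multilingual encoder covers Hindi, Bengali, and Marathi, a classifier fine-tuned on one language can, in principle, be applied to the others, which is the cross-lingual setting compared against the non-BERT-based models.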