Hybrid attention-based transformer block model for distant supervision relation extraction

Xiao Y, Jin Y, Cheng R, Hao K (2022)
Neurocomputing 470: 29-39.

Journal article | Published | English
 
Download
No files have been uploaded. Publication record only.
Author(s)
Xiao, Yan; Jin, Yaochu (UniBi); Cheng, Ran; Hao, Kuangrong
Abstract / Notes
With the exponential growth of digital text, it is challenging to efficiently obtain specific knowledge from massive amounts of unstructured text. As a basic task in natural language processing (NLP), relation extraction (RE) aims to extract semantic relations between entity pairs in a given text. To avoid manual labeling of datasets, distant supervision relation extraction (DSRE) has been widely used; it employs a knowledge base to automatically annotate datasets. Unfortunately, this method suffers heavily from wrong labelling due to its underlying strong assumptions. To address this issue, we propose a new framework using a hybrid attention-based Transformer block with multi-instance learning for DSRE. More specifically, the Transformer block is, for the first time, used as a sentence encoder, which mainly utilizes multi-head self-attention to capture syntactic information at the word level. Then, a novel sentence-level attention mechanism is proposed to calculate the bag representation, aiming to exploit all useful information in each sentence. Experimental results on the public New York Times (NYT) dataset demonstrate that the proposed approach outperforms state-of-the-art algorithms, verifying the effectiveness of our model on the DSRE task. (c) 2021 Elsevier B.V. All rights reserved.
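To illustrate the bag-level aggregation the abstract refers to, the sketch below shows a generic sentence-level attention for multi-instance DSRE: sentence encodings (stand-ins for the paper's Transformer-block outputs) are weighted by their similarity to a learned per-relation query and summed into a single bag representation. This is a minimal PyTorch sketch under that assumption, not the authors' actual "hybrid attention" mechanism; the class name SentenceLevelAttention, the relation_queries parameter, and the dimensions are all hypothetical.

# Hedged sketch: generic sentence-level attention over a bag of sentences,
# NOT the exact mechanism proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceLevelAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # One learned query vector per relation class (a common DSRE design choice).
        self.relation_queries = nn.Parameter(torch.randn(num_relations, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, num_relations)

    def forward(self, sentence_reprs: torch.Tensor, relation_id: int) -> torch.Tensor:
        # sentence_reprs: (num_sentences_in_bag, hidden_dim), one vector per sentence
        # produced by the sentence encoder (e.g. a Transformer block).
        query = self.relation_queries[relation_id]        # (hidden_dim,)
        scores = sentence_reprs @ query                   # (num_sentences,)
        weights = F.softmax(scores, dim=0)                # attention weights over sentences
        bag_repr = weights @ sentence_reprs               # (hidden_dim,) weighted sum
        return self.classifier(bag_repr)                  # relation logits for the bag

# Usage: a bag of 4 sentences, each encoded into a 64-dim vector.
bag = torch.randn(4, 64)
model = SentenceLevelAttention(hidden_dim=64, num_relations=53)
logits = model(bag, relation_id=7)
print(logits.shape)  # torch.Size([53])

At training time the query would typically correspond to the bag's (distantly supervised) relation label, so that informative sentences receive higher weights and noisy ones are down-weighted.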
Keywords
Distant supervision relation extraction (DSRE); Transformer block; Sentence-level attention
Year of publication
2022
Journal title
Neurocomputing
Volume
470
Page(s)
29-39
ISSN
0925-2312
eISSN
1872-8286
Page URI
https://pub.uni-bielefeld.de/record/2959765

Cite

Xiao, Y., Jin, Y., Cheng, R., & Hao, K. (2022). Hybrid attention-based transformer block model for distant supervision relation extraction. Neurocomputing, 470, 29-39. https://doi.org/10.1016/j.neucom.2021.10.037