Detailed Information


BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Authors
Lee, Jinhyuk; Yoon, Wonjin; Kim, Sungdong; Kim, Donghyeon; Kim, Sunkyu; So, Chan Ho; Kang, Jaewoo
Issue Date
15-Feb-2020
Publisher
Oxford University Press
Citation
BIOINFORMATICS, v.36, no.4, pp.1234 - 1240
Indexed
SCIE
SCOPUS
Journal Title
BIOINFORMATICS
Volume
36
Number
4
Start Page
1234
End Page
1240
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/57643
DOI
10.1093/bioinformatics/btz682
ISSN
1367-4803
Abstract
Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.

Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
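
As a minimal sketch of how the pre-trained model described in the abstract is commonly consumed, the Python snippet below loads a BioBERT checkpoint and extracts contextual embeddings for a biomedical sentence. It assumes the Hugging Face transformers library and the publicly hosted checkpoint name "dmis-lab/biobert-v1.1", neither of which is stated in this record; the article itself attaches task-specific output layers to these representations and fine-tunes them for named entity recognition, relation extraction and question answering.

import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption (community-hosted BioBERT weights),
# not taken from this metadata record.
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")

sentence = "The BRCA1 gene is associated with breast cancer."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings; for the base architecture this has
# shape (1, num_tokens, 768).
print(outputs.last_hidden_state.shape)

For the downstream tasks reported in the abstract, a small classification head (for example, a token-level classifier for named entity recognition) would be placed on top of these embeddings and fine-tuned on labeled biomedical data.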
Files in This Item
There are no files associated with this item.
Appears in
Collections
Graduate School > Department of Computer Science and Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kang, Jaewoo
Department of Computer Science and Engineering
