Detailed Information


Adversarial Example-Based Evaluation of How Language Models Understand Korean Case Alternation (적대적 사례에 기반한 언어 모형의 한국어 격 교체 이해 능력 평가)

Other Titles
Adversarial Example-Based Evaluation of How Language Models Understand Korean Case Alternation
Authors
Song, Sanghoun; Noh, Kang San; Park, Kwonsik; Shin, Un-sub; Hwang, Dongjin
Issue Date
2022
Publisher
The Linguistic Association of Korea (대한언어학회)
Keywords
adversarial examples; case alternation; deep learning; intended noise; robustness; language model; evaluation
Citation
The Linguistic Association of Korea Journal (언어학), v.30, no.1, pp.45-72
Indexed
KCI
Journal Title
언어학 (The Linguistic Association of Korea Journal)
Volume
30
Number
1
Start Page
45
End Page
72
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/140488
DOI
10.24303/lakdoi.2022.30.1.45
ISSN
1225-7141
Abstract
Song, Sanghoun; Noh, Kang San; Park, Kwonsik; Shin, Un-sub & Hwang, Dongjin. (2022). Adversarial example-based evaluation of how language models understand Korean case alternation. The Linguistic Association of Korea Journal, 30(1), 45-72. In the field of deep learning-based language understanding, adversarial examples are deliberately constructed data points that differ only slightly from the original examples. The contrast between an original and an adversarial example is barely perceptible to human readers, yet the perturbation notoriously degrades machine performance. Adversarial examples therefore make it possible to assess whether, and how robustly, a specific deep learning architecture (e.g., a language model) works. Among the many layers of linguistic structure, this study focuses on a morpho-syntactic phenomenon in Korean, namely case alternation. We created a set of adversarial examples targeting case alternation and then tested the morpho-syntactic ability of neural language models. The instances of case alternation were extracted from the Sejong Electronic Dictionary, and mBERT and KR-BERT served as the language models. The results, measured by means of surprisal, indicate that the language models are unexpectedly good at discerning case alternation in Korean. In addition, the Korean-specific language model turns out to perform better than the multilingual model. These findings imply that in-depth linguistic knowledge is essential for creating adversarial examples in Korean.
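The surprisal metric mentioned in the abstract can be illustrated with a minimal sketch. Surprisal is the negative log probability a language model assigns to a token in context; the probabilities below are invented stand-ins for what a model such as KR-BERT or mBERT might assign to a case marker before and after an adversarial perturbation, not values from the study.

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits of an event with probability p: -log2(p)."""
    return -math.log2(p)

# Hypothetical model probabilities for a Korean case marker in context:
# in the original sentence the marker is plausible; in the adversarial
# variant the perturbed context makes it unexpected.
p_original = 0.50
p_adversarial = 0.05

print(surprisal(p_original))     # 1.0 bit
print(surprisal(p_adversarial))  # ~4.32 bits
```

Higher surprisal on the adversarial variant than on the original indicates that the model registers the perturbation, which is the kind of contrast the evaluation measures.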
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Liberal Arts > Department of Linguistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
