
딥러닝 언어모형을 활용한 영어 비결속 재귀사 검증 (Probing the Unbound Reflexives in English via the Deep Learning-based Language Model)

Other Titles
Probing the Unbound Reflexives in English via the Deep Learning-based Language Model
Authors
송상헌; 이규민; 김경민
Issue Date
2021
Publisher
한국언어과학회 (The Korean Association of Language Sciences)
Keywords
BERT; BYU Corpora; binding theory; deep learning; surprisal; unbound reflexives
Citation
언어과학, v.28, no.3, pp.51 - 78
Indexed
KCI
Journal Title
언어과학
Volume
28
Number
3
Start Page
51
End Page
78
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/138111
DOI
10.14384/kals.2021.28.3.051
ISSN
1225-2522
Abstract
This article concerns the so-called unbound reflexive pronouns in English: self-forms that lack any sentence-internal antecedent, running counter to the classic Binding Principle A (Chomsky, 1981). To investigate the distributional properties of English unbound reflexives empirically, the present study draws on the BYU corpora, including COCA, COHA, and GloWbE, to collect relevant data, and feeds the collected data into BERT, a deep learning-based language model for natural language processing, to measure how surprising the unbound reflexive forms are in various types of contexts compared to their pronominal counterparts. Remarkably, the results replicate the findings and claims of existing theoretical and corpus studies regarding the distribution of unbound reflexives in English. This suggests that deep learning techniques can be fruitfully used to explore syntactic phenomena in human languages.
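The surprisal measure the abstract refers to can be sketched as follows: a word's surprisal is the negative log probability the model assigns to it in context. This is a minimal illustration, not the study's pipeline — the candidate words and score vector below are hypothetical stand-ins for the predictions BERT would produce at a masked position, and the example sentence is invented for illustration.

```python
import math

def surprisal(logits, target_index):
    """Surprisal in bits of one candidate word: -log2 p(word | context),
    where p is obtained by a softmax over the model's score vector."""
    m = max(logits)                                # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    p = exps[target_index] / sum(exps)
    return -math.log2(p)

# Hypothetical scores a masked language model such as BERT might assign
# to fillers for "The paper was written by Mary and [MASK]."
# (toy numbers; the study used BERT's actual predicted distributions)
candidates = ["herself", "her", "himself", "him"]
logits = [3.0, 2.5, 1.0, 0.5]

for i, word in enumerate(candidates):
    print(f"{word}: {surprisal(logits, i):.2f} bits")
```

A lower surprisal for a reflexive than for its pronominal counterpart in a given context indicates that the model finds the reflexive more expected there, which is the comparison the study runs across context types.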
Appears in Collections
College of Liberal Arts > Department of Linguistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
