Detailed Information


Deep learning can contrast the minimal pairs of syntactic data

Other Titles
Deep learning can contrast the minimal pairs of syntactic data
Authors
Kwonsik Park; Myung-Kwan Park; Sanghoun Song
Issue Date
2021
Publisher
Institute for the Study of Language and Information, Kyung Hee University
Keywords
deep learning; BERT; syntactic judgment; minimal pair; contrast
Citation
Linguistic Research, v.38, no.2, pp.395-424
Indexed
SCOPUS
KCI
Journal Title
Linguistic Research (언어연구)
Volume
38
Number
2
Start Page
395
End Page
424
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/129779
DOI
10.17250/khisli.38.2.202106.008
ISSN
1229-1374
Abstract
The present work aims to assess the feasibility of deep learning as a tool for investigating syntactic phenomena. To this end, the study addresses three research questions: (i) whether deep learning can detect syntactically inappropriate constructions, (ii) whether deep learning's acceptability judgments are accountable, and (iii) whether deep learning's acceptability judgments resemble human judgments. As a proxy for a deep learning language model, this study chooses BERT. The test materials comprise syntactically contrasted pairs of English sentences drawn from three existing test suites. The first consists of 196 grammatical–ungrammatical minimal pairs from DeKeyser (2000). The second consists of examples from four published syntax textbooks, excerpted from Warstadt et al. (2019). The last is extracted from Sprouse et al. (2013), which collects examples reported in the theoretical linguistics journal Linguistic Inquiry. The two BERT models, base BERT and large BERT, are assessed by judging the acceptability of items in the test suites using an evaluation metric, surprisal, which measures how 'surprised' a model is when it encounters a word in a sequence of words, i.e., a sentence. The results are analyzed within two frameworks: directionality and repulsion. The directionality results reveal that the two versions of BERT are overall competent at distinguishing ungrammatical sentences from grammatical ones. The statistical results for both repulsion and directionality also reveal that the two variants of BERT do not differ significantly. Regarding repulsion, correct judgments and incorrect ones differ significantly. Additionally, the repulsion for the first test suite, excerpted from items for testing learners' grammaticality judgments, is higher than for the other test suites, which are excerpted from syntax textbooks and the published literature.
This study also compares BERT's acceptability judgments with the magnitude estimation results reported in Sprouse et al. (2013) to examine whether deep learning's syntactic knowledge is akin to human knowledge. Error analyses of incorrectly judged items reveal that there are some syntactic constructions that the two BERTs have trouble learning, which indicates that BERT's acceptability judgments are not randomly distributed.
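The surprisal metric described in the abstract can be sketched in a few lines: the surprisal of a word is the negative log of its probability in context, and a sentence's total surprisal is the sum over its words. The per-word probabilities below are illustrative placeholders, not actual BERT outputs, and the example sentences are hypothetical.

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of a word: -log2 of its probability in context."""
    return -math.log2(p)

def sentence_surprisal(word_probs):
    """Total surprisal of a sentence: sum of per-word surprisals."""
    return sum(surprisal(p) for p in word_probs)

# Hypothetical per-word probabilities a masked language model might assign
# to the members of a minimal pair (illustrative numbers only).
grammatical = [0.6, 0.4, 0.5, 0.7]     # e.g., "The key is here."
ungrammatical = [0.6, 0.4, 0.05, 0.7]  # e.g., "The key are here."

# The model should be more "surprised" by the ungrammatical member,
# i.e., assign it a higher total surprisal.
print(sentence_surprisal(ungrammatical) > sentence_surprisal(grammatical))
```

In the paper's setup, such probabilities would come from a masked language model like BERT; here the comparison merely shows how directionality (which member of a pair receives higher surprisal) can be read off the metric.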
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Liberal Arts > Department of Linguistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
