Detailed Information


Is c-command Machine-learnable?

Full metadata record
DC Field: Value
dc.contributor.author: 신운섭 (Shin, Unsub)
dc.contributor.author: 박명관 (Park, Myung-Kwan)
dc.contributor.author: 송상헌 (Song, Sanghoun)
dc.date.accessioned: 2021-12-06T10:41:47Z
dc.date.available: 2021-12-06T10:41:47Z
dc.date.created: 2021-08-31
dc.date.issued: 2021
dc.identifier.issn: 1225-7141
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/129827
dc.description.abstract: Shin, Unsub; Park, Myung-Kwan & Song, Sanghoun. (2021). Is c-command machine-learnable? The Linguistic Association of Korea Journal, 29(1), 183-204. Many psycholinguistic studies have tested whether pronouns and polarity items elicit additional processing cost when they are not c-commanded. The previous studies claim that the c-command constraint regulates the distribution of relevant syntactic objects. As such, the syntactic effects of the c-command relation are greatly affected by the types of licensing (e.g. quantificational binding) and reading comprehension patterns of subjects (e.g. linguistic illusion). The present study investigates the reading behavior of the language model BERT when the syntactic processing of relational information (i.e. X c-commands Y) is required. Specifically, our two experiments contrasted the BERT comprehension of a c-commanding licensor versus a non-c-commanding licensor with reflexive anaphora and negative polarity items. The analysis based on the information-theoretic measure of surprisal suggests that violations of the c-command constraint are unexpected for BERT representations. We conclude that deep learning models like BERT can learn the syntactic c-command restriction at least with respect to reflexive anaphors and negative polarity items. At the same time, BERT appeared to have some limitations in its flexibility to apply compensatory pragmatic reasoning when a non-c-commanding licensor intruded in the dependency structure.
dc.language: English
dc.language.iso: en
dc.publisher: 대한언어학회 (The Linguistic Association of Korea)
dc.title: Is c-command Machine-learnable?
dc.title.alternative: Is c-command Machine-learnable?
dc.type: Article
dc.contributor.affiliatedAuthor: 송상헌 (Song, Sanghoun)
dc.identifier.doi: 10.24303/lakdoi.2021.29.1.183
dc.identifier.bibliographicCitation: 언어학 (The Linguistic Association of Korea Journal), v.29, no.1, pp.183-204
dc.relation.isPartOf: 언어학
dc.citation.title: 언어학
dc.citation.volume: 29
dc.citation.number: 1
dc.citation.startPage: 183
dc.citation.endPage: 204
dc.type.rims: ART
dc.identifier.kciid: ART002707313
dc.description.journalClass: 2
dc.description.journalRegisteredClass: kci
dc.subject.keywordAuthor: c-command
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: BERT
dc.subject.keywordAuthor: surprisal
dc.subject.keywordAuthor: NPI
dc.subject.keywordAuthor: reflexive anaphor
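The abstract above evaluates BERT with the information-theoretic measure of surprisal, i.e. -log2 P(word | context): the less probable a model finds a word in context, the higher its surprisal. A minimal sketch of that quantity; the probability values below are invented for illustration and are not taken from the paper's experiments:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information-theoretic surprisal of an event with probability p, in bits."""
    return -math.log2(p)

# Hypothetical conditional probabilities P(next word | context) that a language
# model might assign to an NPI such as "ever" in two contexts.
p_licensed = 0.20    # licensor c-commands the NPI
p_unlicensed = 0.01  # only a non-c-commanding licensor is present

print(round(surprisal_bits(p_licensed), 2))    # ~2.32 bits
print(round(surprisal_bits(p_unlicensed), 2))  # ~6.64 bits
```

Under this measure, elevated surprisal at the critical word in the unlicensed condition is what the study reads as BERT treating the c-command violation as unexpected.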
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Liberal Arts > Department of Linguistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
