Detailed Information

An Experimental Investigation of Discourse Expectations in Neural Language Models

Authors
Yi, E.; Cho, H.; Song, S.
Issue Date
2022
Publisher
Korean Society for the Study of English Language and Linguistics
Keywords
LSTM; neural language model; next sentence prediction; surprisal; BERT; coreference resolution; discourse expectation; GPT-2; implicit causality bias
Citation
Korean Journal of English Language and Linguistics, v.22, pp.1101 - 1115
Indexed
SCOPUS
KCI
Journal Title
Korean Journal of English Language and Linguistics
Volume
22
Start Page
1101
End Page
1115
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/147001
DOI
10.15738/kjell.22..202210.1101
ISSN
1598-1398
Abstract
The present study reports on three language processing experiments with the most up-to-date neural language models, conducted from a psycholinguistic perspective. We investigated whether and how discourse expectations demonstrated in the psycholinguistics literature are manifested in neural language models, using models whose architectures and assumptions are considered most appropriate for the given language processing tasks. We first attempted a general assessment of a neural model's discourse expectations about story continuity or coherence (Experiment 1), based on the next sentence prediction module of the bidirectional transformer-based model BERT (Devlin et al. 2019). We then studied language models' expectations about reference continuity in discursive contexts in both comprehension (Experiment 2) and production (Experiment 3) settings, based on so-called Implicit Causality biases, using the unidirectional (left-to-right) RNN-based model LSTM (Hochreiter and Schmidhuber 1997) and the transformer-based generation model GPT-2 (Radford et al. 2019), respectively. The results of the three experiments showed, first, that neural language models are highly successful in distinguishing between reasonably expected and unexpected story continuations in human communication, and second, that they exhibit human-like bias patterns in reference expectations in both comprehension and production contexts. These results suggest that language models can closely simulate the discourse processing features observed in psycholinguistic experiments with human speakers, and that, beyond simply functioning as a technology for practical purposes, they can serve as a useful research tool and/or object for the study of human discourse processing. © 2022 KASELL All rights reserved.
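
Note
The abstract names the two measures the experiments rely on, a next-sentence-prediction score from BERT and left-to-right surprisal from LSTM/GPT-2, but does not include implementation details. The following Python sketch only illustrates how such scores are typically computed with the HuggingFace Transformers library; it is not the authors' experimental code. The helper names nsp_probability and gpt2_surprisals, the checkpoints bert-base-uncased and gpt2, and the example sentences are illustrative assumptions.

# Illustrative sketch only (not the authors' code): scoring discourse
# expectations with off-the-shelf models via HuggingFace Transformers.
# Checkpoints (bert-base-uncased, gpt2) are stand-ins, not necessarily
# the versions used in the paper.
import math
import torch
from transformers import (
    BertTokenizer, BertForNextSentencePrediction,
    GPT2Tokenizer, GPT2LMHeadModel,
)

def nsp_probability(context: str, continuation: str) -> float:
    """P(continuation follows context) from BERT's next-sentence-prediction head."""
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
    enc = tok(context, continuation, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits          # shape (1, 2): [IsNext, NotNext]
    return torch.softmax(logits, dim=-1)[0, 0].item()

def gpt2_surprisals(text: str):
    """Per-token surprisal, -log2 P(token | preceding context), under GPT-2."""
    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    surprisals = []
    for i in range(1, ids.size(1)):           # the first token has no left context
        lp = log_probs[0, i - 1, ids[0, i]].item()
        surprisals.append((tok.decode(ids[0, i]), -lp / math.log(2)))
    return surprisals

if __name__ == "__main__":
    # A coherent continuation should receive a higher NSP probability than an incoherent one.
    print(nsp_probability("John praised Mary.", "She had done a great job."))
    # Elevated surprisal on a pronoun signals a dispreferred next mention (implicit causality).
    print(gpt2_surprisals("John praised Mary because she had done a great job."))

In such a setup, comparing the NSP probability of an expected versus an unexpected continuation (Experiment 1), or the surprisal at the critical pronoun for bias-consistent versus bias-inconsistent referents (Experiments 2 and 3), yields the kind of expectation measures the abstract describes.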
Appears in Collections
College of Liberal Arts > Department of Linguistics > 1. Journal Articles
