DeepKLM - A Computational Language Model-based Library for Syntactic Experiments (통사 실험을 위한 전산 언어모델 라이브러리)
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이규민 | - |
dc.contributor.author | 김성태 | - |
dc.contributor.author | 김현수 | - |
dc.contributor.author | 박권식 | - |
dc.contributor.author | 신운섭 | - |
dc.contributor.author | 왕규현 | - |
dc.contributor.author | 박명관 | - |
dc.contributor.author | 송상헌 | - |
dc.date.accessioned | 2021-08-30T05:15:58Z | - |
dc.date.available | 2021-08-30T05:15:58Z | - |
dc.date.created | 2021-06-17 | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 1738-1908 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/50713 | - |
dc.description.abstract | This paper introduces DeepKLM, a deep learning library for syntactic experiments. The library enables researchers to use a state-of-the-art deep computational language model based on BERT (Bidirectional Encoder Representations from Transformers). Written in Python, the library fills the masked part of a sentence with a specific token, similar to the Cloze task in traditional language experiments. Its output value, surprisal, relates to human language processing in terms of speed and complexity. The library additionally provides two visualization tools: a heatmap and an attention-head visualization. This article also presents two case studies, on NPIs and reflexives, employing the library. The library has room for improvement in that the BERT-based components are not entirely on par with human performance on these sentences. Despite such limits, the case studies imply that the library enables us to assess the language ability of both humans and deep learning machines. | - |
dc.language | Korean | - |
dc.language.iso | ko | - |
dc.publisher | 연세대학교 언어정보연구원 | - |
dc.title | DeepKLM - 통사 실험을 위한 전산 언어모델 라이브러리 - | - |
dc.title.alternative | DeepKLM - A Computational Language Model-based Library for Syntactic Experiments - | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | 송상헌 | - |
dc.identifier.doi | 10.20988/lfp.2021.52..265 | - |
dc.identifier.bibliographicCitation | 언어사실과 관점, v.52, pp.265 - 306 | - |
dc.relation.isPartOf | 언어사실과 관점 | - |
dc.citation.title | 언어사실과 관점 | - |
dc.citation.volume | 52 | - |
dc.citation.startPage | 265 | - |
dc.citation.endPage | 306 | - |
dc.type.rims | ART | - |
dc.identifier.kciid | ART002688747 | - |
dc.description.journalClass | 2 | - |
dc.description.journalRegisteredClass | kci | - |
dc.subject.keywordAuthor | BERT | - |
dc.subject.keywordAuthor | 언어모델 | - |
dc.subject.keywordAuthor | 서프라이절 | - |
dc.subject.keywordAuthor | 실험통사론 | - |
dc.subject.keywordAuthor | 말뭉치 | - |
dc.subject.keywordAuthor | BERT | - |
dc.subject.keywordAuthor | language model | - |
dc.subject.keywordAuthor | surprisal | - |
dc.subject.keywordAuthor | experimental syntax | - |
dc.subject.keywordAuthor | corpus | - |
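The abstract measures a model's fit to a sentence via surprisal, i.e. the negative log-probability a language model assigns to the token filling a masked slot. The following is a minimal illustrative sketch of that measure only; the probabilities are invented toy values, not actual BERT output, and this is not the DeepKLM API (which the record does not show).

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(token | context)."""
    return -math.log2(prob)

# Toy candidate distribution for a masked slot, e.g. a Cloze item like
# "The keys to the cabinet [MASK] on the table."
# (Hypothetical probabilities for illustration only.)
candidates = {"are": 0.70, "is": 0.25, "sat": 0.05}

for token, p in candidates.items():
    # Lower-probability fillers yield higher surprisal, which correlates
    # with slower, more effortful human processing.
    print(f"{token}: {surprisal(p):.2f} bits")
```

In an actual experiment, the probability for each candidate token would come from a masked language model's softmax output at the `[MASK]` position rather than from a hand-written dictionary.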