Variational Reward Estimator Bottleneck: Towards Robust Reward Estimator for Multidomain Task-Oriented Dialogue
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Jeiyoon | - |
dc.contributor.author | Lee, Chanhee | - |
dc.contributor.author | Park, Chanjun | - |
dc.contributor.author | Kim, Kuekyeng | - |
dc.contributor.author | Lim, Heuiseok | - |
dc.date.accessioned | 2022-02-27T21:40:45Z | - |
dc.date.available | 2022-02-27T21:40:45Z | - |
dc.date.created | 2022-02-09 | - |
dc.date.issued | 2021-07 | - |
dc.identifier.issn | 2076-3417 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/137179 | - |
dc.description.abstract | Despite the significant effectiveness of adversarial training approaches to multidomain task-oriented dialogue systems, adversarial inverse reinforcement learning of the dialogue policy frequently fails to balance the performance of the reward estimator and the policy generator. During optimization, the reward estimator frequently overwhelms the policy generator, resulting in excessively uninformative gradients. We propose the variational reward estimator bottleneck (VRB), a novel and effective regularization strategy that constrains unproductive information flow between the inputs and the reward estimator. The VRB captures discriminative features by exploiting an information bottleneck on mutual information. Quantitative analysis on a multidomain task-oriented dialogue dataset demonstrates that the VRB significantly outperforms previous studies. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.title | Variational Reward Estimator Bottleneck: Towards Robust Reward Estimator for Multidomain Task-Oriented Dialogue | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Heuiseok | - |
dc.identifier.doi | 10.3390/app11146624 | - |
dc.identifier.scopusid | 2-s2.0-85111354381 | - |
dc.identifier.wosid | 000675999500001 | - |
dc.identifier.bibliographicCitation | APPLIED SCIENCES-BASEL, v.11, no.14 | - |
dc.relation.isPartOf | APPLIED SCIENCES-BASEL | - |
dc.citation.title | APPLIED SCIENCES-BASEL | - |
dc.citation.volume | 11 | - |
dc.citation.number | 14 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Materials Science | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.subject.keywordAuthor | dialogue policy | - |
dc.subject.keywordAuthor | inverse reinforcement learning | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | task-oriented dialogue | - |
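The abstract describes regularizing the reward estimator with an information bottleneck on the mutual information between inputs and a latent code. A common way to realize this (e.g. in variational-bottleneck discriminators) is to upper-bound the mutual information by the KL divergence between a Gaussian encoder q(z|x) and a standard normal prior, and to enforce the constraint with a Lagrange multiplier updated by dual gradient ascent. The following is a minimal NumPy sketch of just those two pieces; the function names, the diagonal-Gaussian encoder, and the step size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kl(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent
    dimensions and averaged over the batch. This quantity upper-bounds
    the mutual information I(X; Z) under the standard normal prior."""
    per_dim = 0.5 * (mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma))
    return per_dim.sum(axis=1).mean()

def update_beta(beta, kl, info_constraint, step_size=1e-3):
    """Dual gradient ascent on the Lagrange multiplier: beta grows when
    the KL exceeds the information budget and is clipped at zero."""
    return max(0.0, beta + step_size * (kl - info_constraint))
```

In a bottlenecked reward estimator, `beta * gaussian_kl(mu, sigma)` would be added to the estimator's loss, so that when the encoder passes more information than the budget `info_constraint` allows, the growing multiplier pushes the latent code back toward the prior and keeps the estimator from overwhelming the policy generator.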
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
(02841) 145 Anam-ro, Seongbuk-gu, Seoul, Republic of Korea | Tel. 02-3290-1114
COPYRIGHT © 2021 Korea University. All Rights Reserved.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.