Detailed Information


Multi-modal Korean Emotion Recognition with Consistency Regularization (Consistency Regularization을 적용한 멀티모달 한국어 감정인식)

Other Titles
Multi-modal Korean Emotion Recognition with Consistency Regularization
Authors
김정희; 강필성
Issue Date
2021
Publisher
대한산업공학회 (Korean Institute of Industrial Engineers)
Keywords
Speech Emotion Recognition; Wav2vec 2.0; Multi-Modal Emotion Recognition
Citation
대한산업공학회지 (Journal of the Korean Institute of Industrial Engineers), v.47, no.6, pp.549-559
Indexed
KCI
Journal Title
대한산업공학회지 (Journal of the Korean Institute of Industrial Engineers)
Volume
47
Number
6
Start Page
549
End Page
559
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/144790
ISSN
1225-0988
Abstract
Recently, the demand for artificial intelligence-based voice services, which identify and appropriately respond to user needs based on voice, is increasing. In particular, technology for recognizing emotion, the non-verbal information in the human voice, is receiving significant attention as a way to improve the quality of voice services. Accordingly, deep learning-based speech emotion recognition models are actively studied on rich English data, and a multi-modal emotion recognition framework with a speech recognition module has been proposed to utilize both voice and text information. However, a framework with a speech recognition module has a disadvantage in real environments where ambient noise exists: its performance decreases as the speech recognition rate decreases. In addition, it is challenging to apply deep learning-based models to Korean emotion recognition because, unlike English, Korean emotion data is not abundant. To address this drawback, we propose a consistency regularization learning methodology that allows the model to reflect the difference between the content of the speech and the text extracted by the speech recognition module. We also adapt self-supervised pre-trained models such as Wav2vec 2.0 and HanBERT to the framework, considering the limited Korean emotion data. Our experimental results show that the framework with pre-trained models yields better performance than a model trained with speech only on a Korean multi-modal emotion dataset, and the proposed learning methodology minimizes the performance degradation caused by poorly performing speech recognition modules.
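As an illustration of the idea described in the abstract, the following is a minimal sketch (PyTorch) of a multi-modal emotion classifier trained with a consistency-regularization term between the prediction obtained from the reference transcript and the prediction obtained from the ASR transcript. The placeholder encoders, feature dimensions, fusion scheme, loss weighting (lam), and names such as MultiModalEmotionClassifier and consistency_loss are assumptions for illustration, not the authors' exact formulation; in the paper, the speech and text features come from pre-trained Wav2vec 2.0 and HanBERT encoders.

```python
# Hypothetical sketch of consistency regularization for multi-modal
# (speech + text) emotion recognition. Dimensions, fusion, and loss
# weighting are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalEmotionClassifier(nn.Module):
    def __init__(self, audio_dim=768, text_dim=768, hidden=256, n_classes=7):
        super().__init__()
        # Placeholder projections; in practice the inputs would be pooled
        # features from pre-trained Wav2vec 2.0 (speech) and HanBERT (text).
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_classes))

    def forward(self, audio_feat, text_feat):
        # Simple concatenation fusion of the two modalities.
        fused = torch.cat([self.audio_proj(audio_feat), self.text_proj(text_feat)], dim=-1)
        return self.classifier(fused)


def consistency_loss(model, audio_feat, gold_text_feat, asr_text_feat, labels, lam=0.5):
    """Cross-entropy on the reference-transcript branch plus a KL term that
    penalizes disagreement with the ASR-transcript branch."""
    logits_gold = model(audio_feat, gold_text_feat)
    logits_asr = model(audio_feat, asr_text_feat)
    ce = F.cross_entropy(logits_gold, labels)
    kl = F.kl_div(
        F.log_softmax(logits_asr, dim=-1),
        F.softmax(logits_gold, dim=-1).detach(),  # treat the gold branch as target
        reduction="batchmean",
    )
    return ce + lam * kl


if __name__ == "__main__":
    model = MultiModalEmotionClassifier()
    audio = torch.randn(8, 768)      # stand-in for pooled Wav2vec 2.0 features
    gold_text = torch.randn(8, 768)  # stand-in for HanBERT features of the reference transcript
    asr_text = torch.randn(8, 768)   # stand-in for HanBERT features of the ASR transcript
    labels = torch.randint(0, 7, (8,))
    loss = consistency_loss(model, audio, gold_text, asr_text, labels)
    loss.backward()
    print(float(loss))
```

In this sketch the KL term is what keeps predictions from the noisy ASR text close to those from the reference text; the distance measure and weighting actually used in the paper may differ.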
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Industrial and Management Engineering > 1. Journal Articles


Related Researcher

Kang, Pil sung
College of Engineering (School of Industrial and Management Engineering)
