Detailed Information


Multimodal Emotion Recognition Fusion Analysis Adapting BERT With Heterogeneous Feature Unification

Full metadata record
DC Field: Value
dc.contributor.author: Lee, Sanghyun
dc.contributor.author: Han, David K.
dc.contributor.author: Ko, Hanseok
dc.date.accessioned: 2021-12-07T10:41:47Z
dc.date.available: 2021-12-07T10:41:47Z
dc.date.created: 2021-08-30
dc.date.issued: 2021-06
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/130068
dc.description.abstract: Human communication includes rich emotional content, so the development of multimodal emotion recognition plays an important role in communication between humans and computers. Because of the complex emotional characteristics of a speaker, emotion recognition remains a challenge, particularly in capturing emotional cues across a variety of modalities such as speech, facial expressions, and language. Audio and visual cues are particularly vital for a human observer in understanding emotions. However, most previous work on emotion recognition has been based solely on linguistic information, which can overlook various forms of nonverbal information. In this paper, we present a new multimodal emotion recognition approach that improves the BERT model for emotion recognition by combining it with heterogeneous features based on language, audio, and visual modalities. Specifically, we adapt the BERT model to accommodate the heterogeneous features of the audio and visual modalities. We introduce the Self-Multi-Attention Fusion module, the Multi-Attention Fusion module, and the Video Fusion module, which are attention-based multimodal fusion mechanisms built on the recently proposed transformer architecture. We explore the optimal ways to combine fine-grained representations of audio and visual features into a common embedding while fine-tuning a pre-trained BERT model jointly with these modalities. In our experiments, we evaluate on the commonly used CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets for multimodal sentiment analysis. Ablation analysis indicates that the audio and visual components make a significant contribution to the recognition results, suggesting that these modalities contain highly complementary information for sentiment analysis based on video input. Our method achieves state-of-the-art performance on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Multimodal Emotion Recognition Fusion Analysis Adapting BERT With Heterogeneous Feature Unification
dc.type: Article
dc.contributor.affiliatedAuthor: Ko, Hanseok
dc.identifier.doi: 10.1109/ACCESS.2021.3092735
dc.identifier.scopusid: 2-s2.0-85112214088
dc.identifier.wosid: 000674231500001
dc.identifier.bibliographicCitation: IEEE ACCESS, v.9, pp.94557 - 94572
dc.relation.isPartOf: IEEE ACCESS
dc.citation.title: IEEE ACCESS
dc.citation.volume: 9
dc.citation.startPage: 94557
dc.citation.endPage: 94572
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.subject.keywordPlus: SPEECH
dc.subject.keywordAuthor: BERT
dc.subject.keywordAuthor: Bit error rate
dc.subject.keywordAuthor: Computer architecture
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Emotion recognition
dc.subject.keywordAuthor: Feature extraction
dc.subject.keywordAuthor: Multimodal emotion recognition
dc.subject.keywordAuthor: Sentiment analysis
dc.subject.keywordAuthor: Visualization
dc.subject.keywordAuthor: attention based multimodal
dc.subject.keywordAuthor: heterogeneous features
dc.subject.keywordAuthor: transformer
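
Illustrative sketch: the abstract above describes attention-based fusion of BERT text representations with heterogeneous audio and visual features, but this record contains no implementation details. The following is only a minimal, hypothetical PyTorch sketch of the general idea (cross-modal multi-head attention over projected audio/visual sequences); every module name, feature dimension, and the fusion layout are illustrative assumptions and do not reproduce the paper's Self-Multi-Attention Fusion, Multi-Attention Fusion, or Video Fusion modules.

    # Hypothetical sketch of attention-based multimodal fusion (not the paper's code).
    import torch
    import torch.nn as nn

    class CrossModalFusion(nn.Module):
        """Fuse audio/visual sequences into a BERT-sized text embedding space
        with multi-head attention. All dimensions are assumed for illustration."""

        def __init__(self, text_dim=768, audio_dim=74, visual_dim=35, n_heads=8):
            super().__init__()
            # Project heterogeneous audio/visual features into the text embedding space.
            self.audio_proj = nn.Linear(audio_dim, text_dim)
            self.visual_proj = nn.Linear(visual_dim, text_dim)
            # Text tokens act as queries over each nonverbal stream.
            self.audio_attn = nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
            self.visual_attn = nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(text_dim)
            self.head = nn.Linear(text_dim, 1)  # e.g. a sentiment regression head

        def forward(self, text_emb, audio_feat, visual_feat):
            # text_emb:    (batch, T_text, text_dim), e.g. BERT last hidden states
            # audio_feat:  (batch, T_audio, audio_dim)
            # visual_feat: (batch, T_visual, visual_dim)
            a = self.audio_proj(audio_feat)
            v = self.visual_proj(visual_feat)
            # Cross-modal attention: text queries, audio/visual keys and values.
            a_ctx, _ = self.audio_attn(text_emb, a, a)
            v_ctx, _ = self.visual_attn(text_emb, v, v)
            fused = self.norm(text_emb + a_ctx + v_ctx)  # residual fusion
            return self.head(fused.mean(dim=1))          # simple mean pooling

    if __name__ == "__main__":
        model = CrossModalFusion()
        out = model(torch.randn(2, 20, 768),   # dummy text embeddings
                    torch.randn(2, 50, 74),    # dummy audio features
                    torch.randn(2, 50, 35))    # dummy visual features
        print(out.shape)  # torch.Size([2, 1])

In this toy layout, the residual sum keeps the fused representation in the text embedding space, which is one common way to attach additional modalities to a pre-trained language model before fine-tuning.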
Files in This Item
There are no files associated with this item.
Appears in Collections: College of Engineering > School of Electrical Engineering > 1. Journal Articles



