Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Empathetic video clip experience through timely multimodal interaction

Full metadata record
DC Field: Value
dc.contributor.author: Lee, Myunghee
dc.contributor.author: Kim, Gerard Jounghyun
dc.date.accessioned: 2021-09-05T05:33:31Z
dc.date.available: 2021-09-05T05:33:31Z
dc.date.created: 2021-06-15
dc.date.issued: 2014-09
dc.identifier.issn: 1783-7677
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/97468
dc.description.abstract: In this article, we describe a video clip playing system, named "Empatheater," that is controlled by multimodal interaction. As the video clip plays, the user can interact and emulate predefined video "events" through guidance and natural multimodal interaction (e.g. following the main character's motion, gestures, or voice). Without the timely interaction, the video stops, and the system shows guidance information on how to properly react and continue the playback. The purpose of Empatheater is to provide an indirect experience of the given video content by encouraging the user to mimic and empathize with the main character. The user is given the illusion (or suspended disbelief) of playing an active role in the unfolding video content. We first discuss the overall system architecture and the important components and features of Empatheater. In addition, we report on the results of experiments carried out to evaluate perceived empathy, presence/immersion, and subjective user experience, as compared to conventional passive video viewing, for different content genres and forms of interaction.
dc.language: English
dc.language.iso: en
dc.publisher: SPRINGER
dc.subject: SENSE
dc.title: Empathetic video clip experience through timely multimodal interaction
dc.type: Article
dc.contributor.affiliatedAuthor: Kim, Gerard Jounghyun
dc.identifier.doi: 10.1007/s12193-014-0151-6
dc.identifier.scopusid: 2-s2.0-84908093681
dc.identifier.wosid: 000342069200004
dc.identifier.bibliographicCitation: JOURNAL ON MULTIMODAL USER INTERFACES, v.8, no.3, pp. 273-288
dc.relation.isPartOf: JOURNAL ON MULTIMODAL USER INTERFACES
dc.citation.title: JOURNAL ON MULTIMODAL USER INTERFACES
dc.citation.volume: 8
dc.citation.number: 3
dc.citation.startPage: 273
dc.citation.endPage: 288
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Cybernetics
dc.subject.keywordPlus: SENSE
dc.subject.keywordAuthor: Interactive video
dc.subject.keywordAuthor: Multimodality
dc.subject.keywordAuthor: User guidance
dc.subject.keywordAuthor: Immersion
dc.subject.keywordAuthor: Presence
dc.subject.keywordAuthor: Subjective user experience
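The abstract describes an event-gated playback loop: the clip plays until a predefined event point, pauses there with guidance, and resumes only if the expected multimodal interaction occurs in time. The paper's record here does not disclose any implementation, so the following is a minimal illustrative sketch of that control flow only; the class, its methods, and the event tuples are all hypothetical names, not Empatheater's actual API.

```python
import time

class EventGatedPlayer:
    """Illustrative sketch (not the paper's code) of an event-gated
    playback loop: play up to each predefined event, show guidance,
    and resume only after the expected interaction is detected."""

    def __init__(self, events):
        # events: list of (timestamp_sec, expected_action, guidance_text)
        self.events = sorted(events)
        self.position = 0.0  # current playback position in seconds

    def play(self, detect_action, timeout=10.0):
        """detect_action is a stand-in for a multimodal recognizer
        (gesture/voice); it returns the currently detected action label."""
        for timestamp, expected, guidance in self.events:
            self.position = timestamp  # clip has played up to the event point
            print(f"[{timestamp:>5.1f}s] paused, guidance: {guidance}")
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if detect_action() == expected:
                    print(f"  matched '{expected}', resuming playback")
                    break
                time.sleep(0.05)
            else:
                # no timely interaction: the video stays stopped
                print(f"  no '{expected}' detected in time; video stopped")
                return False
        return True
```

For example, `EventGatedPlayer([(12.0, "wave", "Wave like the main character")]).play(lambda: "wave")` would pause at 12 s, immediately match the stubbed recognizer, and return `True`.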
Files in This Item
There are no files associated with this item.
Appears in Collections: Graduate School > Department of Computer Science and Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
