Detailed Information

Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions

Full metadata record
DC Field                                  Value
dc.contributor.author                     Lee, Jaedong
dc.contributor.author                     Lee, Changhyeon
dc.contributor.author                     Kim, Gerard Jounghyun
dc.date.accessioned                       2021-09-03T02:38:02Z
dc.date.available                         2021-09-03T02:38:02Z
dc.date.created                           2021-06-16
dc.date.issued                            2017-09
dc.identifier.issn                        1783-7677
dc.identifier.uri                         https://scholar.korea.ac.kr/handle/2021.sw.korea/82467
dc.description.abstract                   We consider a multimodal method for smart-watch text entry, called "Vouch," which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.
dc.language                               English
dc.language.iso                           en
dc.publisher                              SPRINGER
dc.subject                                MOBILE
dc.subject                                USABILITY
dc.title                                  Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions
dc.type                                   Article
dc.contributor.affiliatedAuthor           Kim, Gerard Jounghyun
dc.identifier.doi                         10.1007/s12193-017-0246-y
dc.identifier.scopusid                    2-s2.0-85021063208
dc.identifier.wosid                       000408247600005
dc.identifier.bibliographicCitation       JOURNAL ON MULTIMODAL USER INTERFACES, v.11, no.3, pp.289 - 299
dc.relation.isPartOf                      JOURNAL ON MULTIMODAL USER INTERFACES
dc.citation.title                         JOURNAL ON MULTIMODAL USER INTERFACES
dc.citation.volume                        11
dc.citation.number                        3
dc.citation.startPage                     289
dc.citation.endPage                       299
dc.type.rims                              ART
dc.type.docType                           Article
dc.description.journalClass               1
dc.description.journalRegisteredClass     scie
dc.description.journalRegisteredClass     scopus
dc.relation.journalResearchArea           Computer Science
dc.relation.journalWebOfScienceCategory   Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory   Computer Science, Cybernetics
dc.subject.keywordPlus                    MOBILE
dc.subject.keywordPlus                    USABILITY
dc.subject.keywordAuthor                  Multimodal interaction
dc.subject.keywordAuthor                  Voice input
dc.subject.keywordAuthor                  Touch input
dc.subject.keywordAuthor                  Smart watch input
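
The abstract above describes the core Vouch idea: an approximate touch among densely packed keys is disambiguated by concurrent voice input. The Python sketch below is only a rough illustration of that fusion idea, not the paper's implementation; the key layout, the spread parameter TOUCH_SIGMA_MM, and the function names are assumptions made for the example.

    """Illustrative sketch (not the authors' code) of fusing an approximate
    touch location with voice-recognition candidates to pick one character.

    Assumptions (hypothetical): key centres are known in millimetres, the
    speech recogniser returns letter -> confidence hypotheses, and a simple
    Gaussian model turns touch distance into a likelihood.
    """
    import math

    # Hypothetical key centres (x, y) in mm for a few densely packed keys.
    KEY_CENTERS = {
        "q": (2.0, 2.0), "w": (6.0, 2.0), "e": (10.0, 2.0),
        "a": (4.0, 6.0), "s": (8.0, 6.0), "d": (12.0, 6.0),
    }

    TOUCH_SIGMA_MM = 3.0  # assumed spread of touch error ("fat finger")


    def touch_likelihoods(touch_xy):
        """Turn a touch point into per-key likelihoods via a Gaussian on distance."""
        scores = {}
        for key, (kx, ky) in KEY_CENTERS.items():
            d2 = (touch_xy[0] - kx) ** 2 + (touch_xy[1] - ky) ** 2
            scores[key] = math.exp(-d2 / (2 * TOUCH_SIGMA_MM ** 2))
        total = sum(scores.values())
        return {k: v / total for k, v in scores.items()}


    def fuse(touch_xy, voice_hypotheses):
        """Combine touch and voice evidence; voice_hypotheses maps letter -> confidence."""
        touch = touch_likelihoods(touch_xy)
        fused = {k: touch[k] * voice_hypotheses.get(k, 1e-6) for k in touch}
        return max(fused, key=fused.get)


    if __name__ == "__main__":
        # An ambiguous touch between 'w' and 'e', with voice favouring 'e'.
        print(fuse((8.0, 2.0), {"e": 0.7, "w": 0.2, "a": 0.1}))  # -> 'e'

In this toy run, a touch landing halfway between 'w' and 'e' is resolved to 'e' because the voice hypotheses favour it, mirroring the disambiguation behaviour the abstract describes.
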
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Computer Science and Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
