Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Jaedong | - |
dc.contributor.author | Lee, Changhyeon | - |
dc.contributor.author | Kim, Gerard Jounghyun | - |
dc.date.accessioned | 2021-09-03T02:38:02Z | - |
dc.date.available | 2021-09-03T02:38:02Z | - |
dc.date.created | 2021-06-16 | - |
dc.date.issued | 2017-09 | - |
dc.identifier.issn | 1783-7677 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/82467 | - |
dc.description.abstract | We consider a multimodal method for smart-watch text entry, called "Vouch," which combines touch and voice input. Touch input is familiar and ergonomically accessible, but is limited by the fat-finger problem (equivalently, by the small screen size) and is sensitive to user motion. Voice input is largely immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, these characteristics complement each other under the difficult operating conditions of a smart watch. With Vouch, the user makes an approximate touch among the densely packed alphabetic keys; the accompanying voice input then disambiguates the target from among the possible candidates, if it does not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to those of the conventional unimodal method, focusing in particular on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach yields more reliable recognition performance and higher usability. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | SPRINGER | - |
dc.subject | MOBILE | - |
dc.subject | USABILITY | - |
dc.title | Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Gerard Jounghyun | - |
dc.identifier.doi | 10.1007/s12193-017-0246-y | - |
dc.identifier.scopusid | 2-s2.0-85021063208 | - |
dc.identifier.wosid | 000408247600005 | - |
dc.identifier.bibliographicCitation | JOURNAL ON MULTIMODAL USER INTERFACES, v.11, no.3, pp.289 - 299 | - |
dc.relation.isPartOf | JOURNAL ON MULTIMODAL USER INTERFACES | - |
dc.citation.title | JOURNAL ON MULTIMODAL USER INTERFACES | - |
dc.citation.volume | 11 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 289 | - |
dc.citation.endPage | 299 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Cybernetics | - |
dc.subject.keywordPlus | MOBILE | - |
dc.subject.keywordPlus | USABILITY | - |
dc.subject.keywordAuthor | Multimodal interaction | - |
dc.subject.keywordAuthor | Voice input | - |
dc.subject.keywordAuthor | Touch input | - |
dc.subject.keywordAuthor | Smart watch input | - |
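The abstract's fusion idea (an approximate touch narrows the set of candidate keys, and the voice hypothesis selects among them) can be sketched as follows. This is a minimal illustration of the concept only; the key layout, search radius, and scoring rule are illustrative assumptions, not details from the paper:

```python
import math

# Hypothetical key centers on a small watch keyboard (x, y in pixels).
KEY_CENTERS = {
    "q": (10, 10), "w": (30, 10), "e": (50, 10),
    "a": (10, 30), "s": (30, 30), "d": (50, 30),
}

def touch_candidates(touch_xy, radius=25):
    """Return keys whose centers lie within `radius` of the touch point."""
    tx, ty = touch_xy
    return {
        key: math.hypot(cx - tx, cy - ty)
        for key, (cx, cy) in KEY_CENTERS.items()
        if math.hypot(cx - tx, cy - ty) <= radius
    }

def disambiguate(touch_xy, voice_scores):
    """Pick the candidate key best supported by both modalities.

    `voice_scores` maps letters to recognizer confidences in [0, 1];
    touch distance is turned into a proximity weight so the two
    evidence sources can be combined multiplicatively (an assumed
    fusion rule, not necessarily the paper's).
    """
    best, best_score = None, -1.0
    for key, dist in touch_candidates(touch_xy).items():
        proximity = 1.0 / (1.0 + dist)  # closer touch -> higher weight
        score = proximity * voice_scores.get(key, 0.0)
        if score > best_score:
            best, best_score = key, score
    return best

# An ambiguous touch between "w" and "s" is resolved by the voice evidence.
print(disambiguate((28, 22), {"w": 0.2, "s": 0.9}))  # -> s
```

Swapping the voice confidences (e.g. `{"w": 0.9, "s": 0.1}`) flips the result to `w`, which is the complementary behavior the abstract describes: neither modality alone decides, but together they do.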
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
(02841) 145 Anam-ro, Seongbuk-gu, Seoul, Korea | Tel: 02-3290-1114
COPYRIGHT © 2021 Korea University. All Rights Reserved.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.