Parametric Shape Estimation of Human Body Under Wide Clothing
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lu, Yucheng | - |
dc.contributor.author | Cha, Jin-Hyuck | - |
dc.contributor.author | Youm, Se-Kyoung | - |
dc.contributor.author | Jung, Seung-Won | - |
dc.date.accessioned | 2022-03-12T07:40:22Z | - |
dc.date.available | 2022-03-12T07:40:22Z | - |
dc.date.created | 2022-01-20 | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 1520-9210 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/138699 | - |
dc.description.abstract | The shape of the human body plays an important role in many applications, such as those involving personal healthcare and virtual clothing try-ons. However, accurate body shape measurements typically require the user to be wearing a minimal amount of clothing, which is not practical in many situations. To resolve this issue using deep learning techniques, we need a paired dataset of ground-truth naked human body shapes and their corresponding color images with clothes. As it is practically impossible to collect enough of this kind of data from real-world environments to train a deep neural network, in this paper, we present the Synthetic dataset of Human Avatars under wiDE gaRment (SHADER). The SHADER dataset consists of 300,000 paired ground-truth naked and dressed images of 1,500 synthetic humans with different body shapes, poses, garments, skin tones, and backgrounds. To take full advantage of SHADER, we propose a novel silhouette confidence measure and show that our silhouette confidence prediction network can help improve the performance of state-of-the-art shape estimation networks for human bodies under clothing. The experimental results demonstrate the effectiveness of the proposed approach. The code and dataset are available at https://github.com/YCL92/SHADER. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | HIP RATIO | - |
dc.subject | POSE | - |
dc.title | Parametric Shape Estimation of Human Body Under Wide Clothing | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Jung, Seung-Won | - |
dc.identifier.doi | 10.1109/TMM.2020.3029941 | - |
dc.identifier.scopusid | 2-s2.0-85118188258 | - |
dc.identifier.wosid | 000709093100018 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON MULTIMEDIA, v.23, pp.3657 - 3669 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON MULTIMEDIA | - |
dc.citation.title | IEEE TRANSACTIONS ON MULTIMEDIA | - |
dc.citation.volume | 23 | - |
dc.citation.startPage | 3657 | - |
dc.citation.endPage | 3669 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | HIP RATIO | - |
dc.subject.keywordPlus | POSE | - |
dc.subject.keywordAuthor | Shape | - |
dc.subject.keywordAuthor | Clothing | - |
dc.subject.keywordAuthor | Three-dimensional displays | - |
dc.subject.keywordAuthor | Two dimensional displays | - |
dc.subject.keywordAuthor | Biological system modeling | - |
dc.subject.keywordAuthor | Pose estimation | - |
dc.subject.keywordAuthor | Silhouette confidence | - |
dc.subject.keywordAuthor | convolutional neural network | - |
dc.subject.keywordAuthor | human shape estimation | - |
dc.subject.keywordAuthor | synthetic dataset | - |
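The abstract describes SHADER as 300,000 paired ground-truth naked and dressed images of 1,500 synthetic humans; the authoritative file layout is defined in the linked repository (https://github.com/YCL92/SHADER). As a minimal sketch only, assuming a hypothetical layout in which each rendering appears under parallel `naked/` and `dressed/` folders with matching file names, the pairs needed for supervised training could be collected like this:

```python
from pathlib import Path


def collect_pairs(root: str) -> list[tuple[Path, Path]]:
    """Match naked/dressed renderings by shared file name.

    Assumes a HYPOTHETICAL directory layout (not confirmed by the record):
        root/naked/<subject>_<view>.png
        root/dressed/<subject>_<view>.png
    The actual SHADER layout is documented in the GitHub repository.
    """
    naked_dir = Path(root) / "naked"
    dressed_dir = Path(root) / "dressed"
    pairs = []
    for naked in sorted(naked_dir.glob("*.png")):
        dressed = dressed_dir / naked.name
        if dressed.exists():  # keep only renderings present in both folders
            pairs.append((naked, dressed))
    return pairs
```

The function name and folder names are illustrative assumptions; only complete pairs are kept, since a dressed image without its ground-truth naked counterpart cannot supervise shape estimation.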
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.