UFC-Net with Fully-Connected Layers and Hadamard Identity Skip Connection for Image Inpainting
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Chung-Il | - |
dc.contributor.author | Rew, Jehyeok | - |
dc.contributor.author | Cho, Yongjang | - |
dc.contributor.author | Hwang, Eenjun | - |
dc.date.accessioned | 2021-12-07T12:41:55Z | - |
dc.date.available | 2021-12-07T12:41:55Z | - |
dc.date.created | 2021-08-30 | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 1546-2218 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/130088 | - |
dc.description.abstract | Image inpainting is a technique in computer vision and artificial intelligence for plausibly filling in blank areas of an image by referring to their surrounding areas. Although inpainting performance has improved significantly with diverse convolutional neural network (CNN)-based models, these models still have difficulty filling in some erased areas because of the limited kernel size of the CNN. If the kernel is too narrow for the blank area, the model cannot consider the entire surrounding area, only part of it or none at all. This leads to typical inpainting problems such as pixel reconstruction failure and unintended filling. To alleviate this, we propose a novel inpainting model called UFC-net that reinforces two components of U-net. The first is a set of fully-connected latent networks in the middle of U-net that consider the entire surrounding area. The second is the Hadamard identity skip connection, which improves the model's attention on the blank areas and reduces computational cost. We performed extensive comparisons with other inpainting models on the Places2 dataset to evaluate the effectiveness of the proposed scheme and report representative results. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | TECH SCIENCE PRESS | - |
dc.title | UFC-Net with Fully-Connected Layers and Hadamard Identity Skip Connection for Image Inpainting | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Hwang, Eenjun | - |
dc.identifier.doi | 10.32604/cmc.2021.017633 | - |
dc.identifier.scopusid | 2-s2.0-85105634962 | - |
dc.identifier.wosid | 000648894900037 | - |
dc.identifier.bibliographicCitation | CMC-COMPUTERS MATERIALS & CONTINUA, v.68, no.3, pp.3447 - 3463 | - |
dc.relation.isPartOf | CMC-COMPUTERS MATERIALS & CONTINUA | - |
dc.citation.title | CMC-COMPUTERS MATERIALS & CONTINUA | - |
dc.citation.volume | 68 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 3447 | - |
dc.citation.endPage | 3463 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Materials Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
dc.subject.keywordAuthor | Image processing | - |
dc.subject.keywordAuthor | computer vision | - |
dc.subject.keywordAuthor | image inpainting | - |
dc.subject.keywordAuthor | image restoration | - |
dc.subject.keywordAuthor | generative adversarial nets | - |
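The abstract describes a Hadamard identity skip connection that replaces U-net's usual concatenation-based skip. The record does not give the exact formulation, so the sketch below is only an illustrative assumption: it combines encoder and decoder feature maps with an elementwise (Hadamard) product plus an identity pass-through, which keeps the channel count fixed instead of doubling it as concatenation does.

```python
import numpy as np

def hadamard_identity_skip(encoder_feat, decoder_feat):
    """Illustrative sketch (not the paper's verified formulation):
    fuse skip features via an elementwise (Hadamard) product, then add
    the decoder features back as an identity term: (E ⊙ D) + D.
    Unlike concatenation, the output has the same channel count as the
    inputs, so the following convolution needs fewer parameters.
    """
    assert encoder_feat.shape == decoder_feat.shape
    return encoder_feat * decoder_feat + decoder_feat

# Toy feature maps in NCHW layout (names and values are hypothetical).
enc = np.full((1, 4, 8, 8), 0.5)  # encoder feature map
dec = np.full((1, 4, 8, 8), 2.0)  # decoder feature map
out = hadamard_identity_skip(enc, dec)
print(out.shape)  # (1, 4, 8, 8) — no channel doubling
```

A concatenation skip at the same point would yield an (1, 8, 8, 8) tensor, so every value here is 0.5 * 2.0 + 2.0 = 3.0 while the shape is preserved, matching the abstract's claim of reduced computational cost.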