Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Seo, J.-W. | - |
dc.contributor.author | Jung, H.-G. | - |
dc.contributor.author | Lee, S.-W. | - |
dc.date.accessioned | 2021-12-02T07:42:27Z | - |
dc.date.available | 2021-12-02T07:42:27Z | - |
dc.date.created | 2021-08-31 | - |
dc.date.issued | 2021-06 | - |
dc.identifier.issn | 0893-6080 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/128833 | - |
dc.description.abstract | Few-shot learning aims to classify unseen classes with a few training examples. While recent works have shown that standard mini-batch training with carefully designed training strategies can improve generalization to unseen classes, well-known problems in deep networks, such as the memorization of training statistics, have been less explored in few-shot learning. To address this issue, we propose self-augmentation, which consolidates self-mix and self-distillation. Specifically, we propose a regional dropout technique called self-mix, in which a patch of an image is substituted with other values from the same image. We show that this dropout effect improves the generalization ability of deep networks, as it prevents the network from memorizing dataset-specific structures. We then employ a backbone network with auxiliary branches, each with its own classifier, to enforce knowledge sharing; this sharing forces each branch to learn diverse optimal points during training. Additionally, we present a local representation learner that further exploits the few training examples of unseen classes by generating fake queries and novel weights. Experimental results show that the proposed method outperforms state-of-the-art methods on prevalent few-shot benchmarks and improves generalization ability. © 2021 Elsevier Ltd | - |
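The abstract describes self-mix as a regional dropout in which a patch of an image is replaced with other values taken from the same image. The following is a minimal illustrative sketch of that idea, not the paper's exact recipe: the patch size (`patch_frac`) and the uniform sampling of source and destination locations are assumptions introduced here for demonstration.

```python
import numpy as np

def self_mix(image: np.ndarray, patch_frac: float = 0.25, rng=None) -> np.ndarray:
    """Replace a random patch of `image` with a patch cut from another
    location in the same image (a regional-dropout-style augmentation).

    `patch_frac` is the patch side length as a fraction of the image side;
    both it and the uniform location sampling are illustrative choices.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ph = max(1, int(h * patch_frac))
    pw = max(1, int(w * patch_frac))

    # Destination patch: the region whose pixels are overwritten.
    dy = rng.integers(0, h - ph + 1)
    dx = rng.integers(0, w - pw + 1)
    # Source patch: pixels copied from elsewhere in the same image.
    sy = rng.integers(0, h - ph + 1)
    sx = rng.integers(0, w - pw + 1)

    out = image.copy()
    out[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
    return out
```

Because every output pixel is copied from the input image itself, the augmented sample stays within the image's own statistics, which is the property the abstract credits with discouraging memorization of dataset-specific structure.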
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Elsevier Ltd | - |
dc.subject | Distillation | - |
dc.subject | Back-bone network | - |
dc.subject | Generalization ability | - |
dc.subject | Knowledge-sharing | - |
dc.subject | Optimal points | - |
dc.subject | State-of-the-art methods | - |
dc.subject | Training example | - |
dc.subject | Training strategy | - |
dc.subject | Deep learning | - |
dc.subject | article | - |
dc.subject | classifier | - |
dc.subject | distillation | - |
dc.subject | human | - |
dc.subject | learning | - |
dc.title | Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, S.-W. | - |
dc.identifier.doi | 10.1016/j.neunet.2021.02.007 | - |
dc.identifier.scopusid | 2-s2.0-85101651382 | - |
dc.identifier.wosid | 000640651000011 | - |
dc.identifier.bibliographicCitation | Neural Networks, v.138, pp.140 - 149 | - |
dc.relation.isPartOf | Neural Networks | - |
dc.citation.title | Neural Networks | - |
dc.citation.volume | 138 | - |
dc.citation.startPage | 140 | - |
dc.citation.endPage | 149 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Neurosciences & Neurology | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Neurosciences | - |
dc.subject.keywordPlus | Distillation | - |
dc.subject.keywordPlus | Back-bone network | - |
dc.subject.keywordPlus | Generalization ability | - |
dc.subject.keywordPlus | Knowledge-sharing | - |
dc.subject.keywordPlus | Optimal points | - |
dc.subject.keywordPlus | State-of-the-art methods | - |
dc.subject.keywordPlus | Training example | - |
dc.subject.keywordPlus | Training strategy | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordPlus | article | - |
dc.subject.keywordPlus | classifier | - |
dc.subject.keywordPlus | distillation | - |
dc.subject.keywordPlus | human | - |
dc.subject.keywordPlus | learning | - |
dc.subject.keywordAuthor | Classification | - |
dc.subject.keywordAuthor | Few-shot learning | - |
dc.subject.keywordAuthor | Generalization | - |
dc.subject.keywordAuthor | Knowledge distillation | - |