Weakly Supervised Learning for Object Localization Based on an Attention Mechanism
- Authors
- Park, Nojin; Ko, Hanseok
- Issue Date
- Nov-2021
- Publisher
- MDPI
- Keywords
- attention mechanism; joint training; weakly supervised object localization
- Citation
- APPLIED SCIENCES-BASEL, v.11, no.22
- Indexed
- SCIE; SCOPUS
- Journal Title
- APPLIED SCIENCES-BASEL
- Volume
- 11
- Number
- 22
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/135967
- DOI
- 10.3390/app112210953
- ISSN
- 2076-3417
- Abstract
- Recently, deep learning has been successfully applied to object detection and localization tasks in images. When setting up deep learning frameworks for supervised training on large datasets, strong labeling of the objects facilitates good performance; however, the complexity of the image scenes and the large size of the datasets make this a laborious task. Hence, it is of paramount importance to reduce the expensive work associated with strong labeling, such as bounding box annotation. In this paper, we propose a method that performs object localization without bounding box annotation during training by employing a two-path activation-map-based classifier framework. In particular, we develop an activation-map-based framework that judiciously controls the attention map in the perception branch by adding two feature extractors, so that better attention weights are distributed to induce improved performance. The experimental results show that the proposed method surpasses existing deep learning models for weakly supervised object localization, achieving 75.21% Top-1 classification accuracy and 55.15% Top-1 localization accuracy on the CUB-200-2011 dataset.
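The abstract does not specify the implementation details of the two-path framework, but the activation-map-based localization it builds on can be illustrated with a minimal class-activation-map (CAM) sketch: conv feature maps are weighted by the classifier weights of the predicted class, and the thresholded map yields a bounding box without any box supervision. All function and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Weight the conv feature maps by the classifier weights of one class.

    features:   (C, H, W) feature maps from the last conv layer
    fc_weights: (num_classes, C) weights of the final linear classifier
    Returns a (H, W) activation map normalized to [0, 1].
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def cam_to_bbox(cam, threshold=0.2):
    """Threshold the normalized map and return the tight box (x0, y0, x1, y1)."""
    ys, xs = np.nonzero(cam >= threshold)
    if ys.size == 0:
        return None  # nothing activated above threshold
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy example: one feature channel fires on a 3x3 region of an 8x8 grid.
features = np.zeros((3, 8, 8))
features[0, 2:5, 3:6] = 1.0
fc_weights = np.array([[1.0, 0.0, 0.0]])  # single-class classifier
cam = class_activation_map(features, fc_weights, class_idx=0)
bbox = cam_to_bbox(cam)  # box around the activated region, in map coordinates
```

The paper's contribution, per the abstract, is to go beyond this baseline by jointly training a second path that controls the attention map rather than computing it only at inference time.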
- Appears in Collections
- College of Engineering > School of Electrical Engineering > 1. Journal Articles