Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Sung Hee | - |
dc.contributor.author | Pae, Dong Sung | - |
dc.contributor.author | Kang, Tae-Koo | - |
dc.contributor.author | Kim, Dong W. | - |
dc.contributor.author | Lim, Myo Taeg | - |
dc.date.accessioned | 2021-09-02T04:55:48Z | - |
dc.date.available | 2021-09-02T04:55:48Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2018-11 | - |
dc.identifier.issn | 1975-0102 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/72394 | - |
dc.description.abstract | We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their large network volume incurs high computational cost and long processing times, making real-time applications such as online training difficult. SFCNN reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed; cluster max extraction is selected as the main function based on experimental results. To evaluate SFCNN, we conduct experiments against two conventional CNNs. On multi-class classification with the Caltech101 dataset, the network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy. On vehicle classification with the GTI Vehicle Image Database, it trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy. (An illustrative sketch of cluster max extraction follows this record.) | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | SPRINGER SINGAPORE PTE LTD | - |
dc.subject | RECOGNITION | - |
dc.title | Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Myo Taeg | - |
dc.identifier.doi | 10.5370/JEET.2018.13.6.2468 | - |
dc.identifier.scopusid | 2-s2.0-85055627228 | - |
dc.identifier.wosid | 000447673000037 | - |
dc.identifier.bibliographicCitation | JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, v.13, no.6, pp.2468 - 2478 | - |
dc.relation.isPartOf | JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY | - |
dc.citation.title | JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY | - |
dc.citation.volume | 13 | - |
dc.citation.number | 6 | - |
dc.citation.startPage | 2468 | - |
dc.citation.endPage | 2478 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.identifier.kciid | ART002402287 | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.description.journalRegisteredClass | kci | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | RECOGNITION | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Online-training control | - |
dc.subject.keywordAuthor | Object recognition | - |
dc.subject.keywordAuthor | Classification | - |
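The abstract describes building a region-based sparse feature map by keeping only a cluster-wise maximum of CNN activations. Below is a minimal sketch of that idea, assuming "cluster max extraction" means grouping each channel's above-threshold activations into spatial clusters and retaining one maximum per cluster; the function name `cluster_max_extraction`, the k-means grouping, the `threshold` parameter, and the cluster count are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: an assumed reading of "cluster max extraction",
# not the code from the SFCNN paper.
import numpy as np
from sklearn.cluster import KMeans

def cluster_max_extraction(feature_map, n_clusters=4, threshold=0.0):
    """feature_map: (channels, height, width) array of CNN activations.
    Returns a (channels, n_clusters) array holding one maximum per cluster."""
    channels = feature_map.shape[0]
    sparse_features = np.zeros((channels, n_clusters))
    for c in range(channels):
        activations = feature_map[c]
        # Keep only the coordinates of activations above the threshold.
        rows, cols = np.nonzero(activations > threshold)
        if len(rows) < n_clusters:
            # Too few active positions to form n_clusters groups;
            # copy the surviving activations directly.
            sparse_features[c, :len(rows)] = activations[rows, cols]
            continue
        # Group active positions into spatial clusters (assumed k-means here).
        coords = np.stack([rows, cols], axis=1)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
        # Retain the maximum activation within each cluster.
        for k in range(n_clusters):
            members = labels == k
            if members.any():
                sparse_features[c, k] = activations[rows[members], cols[members]].max()
    return sparse_features

# Usage: 64-channel, 14x14 feature map -> (64, 4) sparse feature matrix.
fmap = np.random.rand(64, 14, 14).astype(np.float32)
print(cluster_max_extraction(fmap, n_clusters=4).shape)
```

In this sketch the output size is fixed at channels x n_clusters regardless of spatial resolution, which is the kind of compact, region-based representation the abstract attributes to SFCNN.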