Detailed Information


O-Net: Dangerous Goods Detection in Aviation Security Based on U-Net

Authors
Kim, Woong; Jun, Sungchan; Kang, Sumin; Lee, Chulung
Issue Date
2020
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
X-ray imaging; Feature extraction; Image segmentation; Search problems; Convolution; Image recognition; Explosives; Artificial intelligence security system; aviation security; detection algorithm; image segmentation; U-Net; X-ray detection
Citation
IEEE ACCESS, v.8, pp.206289 - 206302
Indexed
SCIE
SCOPUS
Journal Title
IEEE ACCESS
Volume
8
Start Page
206289
End Page
206302
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/59026
DOI
10.1109/ACCESS.2020.3037719
ISSN
2169-3536
Abstract
Aviation security X-ray equipment currently screens objects through primary screening, in which the screener must re-search a bag or person to detect a target object among overlapping objects. Advances in computer vision and deep learning can be applied to improve the accuracy of identifying the most dangerous goods, guns and knives, in X-ray images of baggage. Artificial intelligence-based aviation security X-rays can facilitate the high-speed detection of target objects while reducing both the overall security search duration and the load on the screener. The overlap problem was mitigated by using raw RGB images from the X-ray equipment while simultaneously converting them into grayscale as an additional input. An O-Net structure was designed, as an improvement based on U-Net, through experiments with various learning rates and dense/depth-wise configurations. Two encoders were used to incorporate the different types of input images, and two decoders to maximize the output performance of the neural network. In addition, we proposed U-Net segmentation, through the concept of a "confidence score", to detect target objects more clearly than bounding-box (Bbox) detectors such as You Only Look Once (YOLO). A comparative analysis against basic segmentation models, namely Fully Convolutional Networks (FCN), U-Net, and Segmentation Networks (SegNet), on the major segmentation performance indicators, pixel accuracy and mean intersection over union (m-IoU), revealed that O-Net improved the average pixel accuracy by 5.8%, 2.26%, and 5.01%, and the m-IoU by 43.1%, 9.84%, and 23.31%, respectively. Moreover, the accuracy of O-Net was 6.56% higher than that of U-Net, indicating the superiority of the O-Net architecture.
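As a rough illustration (not the paper's code), the two evaluation metrics named in the abstract, pixel accuracy and mean intersection over union (m-IoU), can be sketched for integer label maps as follows; the function and variable names are our own:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == target))

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union, averaged over classes with a non-empty union."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps: 0 = background, 1 = dangerous object.
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(pixel_accuracy(pred, target))              # 0.75
print(mean_iou(pred, target, num_classes=2))     # 7/12 ≈ 0.583
```

Pixel accuracy rewards any correct pixel equally, while m-IoU penalizes per-class overlap errors, which is why the abstract's m-IoU gains are much larger than its pixel-accuracy gains.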
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > School of Industrial and Management Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Chul Ung
College of Engineering (School of Industrial and Management Engineering)