A novel multiple instance learning framework for COVID-19 severity assessment via data augmentation and self-supervised learning
- Authors
- Li, Zekun; Zhao, Wei; Shi, Feng; Qi, Lei; Xie, Xingzhi; Wei, Ying; Ding, Zhongxiang; Gao, Yang; Wu, Shangjie; Liu, Jun; Shi, Yinghuan; Shen, Dinggang
- Issue Date
- April 2021
- Publisher
- ELSEVIER
- Keywords
- COVID-19; Chest CT; Data augmentation; Multiple instance learning; Self-supervised learning
- Citation
- MEDICAL IMAGE ANALYSIS, v.69
- Indexed
- SCIE; SCOPUS
- Journal Title
- MEDICAL IMAGE ANALYSIS
- Volume
- 69
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/137684
- DOI
- 10.1016/j.media.2021.101978
- ISSN
- 1361-8415
- Abstract
- Fast and accurate assessment of COVID-19 severity is an essential problem while millions of people around the world are suffering from the pandemic. Currently, chest CT is regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method: 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method achieved an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works. (c) 2021 Elsevier B.V. All rights reserved.
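For intuition on the first component described in the abstract, below is a minimal, illustrative PyTorch sketch of attention-based multiple instance learning, in which a bag of CT-slice features is pooled with learned instance weights and classified at the bag level. The class name `AttentionMIL`, the feature dimension, and the layer sizes are assumptions made for illustration; they are not the authors' exact architecture.

```python
# Illustrative sketch only: attention-based MIL pooling over CT-slice features.
# Feature extractor, dimensions, and layer sizes are assumptions, not the
# paper's exact design.
import torch
import torch.nn as nn


class AttentionMIL(nn.Module):
    """Classify a bag of instance features and weigh each instance."""

    def __init__(self, feat_dim: int = 512, attn_dim: int = 128, n_classes: int = 2):
        super().__init__()
        # Attention network: produces one score per instance embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Bag-level classifier applied to the attention-pooled embedding.
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances: torch.Tensor):
        # instances: (num_instances, feat_dim) features of one bag (one CT scan).
        scores = self.attention(instances)            # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)        # normalize over instances
        bag_embedding = (weights * instances).sum(0)  # weighted-average pooling
        logits = self.classifier(bag_embedding)       # bag-level prediction
        return logits, weights.squeeze(-1)            # instance weights for inspection


if __name__ == "__main__":
    model = AttentionMIL()
    bag = torch.randn(40, 512)               # e.g., 40 slice-level feature vectors
    logits, inst_weights = model(bag)
    print(logits.shape, inst_weights.shape)  # torch.Size([2]) torch.Size([40])
```

The returned instance weights are what make it possible to weigh individual slices while training only with bag-level (patient-level) severity labels, and high-confidence instances identified this way could then be recombined into virtual bags, as the abstract's second component describes.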
- Appears in Collections
- Graduate School > Department of Artificial Intelligence > 1. Journal Articles