An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Wanqi | - |
dc.contributor.author | Shi, Yinghuan | - |
dc.contributor.author | Park, Sang Hyun | - |
dc.contributor.author | Yang, Ming | - |
dc.contributor.author | Gao, Yang | - |
dc.contributor.author | Shen, Dinggang | - |
dc.date.accessioned | 2021-08-30T18:47:24Z | - |
dc.date.available | 2021-08-30T18:47:24Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2020-08 | - |
dc.identifier.issn | 2168-2194 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/54292 | - |
dc.description.abstract | Segmentation of the prostate in medical imaging data (e.g., CT, MRI, TRUS) is often considered a critical yet challenging task for radiotherapy treatment. It is relatively easier to segment the prostate from MR images than from CT images, owing to the better soft-tissue contrast of MR images. For segmenting the prostate from CT images, most previous methods used CT alone, and thus their performance is often limited by the low tissue contrast of CT images. In this article, we explore the possibility of using indirect guidance from MR images to improve prostate segmentation in CT images. In particular, we propose a novel deep transfer learning approach, i.e., MR-guided CT network training (namely MICS-NET), which employs MR images to help learn better features from CT images for prostate segmentation. In MICS-NET, the guidance from MRI consists of two steps: (1) learning informative and transferable features from MRI and then transferring them to CT images in a cascade manner, and (2) adaptively transferring the prostate likelihood of the MRI model (i.e., a convnet well trained purely on MR images) under a view consistency constraint. To illustrate the effectiveness of our approach, we evaluate MICS-NET on a real CT prostate image set, with manual delineations available as the ground truth. Our method generates promising segmentation results, achieving (1) a Dice ratio six percentage points higher than that of the CT model trained purely on CT images and (2) performance comparable to the MRI model trained purely on MR images. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | CONVOLUTIONAL NEURAL-NETWORKS | - |
dc.subject | FEATURE REPRESENTATION | - |
dc.subject | LEARNING ALGORITHM | - |
dc.subject | SEGMENTATION | - |
dc.subject | REGISTRATION | - |
dc.subject | CLASSIFICATION | - |
dc.subject | EVOLUTION | - |
dc.subject | BIOPSY | - |
dc.title | An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Shen, Dinggang | - |
dc.identifier.doi | 10.1109/JBHI.2019.2960153 | - |
dc.identifier.scopusid | 2-s2.0-85089202587 | - |
dc.identifier.wosid | 000557358500015 | - |
dc.identifier.bibliographicCitation | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, v.24, no.8, pp.2278 - 2291 | - |
dc.relation.isPartOf | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS | - |
dc.citation.title | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS | - |
dc.citation.volume | 24 | - |
dc.citation.number | 8 | - |
dc.citation.startPage | 2278 | - |
dc.citation.endPage | 2291 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Mathematical & Computational Biology | - |
dc.relation.journalResearchArea | Medical Informatics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Mathematical & Computational Biology | - |
dc.relation.journalWebOfScienceCategory | Medical Informatics | - |
dc.subject.keywordPlus | CONVOLUTIONAL NEURAL-NETWORKS | - |
dc.subject.keywordPlus | FEATURE REPRESENTATION | - |
dc.subject.keywordPlus | LEARNING ALGORITHM | - |
dc.subject.keywordPlus | SEGMENTATION | - |
dc.subject.keywordPlus | REGISTRATION | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | EVOLUTION | - |
dc.subject.keywordPlus | BIOPSY | - |
dc.subject.keywordAuthor | Computed tomography | - |
dc.subject.keywordAuthor | Image segmentation | - |
dc.subject.keywordAuthor | Magnetic resonance imaging | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Biomedical imaging | - |
dc.subject.keywordAuthor | Informatics | - |
dc.subject.keywordAuthor | Planning | - |
dc.subject.keywordAuthor | Prostate segmentation | - |
dc.subject.keywordAuthor | deep transfer learning | - |
dc.subject.keywordAuthor | fully convolutional network | - |
dc.subject.keywordAuthor | cascade learning | - |
dc.subject.keywordAuthor | view consistency constraint | - |
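The abstract above reports segmentation quality as a Dice ratio and mentions a view consistency constraint on the transferred prostate likelihood. As a point of reference only, the sketch below (Python/NumPy) shows the standard Dice ratio computation and one plausible, generic form of a consistency penalty between likelihood maps predicted from two views; the function names and the specific consistency formulation are illustrative assumptions, not the authors' MICS-NET implementation.

```python
import numpy as np


def dice_ratio(pred_mask, gt_mask, eps=1e-7):
    """Dice ratio between a predicted binary mask and a manual delineation.

    Dice = 2 * |P ∩ G| / (|P| + |G|), ranging from 0 (no overlap) to 1 (perfect overlap).
    """
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)


def view_consistency_penalty(prob_view_a, prob_view_b):
    """Mean squared disagreement between prostate-likelihood maps of the same
    subject predicted from two different views.

    This is only one generic way to express a "view consistency constraint";
    the paper's exact formulation may differ.
    """
    a = np.asarray(prob_view_a, dtype=float)
    b = np.asarray(prob_view_b, dtype=float)
    return float(np.mean((a - b) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy 3-D masks: corrupt part of the prediction to get an imperfect Dice.
    gt = rng.random((64, 64, 32)) > 0.5
    pred = gt.copy()
    pred[:8] = ~pred[:8]
    print(f"Dice ratio: {dice_ratio(pred, gt):.3f}")

    # Toy likelihood maps from two views that mostly agree.
    prob_a = rng.random((64, 64, 32))
    prob_b = np.clip(prob_a + 0.05 * rng.standard_normal((64, 64, 32)), 0.0, 1.0)
    print(f"View consistency penalty: {view_consistency_penalty(prob_a, prob_b):.4f}")
```

A Dice gap such as 0.80 versus 0.86 on the same test set is the kind of "six percentage points higher" improvement the abstract reports; the numbers here are illustrative, not results from the paper.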