3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Yan | - |
dc.contributor.author | Zhou, Luping | - |
dc.contributor.author | Yu, Biting | - |
dc.contributor.author | Wang, Lei | - |
dc.contributor.author | Zu, Chen | - |
dc.contributor.author | Lalush, David S. | - |
dc.contributor.author | Lin, Weili | - |
dc.contributor.author | Wu, Xi | - |
dc.contributor.author | Zhou, Jiliu | - |
dc.contributor.author | Shen, Dinggang | - |
dc.date.accessioned | 2021-09-01T14:37:59Z | - |
dc.date.available | 2021-09-01T14:37:59Z | - |
dc.date.created | 2021-06-19 | - |
dc.date.issued | 2019-06 | - |
dc.identifier.issn | 0278-0062 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/65278 | - |
dc.description.abstract | Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality PET image from a low-dose one, thereby reducing the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one, together with the accompanying MRI images that provide anatomical information. Our work makes four contributions. First, unlike traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary across image locations, so a single unified kernel for the whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we utilize a 1 x 1 x 1 kernel to learn this locality adaptive fusion, so that the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of the synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | BRAIN | - |
dc.subject | IMAGE | - |
dc.subject | RECONSTRUCTION | - |
dc.subject | SEGMENTATION | - |
dc.subject | MRI | - |
dc.title | 3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Shen, Dinggang | - |
dc.identifier.doi | 10.1109/TMI.2018.2884053 | - |
dc.identifier.scopusid | 2-s2.0-85057882665 | - |
dc.identifier.wosid | 000470829000002 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON MEDICAL IMAGING, v.38, no.6, pp.1328 - 1339 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON MEDICAL IMAGING | - |
dc.citation.title | IEEE TRANSACTIONS ON MEDICAL IMAGING | - |
dc.citation.volume | 38 | - |
dc.citation.number | 6 | - |
dc.citation.startPage | 1328 | - |
dc.citation.endPage | 1339 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalResearchArea | Radiology, Nuclear Medicine & Medical Imaging | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Engineering, Biomedical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Radiology, Nuclear Medicine & Medical Imaging | - |
dc.subject.keywordPlus | BRAIN | - |
dc.subject.keywordPlus | IMAGE | - |
dc.subject.keywordPlus | RECONSTRUCTION | - |
dc.subject.keywordPlus | SEGMENTATION | - |
dc.subject.keywordPlus | MRI | - |
dc.subject.keywordAuthor | Image synthesis | - |
dc.subject.keywordAuthor | positron emission tomography (PET) | - |
dc.subject.keywordAuthor | generative adversarial networks (GANs) | - |
dc.subject.keywordAuthor | locality adaptive fusion | - |
dc.subject.keywordAuthor | multi-modality | - |
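The locality adaptive fusion described in the abstract — per-voxel fusion weights for the input modalities, learned with 1 x 1 x 1 kernels — can be illustrated with a minimal sketch. A 1 x 1 x 1 convolution is simply a per-voxel linear map across the modality channels, so it can produce a different weight map at every location while adding only a handful of parameters. The function name, the softmax normalization of the weight maps, and the array shapes below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def locality_adaptive_fusion(modalities, weight_kernels):
    """Fuse M co-registered 3D volumes with per-voxel (locality adaptive) weights.

    modalities:     list of M arrays, each of shape (D, H, W), e.g. low-dose
                    PET and an MRI channel.
    weight_kernels: array of shape (M, M); each row is one 1x1x1 kernel acting
                    across the M input channels (exactly what a 1x1x1 conv does).
    Returns the fused volume of shape (D, H, W).
    """
    stack = np.stack(modalities, axis=-1)        # (D, H, W, M)
    # 1x1x1 convolution == per-voxel linear map across the channel axis
    logits = stack @ weight_kernels.T            # (D, H, W, M) weight logits
    # Softmax (an assumption here) so the M weight maps sum to 1 at each voxel,
    # making the output a convex, location-dependent combination of the inputs.
    logits -= logits.max(axis=-1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return (w * stack).sum(axis=-1)              # (D, H, W)

# Toy usage: fuse a synthetic "low-dose PET" and "MRI" patch.
pet = np.random.rand(8, 8, 8)
mri = np.random.rand(8, 8, 8)
fused = locality_adaptive_fusion([pet, mri], np.eye(2))
```

In the paper's full model these kernels are trained jointly with the 3D conditional GAN generator, rather than fixed as in this toy example; the point of the 1 x 1 x 1 choice is that location-varying fusion costs only M x M extra parameters.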