Detailed Information


3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis

Authors
Wang, Yan; Zhou, Luping; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S.; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang
Issue Date
Jun-2019
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Image synthesis; positron emission tomography (PET); generative adversarial networks (GANs); locality adaptive fusion; multi-modality
Citation
IEEE TRANSACTIONS ON MEDICAL IMAGING, v.38, no.6, pp.1328 - 1339
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume
38
Number
6
Start Page
1328
End Page
1339
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/65278
DOI
10.1109/TMI.2018.2884053
ISSN
0278-0062
Abstract
Positron emission tomography (PET) has seen substantial use in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality PET image from a low-dose one and thereby reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one together with the accompanying MRI images that provide anatomical information. Our work makes four contributions. First, unlike traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary across image locations, so a unified kernel for the whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we utilize a 1 x 1 x 1 kernel to learn this locality adaptive fusion, so the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of the synthesized images. Experimental results show that our method outperforms both the traditional multi-modality fusion methods used in deep networks and the state-of-the-art PET estimation approaches.
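To make the locality adaptive fusion idea from the abstract concrete, the following is a minimal sketch, assuming PyTorch. The class name, the softmax normalization of the per-voxel weights, and the tensor shapes are illustrative assumptions for exposition, not the authors' released implementation; only the use of a 1 x 1 x 1 convolution to learn location-varying fusion weights across modalities is taken from the abstract.

```python
# Illustrative sketch of locality adaptive multi-modality fusion (assumes PyTorch).
import torch
import torch.nn as nn


class LocalityAdaptiveFusion(nn.Module):
    """Fuse M co-registered 3D modalities with per-voxel weights.

    A 1x1x1 convolution maps the stacked modalities to one fusion
    weight per modality at every voxel; a softmax (an assumption here)
    normalizes the weights so each location gets its own combination.
    """

    def __init__(self, num_modalities: int):
        super().__init__()
        # A 1x1x1 kernel keeps the number of extra parameters minimal,
        # as emphasized in the abstract.
        self.weight_net = nn.Conv3d(num_modalities, num_modalities, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_modalities, D, H, W), one channel per modality
        w = torch.softmax(self.weight_net(x), dim=1)  # per-voxel modality weights
        fused = (w * x).sum(dim=1, keepdim=True)      # (batch, 1, D, H, W)
        return fused


# Hypothetical usage: fuse a low-dose PET patch with two MRI contrasts.
if __name__ == "__main__":
    patches = torch.randn(2, 3, 32, 32, 32)  # batch of 3-modality 3D patches
    fusion = LocalityAdaptiveFusion(num_modalities=3)
    print(fusion(patches).shape)  # torch.Size([2, 1, 32, 32, 32])
```

In the full method described by the abstract, such a fused volume would feed the generator of a 3D conditional GAN, with the fusion weights learned jointly with the synthesis objective.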
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
