Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xiang, Lei | - |
dc.contributor.author | Chen, Yong | - |
dc.contributor.author | Chang, Weitang | - |
dc.contributor.author | Zhan, Yiqiang | - |
dc.contributor.author | Lin, Weili | - |
dc.contributor.author | Wang, Qian | - |
dc.contributor.author | Shen, Dinggang | - |
dc.date.accessioned | 2021-09-01T12:51:48Z | - |
dc.date.available | 2021-09-01T12:51:48Z | - |
dc.date.created | 2021-06-19 | - |
dc.date.issued | 2019-07 | - |
dc.identifier.issn | 0018-9294 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/64244 | - |
dc.description.abstract | T1-weighted image (T1WI) and T2-weighted image (T2WI) are the two routinely acquired magnetic resonance (MR) modalities that can provide complementary information for clinical and research usage. However, the relatively long acquisition time makes the acquired image vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most of the existing algorithms only rely on mono-modality acquisition for the image reconstruction. In this paper, we propose to combine complementary MR acquisitions (i.e., T1WI and under-sampled T2WI particularly) to reconstruct the high-quality image (i.e., corresponding to the fully sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a certain target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results have shown that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-to-noise-ratio loss. Experiments also demonstrate excellent transferring capability of Dense-Unet when applied to the datasets acquired by different MR scanners. The above-mentioned results imply great potential of our method in many clinical scenarios. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | IMAGE | - |
dc.subject | ACQUISITION | - |
dc.subject | NETWORKS | - |
dc.title | Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Shen, Dinggang | - |
dc.identifier.doi | 10.1109/TBME.2018.2883958 | - |
dc.identifier.wosid | 000473175400027 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, v.66, no.7, pp.2105 - 2114 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING | - |
dc.citation.title | IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING | - |
dc.citation.volume | 66 | - |
dc.citation.number | 7 | - |
dc.citation.startPage | 2105 | - |
dc.citation.endPage | 2114 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Biomedical | - |
dc.subject.keywordPlus | IMAGE | - |
dc.subject.keywordPlus | ACQUISITION | - |
dc.subject.keywordPlus | NETWORKS | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | dense block | - |
dc.subject.keywordAuthor | fast MR reconstruction | - |
dc.subject.keywordAuthor | multi-modal fusion | - |
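
The abstract describes an early-fusion scheme: a fully sampled T1WI is paired with an under-sampled T2WI acquisition, and a Dense-Unet maps the pair to the fully sampled T2WI. The sketch below is only a minimal illustration of that idea under assumed details (PyTorch, a toy 8x Cartesian line mask, a single dense block, and hypothetical names such as `TinyFusionNet`); it is not the authors' Dense-Unet.

```python
# Illustrative sketch (not the authors' code): fuse a fully sampled T1WI with a
# zero-filled reconstruction of 8x under-sampled T2WI k-space, then feed the
# two-channel input to a small dense-block CNN. All names and sizes are assumptions.
import torch
import torch.nn as nn

def zero_filled_recon(t2_kspace: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Apply the under-sampling mask in k-space and invert with a 2-D IFFT."""
    masked = t2_kspace * mask                      # retain roughly 1/8 of the k-space lines
    img = torch.fft.ifft2(torch.fft.ifftshift(masked, dim=(-2, -1)))
    return img.abs()                               # magnitude image, now with aliasing

class DenseBlock(nn.Module):
    """Minimal dense block: each conv sees all previous feature maps (concatenated)."""
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class TinyFusionNet(nn.Module):
    """Two-channel input (T1WI + zero-filled T2WI) -> reconstructed T2WI slice."""
    def __init__(self):
        super().__init__()
        self.dense = DenseBlock(in_ch=2)
        self.out = nn.Conv2d(2 + 3 * 16, 1, kernel_size=1)

    def forward(self, t1, t2_zero_filled):
        x = torch.cat([t1, t2_zero_filled], dim=1)  # early fusion of the two modalities
        return self.out(self.dense(x))

if __name__ == "__main__":
    t1 = torch.rand(1, 1, 256, 256)                      # fully sampled T1WI slice
    full_k = torch.fft.fftshift(torch.fft.fft2(torch.rand(256, 256)), dim=(-2, -1))
    mask = (torch.rand(256, 1) < 0.125).float()          # toy ~8x Cartesian line mask
    t2_zf = zero_filled_recon(full_k, mask)[None, None]  # add batch/channel dims
    recon = TinyFusionNet()(t1, t2_zf)
    print(recon.shape)                                   # torch.Size([1, 1, 256, 256])
```

The two-channel concatenation is just one plausible way to realize the fusion described in the abstract; the published Dense-Unet uses densely connected blocks within a U-Net-style encoder-decoder, which this toy network does not reproduce.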