Segmenting hippocampal subfields from 3T MRI with multi-modality images
- Authors
- Wu, Zhengwang; Gao, Yaozong; Shi, Feng; Ma, Guangkai; Jewells, Valerie; Shen, Dinggang
- Issue Date
- January 2018
- Publisher
- ELSEVIER SCIENCE BV
- Keywords
- Hippocampal subfields segmentation; Multi-modality features; Structured random forest; Auto-context model
- Citation
- MEDICAL IMAGE ANALYSIS, v.43, pp.10 - 22
- Indexed
- SCIE; SCOPUS
- Journal Title
- MEDICAL IMAGE ANALYSIS
- Volume
- 43
- Start Page
- 10
- End Page
- 22
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/78484
- DOI
- 10.1016/j.media.2017.09.006
- ISSN
- 1361-8415
- Abstract
- Hippocampal subfields play important roles in many brain activities. However, due to their small structural size, low signal contrast, and the insufficient image resolution of 3T MR, automatic hippocampal subfields segmentation remains underexplored. In this paper, we propose an automatic learning-based hippocampal subfields segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting-state fMRI (rs-fMRI). Appearance features and relationship features are extracted to capture the appearance patterns in the structural MR images and the connectivity patterns in rs-fMRI, respectively. In the training stage, these extracted features are used to train a structured random forest classifier, which is further iteratively refined in an auto-context model by adopting the context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation of each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To the best of our knowledge, this is the first work that addresses the challenging automatic segmentation of hippocampal subfields using relationship features from rs-fMRI, which are designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets, and the segmentation results are quantitatively compared with manual labels using a leave-one-out strategy, which shows the effectiveness of our method. From the experiments, we find that a) multi-modality features can significantly improve subfields segmentation performance compared to using a single modality; and b) automatic segmentation results using 3T multi-modality MR images can be partially comparable to those using 7T T1 MRI. (C) 2017 Elsevier B.V. All rights reserved.
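The auto-context refinement loop described in the abstract — train a classifier, feed its probability maps back in as context features, and repeat — can be sketched as below. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's plain `RandomForestClassifier` as a stand-in for the paper's structured random forest, and the function names, feature shapes, and iteration count are assumptions for the sketch.

```python
# Sketch of auto-context training/prediction with a random forest.
# Assumption: voxels are rows of a 2-D feature matrix; the paper's
# structured random forest and rs-fMRI relationship features are
# replaced here by a generic classifier on generic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context(features, labels, n_iters=3):
    """Train a cascade of classifiers; each stage appends the previous
    stage's class-probability maps as context features."""
    classifiers = []
    context = np.zeros((features.shape[0], 0))  # stage 0: no context
    for _ in range(n_iters):
        X = np.hstack([features, context])
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X, labels)
        classifiers.append(clf)
        context = clf.predict_proba(X)  # probabilities become context
    return classifiers

def predict_auto_context(classifiers, features):
    """Apply the cascade in order, threading context through stages."""
    context = np.zeros((features.shape[0], 0))
    for clf in classifiers:
        X = np.hstack([features, context])
        context = clf.predict_proba(X)
    return np.argmax(context, axis=1)  # final per-voxel label
```

In the paper's setting, the context features would additionally drive the update of the rs-fMRI relationship features at each iteration; this sketch only threads the probability maps through the cascade.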
- Appears in Collections
- Graduate School > Department of Artificial Intelligence > 1. Journal Articles