Detailed Information


Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization

Authors
Zhang, Jun; Liu, Mingxia; Wang, Li; Chen, Si; Yuan, Peng; Li, Jianfu; Shen, Steve Guo-Fang; Tang, Zhen; Chen, Ken-Chung; Xia, James J.; Shen, Dinggang
Issue Date
February 2020
Publisher
ELSEVIER
Keywords
Cone-beam computed tomography; Landmark digitization; Bone segmentation; Fully convolutional networks
Citation
MEDICAL IMAGE ANALYSIS, v.60
Indexed
SCIE
SCOPUS
Journal Title
MEDICAL IMAGE ANALYSIS
Volume
60
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/57750
DOI
10.1016/j.media.2019.101621
ISSN
1361-8415
Abstract
Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface and mandible) and digitize anatomical landmarks. This process often involves two tasks, i.e., bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most of the existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to the state-of-the-art approaches in both tasks of bone segmentation and landmark digitization. (C) 2019 Elsevier B.V. All rights reserved.
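
The abstract describes two coupled components: voxel-to-landmark displacement maps that encode spatial context, and a multi-task FCN that predicts bone labels and landmark information jointly. The PyTorch sketch below illustrates only that general idea; the names (displacement_maps, JointSegLandmarkFCN, joint_loss), the two-layer trunk, the channel widths, the number of landmarks, and the loss weighting lam are illustrative assumptions, not the paper's actual JSD architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only; layer sizes, names, and loss weighting are
# assumptions, not the JSD framework described in the paper.

def displacement_maps(volume_shape, landmarks):
    # volume_shape: (D, H, W) of the CBCT volume; landmarks: (L, 3) voxel
    # coordinates (z, y, x). Returns a (L * 3, D, H, W) tensor whose channels
    # [3l:3l+3] hold the (z, y, x) offset from every voxel to landmark l.
    coords = torch.stack(torch.meshgrid(
        *[torch.arange(s, dtype=torch.float32) for s in volume_shape],
        indexing="ij"))                                   # (3, D, H, W)
    lm = torch.as_tensor(landmarks, dtype=torch.float32)  # (L, 3)
    disp = lm[:, :, None, None, None] - coords[None]      # (L, 3, D, H, W)
    return disp.reshape(-1, *volume_shape)

class JointSegLandmarkFCN(nn.Module):
    # Shared 3D convolutional trunk with two heads: per-voxel bone-class
    # logits and per-voxel displacement regression (3 offsets per landmark).
    def __init__(self, in_ch=1, n_classes=3, n_landmarks=15, width=16):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv3d(width, n_classes, 1)
        self.disp_head = nn.Conv3d(width, 3 * n_landmarks, 1)

    def forward(self, x):                                 # x: (N, in_ch, D, H, W)
        feat = self.trunk(x)
        return self.seg_head(feat), self.disp_head(feat)

def joint_loss(seg_logits, disp_pred, seg_target, disp_target, lam=1.0):
    # Multi-task objective: voxel-wise cross-entropy for segmentation plus a
    # smooth-L1 term for the regressed displacement maps.
    return (F.cross_entropy(seg_logits, seg_target)
            + lam * F.smooth_l1_loss(disp_pred, disp_target))

In the paper's framework the learned displacement maps additionally serve as guidance for the multi-task network; in this sketch the two heads merely share a trunk to show the joint formulation.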
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles
