Detailed Information

Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces From 3D Intraoral Scanners

Authors
Lian, Chunfeng; Wang, Li; Wu, Tai-Hsien; Wang, Fan; Yap, Pew-Thian; Ko, Ching-Chang; Shen, Dinggang
Issue Date
Jul-2020
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Teeth; Dentistry; Three-dimensional displays; Labeling; Shape; Feature extraction; Surface morphology; 3D shape segmentation; geometric deep learning; automated tooth labeling; orthodontic treatment planning; 3D intraoral scanners
Citation
IEEE TRANSACTIONS ON MEDICAL IMAGING, v.39, no.7, pp.2440 - 2450
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume
39
Number
7
Start Page
2440
End Page
2450
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/54859
DOI
10.1109/TMI.2020.2971730
ISSN
0278-0062
Abstract
Precisely labeling teeth on digitized 3D dental surface models is a precondition for tooth position rearrangement in orthodontic treatment planning. However, it is a challenging task, primarily due to the abnormal and varying appearance of patients' teeth. The emerging use of intraoral scanners (IOSs) in clinics further increases the difficulty of automated tooth labeling, as the raw surfaces acquired by an IOS are typically of low quality at gingival and deep intraoral regions. In recent years, some pioneering end-to-end methods (e.g., PointNet) have been proposed in the computer vision and graphics communities to directly consume raw surfaces for 3D shape segmentation. Although these methods are potentially applicable to our task, most of them fail to capture the fine-grained local geometric context that is critical to the identification of small teeth with varying shapes and appearances. In this paper, we propose an end-to-end deep-learning method, called MeshSegNet, for automated tooth labeling on raw dental surfaces. Using multiple raw surface attributes as inputs, MeshSegNet integrates a series of graph-constrained learning modules along its forward path to hierarchically extract multi-scale local contextual features. A dense fusion strategy then combines local-to-global geometric features to learn higher-level features for mesh cell annotation. The predictions produced by MeshSegNet are further post-processed by a graph-cut refinement step for the final segmentation. We evaluated MeshSegNet on a real-patient dataset consisting of raw maxillary surfaces acquired by a 3D IOS. Experimental results, obtained with 5-fold cross-validation, demonstrate that MeshSegNet significantly outperforms state-of-the-art deep-learning methods for 3D shape segmentation.
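
Illustrative sketch (not from the paper): the abstract describes multi-scale graph-constrained feature learning over mesh cells followed by dense fusion of local-to-global features. The minimal NumPy sketch below only illustrates that general idea by aggregating per-cell features over neighborhoods at two spatial scales and concatenating them with a global summary; the function names, feature dimension, radii, and max-pooled global descriptor are all assumptions made for illustration, not the authors' implementation.

# Minimal sketch of multi-scale, graph-constrained feature aggregation with
# local-to-global fusion for per-cell labeling. All names, dimensions, and
# radii are hypothetical; this is not MeshSegNet's actual code.
import numpy as np

def radius_adjacency(centroids, radius):
    """Row-normalized adjacency over cells whose centroids lie within `radius`."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    a = (d < radius).astype(float)          # each cell is its own neighbor, so rows are nonzero
    return a / a.sum(axis=1, keepdims=True)

def graph_constrained_block(features, adjacency, weight):
    """Aggregate neighbor features, then apply a shared linear map + ReLU."""
    aggregated = adjacency @ features        # local contextual pooling
    return np.maximum(aggregated @ weight, 0)

# Toy data: N mesh cells, each with a small input attribute vector
rng = np.random.default_rng(0)
n_cells, in_dim, hid_dim = 200, 15, 32
cell_features = rng.normal(size=(n_cells, in_dim))
centroids = rng.uniform(size=(n_cells, 3))
w1 = rng.normal(scale=0.1, size=(in_dim, hid_dim))
w2 = rng.normal(scale=0.1, size=(hid_dim, hid_dim))

# Two nested scales (small and large neighborhoods), applied hierarchically
a_small = radius_adjacency(centroids, radius=0.1)
a_large = radius_adjacency(centroids, radius=0.3)
h1 = graph_constrained_block(cell_features, a_small, w1)
h2 = graph_constrained_block(h1, a_large, w2)

# Dense fusion: concatenate multi-scale local features with a global summary,
# which would then feed a per-cell classifier over the tooth labels
global_feat = np.tile(h2.max(axis=0), (n_cells, 1))
fused = np.concatenate([h1, h2, global_feat], axis=1)
print(fused.shape)  # (200, 96)
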
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
