Detailed Information


Deep Neural Network Regression for Automated Retinal Layer Segmentation in Optical Coherence Tomography Images

Authors
Ngo, Lua; Cha, Jaepyeong; Han, Jae-Ho
Issue Date
2020
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Image segmentation; Retina; Training; Image edge detection; Deep learning; Computational complexity; Neural networks; Artificial intelligence; biomedical optical imaging; image segmentation; neural network; optical coherence tomography
Citation
IEEE TRANSACTIONS ON IMAGE PROCESSING, v.29, pp.303 - 312
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume
29
Start Page
303
End Page
312
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/59010
DOI
10.1109/TIP.2019.2931461
ISSN
1057-7149
Abstract
Segmenting the retinal layers in optical coherence tomography (OCT) images helps quantify layer information for the early diagnosis of retinal diseases, which are the main cause of permanent blindness. Thus, the segmentation process plays a critical role in preventing vision impairment. However, owing to the lack of practical automated techniques, expert ophthalmologists still have to segment the retinal layers manually. In this paper, we propose an automated segmentation method for OCT images based on a feature-learning regression network without human bias. The proposed deep neural network regression takes the intensity, gradient, and adaptive normalized intensity score (ANIS) of an image segment as features for learning, and then predicts the corresponding retinal boundary pixel. Reformulating the segmentation as a regression problem obviates the need for a huge dataset and reduces the complexity significantly, as shown in the analysis of computational complexity given here. In addition, assisted by ANIS, the method operates robustly on OCT images containing intensity variations, low-contrast regions, speckle noise, and blood vessels, yet remains accurate and time-efficient. In an evaluation conducted on 114 images, the processing time was approximately 10.596 s per image for identifying eight boundaries, and the training phase for each boundary line took only 30 s. The Dice similarity coefficient used to assess accuracy was approximately 0.966. The average absolute pixel distance between manual segmentation and the proposed automatic segmentation was 0.612, i.e., less than one pixel.
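The abstract describes the regression formulation only at a high level. The following is a minimal, hypothetical sketch (not the authors' implementation) of how a per-column boundary regression from intensity, gradient, and an ANIS-like normalized-intensity feature might look, written in PyTorch; the network architecture, feature stacking, and the simple min-max normalization standing in for ANIS are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: per-column (A-scan) features are regressed to a single
# boundary row index. Layer widths are assumptions, not the paper's design.
class BoundaryRegressor(nn.Module):
    def __init__(self, column_height: int):
        super().__init__()
        in_features = 3 * column_height  # intensity + gradient + ANIS-like score per row
        self.net = nn.Sequential(
            nn.Linear(in_features, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted boundary row (regression target)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def column_features(column: torch.Tensor) -> torch.Tensor:
    """Stack intensity, vertical gradient, and a min-max normalized intensity
    (a stand-in for ANIS) for one image column."""
    grad = torch.diff(column, prepend=column[:1])
    norm = (column - column.min()) / (column.max() - column.min() + 1e-8)
    return torch.cat([column, grad, norm])

if __name__ == "__main__":
    H = 496  # assumed column height of an OCT B-scan
    model = BoundaryRegressor(H)
    col = torch.rand(H)  # toy intensity column
    pred_row = model(column_features(col).unsqueeze(0))
    print(pred_row.shape)  # torch.Size([1, 1])
```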
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Brain and Cognitive Engineering > 1. Journal Articles



Related Researcher

Han, Jae-Ho
Department of Brain and Cognitive Engineering
