Interactive medical image segmentation via a point-based interaction
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Jian | - |
dc.contributor.author | Shi, Yinghuan | - |
dc.contributor.author | Sun, Jinquan | - |
dc.contributor.author | Wang, Lei | - |
dc.contributor.author | Zhou, Luping | - |
dc.contributor.author | Gao, Yang | - |
dc.contributor.author | Shen, Dinggang | - |
dc.date.accessioned | 2021-08-30T05:03:41Z | - |
dc.date.available | 2021-08-30T05:03:41Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2021-01 | - |
dc.identifier.issn | 0933-3657 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/50614 | - |
dc.description.abstract | Due to low tissue contrast, irregular shape, and large location variance, segmenting objects from different medical imaging modalities (e.g., CT, MR) is considered an important yet challenging task. In this paper, a novel method is presented for interactive medical image segmentation with the following merits. (1) Its design is fundamentally different from previous pure patch-based and image-based segmentation methods. It is observed that during delineation, the physician repeatedly checks the intensity from the area inside the object to the area outside it to determine the boundary, which indicates that comparison in an inside-out manner is extremely important. Thus, the method innovatively models the segmentation task as learning the representation of bi-directional sequential patches, starting from (or ending in) the given central point of the object. This can be realized by the proposed ConvRNN network embedded with a gated memory propagation unit. (2) Unlike previous interactive methods (requiring a bounding box or seed points), the proposed method asks the physician merely to click on the rough central point of the object before segmentation, which can simultaneously enhance performance and reduce segmentation time. (3) The method is utilized in a multi-level framework for better performance. It has been systematically evaluated on three different segmentation tasks, including CT kidney tumor, MR prostate, and the PROMISE12 challenge, showing promising results compared with state-of-the-art methods. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER | - |
dc.subject | PROSTATE | - |
dc.subject | FRAMEWORK | - |
dc.title | Interactive medical image segmentation via a point-based interaction | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Shen, Dinggang | - |
dc.identifier.doi | 10.1016/j.artmed.2020.101998 | - |
dc.identifier.scopusid | 2-s2.0-85097336855 | - |
dc.identifier.wosid | 000612815000005 | - |
dc.identifier.bibliographicCitation | ARTIFICIAL INTELLIGENCE IN MEDICINE, v.111 | - |
dc.relation.isPartOf | ARTIFICIAL INTELLIGENCE IN MEDICINE | - |
dc.citation.title | ARTIFICIAL INTELLIGENCE IN MEDICINE | - |
dc.citation.volume | 111 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Medical Informatics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Biomedical | - |
dc.relation.journalWebOfScienceCategory | Medical Informatics | - |
dc.subject.keywordPlus | PROSTATE | - |
dc.subject.keywordPlus | FRAMEWORK | - |
dc.subject.keywordAuthor | Point-based interaction | - |
dc.subject.keywordAuthor | Sequential patch learning | - |
dc.subject.keywordAuthor | Medical image segmentation | - |
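The abstract describes modeling segmentation as learning bi-directional sequential patches that radiate from a single clicked central point. The sketch below is illustrative only, not the authors' code: it shows one plausible way to extract inside-out (and reversed, outside-in) patch sequences along the four axis-aligned directions from a clicked point. All function names and parameters are hypothetical, and the ConvRNN with gated memory propagation that would consume these sequences is not reproduced here.

```python
import numpy as np

def bidirectional_patch_sequences(image, center, patch_size=3, num_steps=4, step=2):
    """Extract inside-out and outside-in patch sequences from a clicked
    central point, along the four axis-aligned directions.

    Hypothetical sketch: the paper feeds such sequences to a ConvRNN with
    a gated memory propagation unit, which is omitted here.
    """
    half = patch_size // 2
    margin = half + num_steps * step
    # pad with edge values so patches near the border stay full-sized
    padded = np.pad(image, margin, mode='edge')
    cy, cx = center[0] + margin, center[1] + margin
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    sequences = {}
    for dy, dx in directions:
        inside_out = []
        for k in range(num_steps):
            y, x = cy + dy * k * step, cx + dx * k * step
            patch = padded[y - half:y + half + 1, x - half:x + half + 1]
            inside_out.append(patch)
        # the reversed order models the complementary outside-in pass
        sequences[(dy, dx)] = (inside_out, inside_out[::-1])
    return sequences

# toy usage: 16x16 image with a bright square, click its rough center
img = np.zeros((16, 16), dtype=np.float32)
img[5:11, 5:11] = 1.0
seqs = bidirectional_patch_sequences(img, center=(8, 8))
print(len(seqs))                 # 4 directions
print(seqs[(0, 1)][0][0].shape)  # (3, 3) first inside-out patch
```

Each direction yields a pair of sequences (inside-out and its reversal), matching the abstract's notion of sequences "starting from (or ending in)" the clicked central point.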
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.