Learning a saliency map using fixated locations in natural scenes
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Qi | - |
dc.contributor.author | Koch, Christof | - |
dc.date.accessioned | 2021-09-07T21:58:17Z | - |
dc.date.available | 2021-09-07T21:58:17Z | - |
dc.date.issued | 2011 | - |
dc.identifier.issn | 1534-7362 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/115056 | - |
dc.description.abstract | Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least-squares technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard method to judge performance (area under the ROC curve), we use two other metrics to comprehensively assess performance. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications (see the illustrative weight-learning sketch below the record). | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | ASSOC RESEARCH VISION OPHTHALMOLOGY INC | - |
dc.title | Learning a saliency map using fixated locations in natural scenes | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1167/11.3.9 | - |
dc.identifier.scopusid | 2-s2.0-79957836414 | - |
dc.identifier.wosid | 000289076200009 | - |
dc.identifier.bibliographicCitation | JOURNAL OF VISION, v.11, no.3 | - |
dc.citation.title | JOURNAL OF VISION | - |
dc.citation.volume | 11 | - |
dc.citation.number | 3 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Ophthalmology | - |
dc.relation.journalWebOfScienceCategory | Ophthalmology | - |
dc.subject.keywordPlus | EYE-MOVEMENTS | - |
dc.subject.keywordPlus | VISUAL-ATTENTION | - |
dc.subject.keywordPlus | SEARCH | - |
dc.subject.keywordPlus | GUIDANCE | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordPlus | OVERT | - |
dc.subject.keywordPlus | STRATEGIES | - |
dc.subject.keywordPlus | ALLOCATION | - |
dc.subject.keywordPlus | CONTRAST | - |
dc.subject.keywordPlus | SACCADES | - |
dc.subject.keywordAuthor | computational saliency model | - |
dc.subject.keywordAuthor | feature combination | - |
dc.subject.keywordAuthor | center bias | - |
dc.subject.keywordAuthor | inter-subject variability | - |
dc.subject.keywordAuthor | metric | - |
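The abstract above describes summing per-channel feature maps with weights learned by least squares from fixation data, together with a modeled center bias. Below is a minimal, hypothetical sketch of that idea in Python with NumPy. It is not the authors' implementation: the function names (`learn_channel_weights`, `gaussian_center_prior`), the per-channel max normalization, the Gaussian form and `sigma_frac` parameter of the center prior, and the toy map shapes are all illustrative assumptions.

```python
import numpy as np

def gaussian_center_prior(shape, sigma_frac=0.25):
    # Illustrative center-bias map: a Gaussian bump centered on the image.
    # sigma_frac (width as a fraction of each image dimension) is assumed.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-0.5 * (((ys - cy) / (sigma_frac * h)) ** 2
                          + ((xs - cx) / (sigma_frac * w)) ** 2))

def learn_channel_weights(feature_maps, fixation_map):
    # Stack each flattened, max-normalized channel map as one column of A,
    # then solve the ordinary least-squares problem min_w ||A w - y||^2,
    # where y is a map derived from observed fixation locations.
    names = sorted(feature_maps)
    A = np.column_stack([
        feature_maps[n].ravel() / (feature_maps[n].max() + 1e-12)
        for n in names
    ])
    y = fixation_map.ravel()
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return dict(zip(names, w))

# Toy usage: random maps stand in for real color/intensity/orientation/face
# channel outputs, and the "fixation map" is synthesized so the recovered
# weights are easy to eyeball.
rng = np.random.default_rng(0)
shape = (60, 80)
maps = {c: rng.random(shape) for c in ("color", "intensity", "orientation", "face")}
maps["center_bias"] = gaussian_center_prior(shape)  # bias as one more channel
fix = 0.6 * maps["face"] + 0.3 * maps["orientation"] + 0.1 * maps["center_bias"]
print(learn_channel_weights(maps, fix))
```

Treating the center prior as just one more column lets a single least-squares solve weigh the bias jointly against the feature channels; the paper additionally distinguishes time-varying from constant contributions to this bias, which this sketch does not model.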