Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Probabilistic models of vision and max-margin methods

Full metadata record
dc.contributor.author: Yuille, A.
dc.contributor.author: He, X.
dc.date.accessioned: 2021-09-07T04:02:03Z
dc.date.available: 2021-09-07T04:02:03Z
dc.date.created: 2021-06-17
dc.date.issued: 2012
dc.identifier.issn: 1673-3460
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/110602
dc.description.abstract: It is attractive to formulate problems in computer vision and related fields in terms of probabilistic estimation, where the probability models are defined over graphs, such as grammars. The graphical structures, and the state variables defined over them, give a rich knowledge representation which can describe the complex structures of objects and images. The probability distributions defined over the graphs capture the statistical variability of these structures. These probability models can be learnt from training data with limited amounts of supervision. But learning these models suffers from the difficulty of evaluating the normalization constant, or partition function, of the probability distributions, which can be extremely computationally demanding. This paper shows that by placing bounds on the normalization constant we can obtain computationally tractable approximations. Surprisingly, for certain choices of loss functions, we obtain many of the standard max-margin criteria used in support vector machines (SVMs), and hence we reduce the learning to standard machine learning methods. We show that many machine learning methods can be obtained in this way as approximations to probabilistic methods, including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple-instance learning, and latent SVM. We illustrate this work with computer vision applications including image labeling, object detection and localization, and motion estimation. We speculate that better results can be obtained by using better bounds and approximations. © 2012 Higher Education Press and Springer-Verlag Berlin Heidelberg.
dc.language: English
dc.language.iso: en
dc.title: Probabilistic models of vision and max-margin methods
dc.type: Article
dc.contributor.affiliatedAuthor: Yuille, A.
dc.identifier.doi: 10.1007/s11460-012-0170-6
dc.identifier.scopusid: 2-s2.0-84863412765
dc.identifier.bibliographicCitation: Frontiers of Electrical and Electronic Engineering in China, v.7, no.1, pp.94-106
dc.relation.isPartOf: Frontiers of Electrical and Electronic Engineering in China
dc.citation.title: Frontiers of Electrical and Electronic Engineering in China
dc.citation.volume: 7
dc.citation.number: 1
dc.citation.startPage: 94
dc.citation.endPage: 106
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: loss function
dc.subject.keywordAuthor: max-margin learning
dc.subject.keywordAuthor: probabilistic models
dc.subject.keywordAuthor: structured prediction
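The abstract above describes how bounding the normalization constant (the log partition function) of a probabilistic model recovers standard max-margin criteria. A minimal sketch of that connection, for a toy multi-class case: since max(s) ≤ logsumexp(s), replacing the loss-augmented log partition function with a max turns the negative log-likelihood into the structured hinge loss. The weights, feature maps, and task loss below are hypothetical toy values, not from the paper.

```python
import math

def logsumexp(xs):
    # Numerically stable log of the partition sum.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Toy 3-class problem with hypothetical weights and joint features phi(x, y).
w = [0.5, -0.2, 0.1]
feats = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y_true = 0
delta = [0.0 if y == y_true else 1.0 for y in range(3)]  # 0/1 task loss

# Scores s_y = w . phi(x, y), then loss-augmented scores s_y + Delta(y, y*).
s = [sum(wi * fi for wi, fi in zip(w, f)) for f in feats]
aug = [s[y] + delta[y] for y in range(3)]

# Probabilistic criterion: loss-augmented negative log-likelihood, log Z' - s_{y*}.
nll = logsumexp(aug) - s[y_true]
# Max-margin surrogate: bound logsumexp by max to get the multi-class hinge loss.
hinge = max(aug) - s[y_true]

# max <= logsumexp, so the hinge loss lower-bounds the probabilistic loss.
assert hinge <= nll
print(hinge, nll)
```

The point of the sketch is only the inequality in the last assertion: tightening or loosening the bound on the partition function is what moves between the probabilistic and max-margin objectives.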
Files in This Item
There are no files associated with this item.
Appears in Collections:
Graduate School > Department of Brain and Cognitive Engineering > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
