
Building and Interpreting Deep Similarity Models

Full metadata record
dc.contributor.author: Eberle, Oliver
dc.contributor.author: Buettner, Jochen
dc.contributor.author: Kraeutli, Florian
dc.contributor.author: Mueller, Klaus-Robert
dc.contributor.author: Valleriani, Matteo
dc.contributor.author: Montavon, Gregoire
dc.date.accessioned: 2022-08-14T07:40:43Z
dc.date.available: 2022-08-14T07:40:43Z
dc.date.created: 2022-08-12
dc.date.issued: 2022-03-01
dc.identifier.issn: 0162-8828
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/143126
dc.description.abstract: Many learning algorithms such as kernel machines, nearest neighbors, clustering, or anomaly detection, are based on distances or similarities. Before similarities are used for training an actual machine learning model, we would like to verify that they are bound to meaningful patterns in the data. In this paper, we propose to make similarities interpretable by augmenting them with an explanation. We develop BiLRP, a scalable and theoretically founded method to systematically decompose the output of an already trained deep similarity model on pairs of input features. Our method can be expressed as a composition of LRP explanations, which were shown in previous works to scale to highly nonlinear models. Through an extensive set of experiments, we demonstrate that BiLRP robustly explains complex similarity models, e.g., built on VGG-16 deep neural network features. Additionally, we apply our method to an open problem in digital humanities: detailed assessment of similarity between historical documents, such as astronomical tables. Here again, BiLRP provides insight and brings verifiability into a highly engineered and problem-specific similarity model.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE COMPUTER SOC
dc.subject: KERNEL
dc.subject: REPRESENTATION
dc.subject: SUPPORT
dc.title: Building and Interpreting Deep Similarity Models
dc.type: Article
dc.contributor.affiliatedAuthor: Mueller, Klaus-Robert
dc.identifier.doi: 10.1109/TPAMI.2020.3020738
dc.identifier.wosid: 000752018000007
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.3, pp.1149-1161
dc.relation.isPartOf: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
dc.citation.title: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
dc.citation.volume: 44
dc.citation.number: 3
dc.citation.startPage: 1149
dc.citation.endPage: 1161
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordPlus: KERNEL
dc.subject.keywordPlus: REPRESENTATION
dc.subject.keywordPlus: SUPPORT
dc.subject.keywordAuthor: Machine learning
dc.subject.keywordAuthor: Data models
dc.subject.keywordAuthor: Robustness
dc.subject.keywordAuthor: Neural networks
dc.subject.keywordAuthor: Taylor series
dc.subject.keywordAuthor: Feature extraction
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Similarity
dc.subject.keywordAuthor: layer-wise relevance propagation
dc.subject.keywordAuthor: deep neural networks
dc.subject.keywordAuthor: explainable machine learning
dc.subject.keywordAuthor: digital humanities
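The abstract describes BiLRP as a composition of LRP explanations that decomposes the output of a deep similarity model onto pairs of input features. A minimal sketch of that idea, under illustrative assumptions not taken from the paper: a toy linear feature map `phi(x) = W @ x`, a dot-product similarity, and weight-times-input as the LRP rule (which is exact for a linear layer). The joint relevance matrix is obtained by contracting the two per-input relevance maps over the shared feature dimension, and it conserves the similarity score.

```python
import numpy as np

# Hypothetical toy setup, not the authors' code: similarity model
# s(x, x') = <phi(x), phi(x')> with a linear feature map phi(x) = W @ x.
rng = np.random.default_rng(0)
d, h = 5, 3                        # input dim, feature dim (illustrative)
W = rng.normal(size=(h, d))        # feature-map weights
x, xp = rng.normal(size=d), rng.normal(size=d)

similarity = (W @ x) @ (W @ xp)    # s(x, x')

# For a linear map, LRP reduces to weight * input: row m of R_x holds the
# relevance of each input dimension for feature unit m.
R_x = W * x                        # shape (h, d)
R_xp = W * xp                      # shape (h, d)

# BiLRP-style composition: contract the two LRP maps over the shared
# feature dimension m to get joint relevance on input pairs (i, j).
R = np.einsum('mi,mj->ij', R_x, R_xp)   # shape (d, d)

# Conservation: the joint relevances sum back to the similarity score.
print(np.isclose(R.sum(), similarity))
```

In this linear toy case the decomposition is exact; the paper's contribution is making the analogous second-order decomposition scale to highly nonlinear models such as VGG-16 features.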
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.