Why Does a Hilbertian Metric Work Efficiently in Online Learning With Kernels?
- Authors
- Yukawa, Masahiro; Müller, Klaus-Robert
- Issue Date
- October 2016
- Publisher
- IEEE (Institute of Electrical and Electronics Engineers)
- Keywords
- Kernel adaptive filter; online learning; reproducing kernel Hilbert space (RKHS)
- Citation
- IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1424-1428
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE Signal Processing Letters
- Volume
- 23
- Number
- 10
- Start Page
- 1424
- End Page
- 1428
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/87400
- DOI
- 10.1109/LSP.2016.2598615
- ISSN
- 1070-9908
- Abstract
- The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix scaled down by the dictionary size. This holds under the condition that the input covariance matrix in the feature space is approximated by its sample estimate over the dictionary elements, and it leads to two fundamental insights into online learning with kernels. First, the eigenvalue spread of the autocorrelation matrix relevant to the hyperplane projection along affine subspace (HYPASS) algorithm is approximately the square root of that for the kernel normalized least mean square (KNLMS) algorithm, which clarifies the mechanism behind the fast convergence afforded by a Hilbertian metric. Second, for efficient function estimation, the dictionary generally needs to be constructed with the distribution of the input vector taken into account, so that the above condition is satisfied. The theoretical results are validated by computer experiments.
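The abstract hinges on two quantitative relations: the approximation R ≈ K²/r between the autocorrelation matrix R of the kernelized input and the Gram matrix K of an r-element dictionary, and the resulting square-root reduction in eigenvalue spread for HYPASS relative to KNLMS. The Python sketch below is a minimal numerical illustration of both, assuming a Gaussian kernel with inputs and dictionary drawn from the same standard normal distribution; the kernel width, dimensions, and sample sizes are arbitrary choices for illustration, not taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian kernel Gram matrix; the kernel choice and width sigma are
# illustrative assumptions, not taken from the letter.
def gram(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

dim, r, n = 2, 20, 100_000         # input dim, dictionary size, sample count
D = rng.standard_normal((r, dim))  # dictionary drawn from the input distribution
X = rng.standard_normal((n, dim))  # inputs from the same distribution, so the
                                   # covariance condition holds approximately

K = gram(D, D)                     # r x r Gram matrix of the dictionary
Phi = gram(X, D)                   # rows are kernelized input vectors k(x)
R = Phi.T @ Phi / n                # sample autocorrelation matrix of k(x)

# The approximation from the abstract: R is close to the squared Gram
# matrix scaled down by the dictionary size, R ~ K^2 / r.
rel_err = np.linalg.norm(R - K @ K / r) / np.linalg.norm(R)
print(f"relative error of the K^2/r approximation: {rel_err:.3f}")

# Eigenvalue spreads: KNLMS is governed by R ~ K^2/r, while HYPASS
# (Hilbertian metric) is effectively governed by K, whose spread is
# roughly the square root of that of R. With a small random dictionary
# the match is loose, but the orders of magnitude should line up.
lam_R = np.linalg.eigvalsh(R)
lam_K = np.linalg.eigvalsh(K)
print("spread seen by KNLMS  (from R):", lam_R[-1] / lam_R[0])
print("spread seen by HYPASS (from K):", lam_K[-1] / lam_K[0])
print("sqrt of the KNLMS spread      :", np.sqrt(lam_R[-1] / lam_R[0]))
```

One sanity check on the algebra: if the input were drawn uniformly from the dictionary itself, the covariance condition would hold exactly and R = KK<sup>T</sup>/r = K²/r to machine precision, so the square-root relation between the two spreads would be exact as well.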
- Appears in Collections
- Graduate School > Department of Artificial Intelligence > 1. Journal Articles