Theory and algorithm for learning with dissimilarity functions

Liwei Wang, Masashi Sugiyama, Cheng Yang, Kohei Hatano, Jufu Feng

Research output: Contribution to journal › Letter

14 Citations (Scopus)

Abstract

We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.
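The paradigm described in the abstract (an ensemble of simple pairwise base classifiers, combined into a convex combination by boosting) can be sketched roughly as follows. This is an illustrative sketch only: the base-classifier form (predict the label of the nearer example of an opposite-label pair under the dissimilarity `d`) and the plain AdaBoost loop are assumptions for illustration, not the paper's exact DBoost algorithm.

```python
import numpy as np

def pairwise_stump(d, xi, yi, xj):
    """Base classifier defined by an opposite-label example pair:
    predict yi if x is closer (under dissimilarity d) to xi than to xj."""
    def h(x):
        return yi if d(x, xi) <= d(x, xj) else -yi
    return h

def dboost_sketch(X, y, d, n_rounds=20):
    """AdaBoost over pairwise base classifiers (illustrative sketch).

    X: list of objects, y: labels in {-1, +1}, d: dissimilarity function.
    Returns a classifier formed as a convex combination of base classifiers.
    """
    y = np.asarray(y)
    n = len(X)
    w = np.full(n, 1.0 / n)              # example weights
    ensemble = []                         # (alpha, base classifier) pairs
    # candidate base classifiers: every opposite-label pair of examples
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] != y[j]]
    for _ in range(n_rounds):
        best = None
        for i, j in pairs:                # pick the lowest weighted error
            h = pairwise_stump(d, X[i], y[i], X[j])
            preds = np.array([h(x) for x in X])
            err = w[preds != y].sum()
            if best is None or err < best[0]:
                best = (err, h, preds)
        err, h, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, h))
        w *= np.exp(-alpha * y * preds)   # up-weight misclassified examples
        w /= w.sum()
    total = sum(a for a, _ in ensemble)
    def classify(x):
        # convex combination: normalized weights sum to one
        score = sum(a * h(x) for a, h in ensemble) / total
        return 1 if score >= 0 else -1
    return classify
```

Note that only the dissimilarity `d` is ever consulted; the objects in `X` need no vector representation, which is the setting the paper studies.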

Original language: English
Pages (from-to): 1459-1484
Number of pages: 26
Journal: Neural Computation
Volume: 21
Issue number: 5
DOI: 10.1162/neco.2008.08-06-805
Publication status: Published - May 1 2009


All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience

Cite this

Theory and algorithm for learning with dissimilarity functions. / Wang, Liwei; Sugiyama, Masashi; Yang, Cheng; Hatano, Kohei; Feng, Jufu.

In: Neural Computation, Vol. 21, No. 5, 01.05.2009, p. 1459-1484.

Research output: Contribution to journal › Letter

Wang, Liwei ; Sugiyama, Masashi ; Yang, Cheng ; Hatano, Kohei ; Feng, Jufu. / Theory and algorithm for learning with dissimilarity functions. In: Neural Computation. 2009 ; Vol. 21, No. 5. pp. 1459-1484.
@article{0d83dc8476ff4996b19b1cd51ab30ece,
title = "Theory and algorithm for learning with dissimilarity functions",
abstract = "We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.",
author = "Liwei Wang and Masashi Sugiyama and Cheng Yang and Kohei Hatano and Jufu Feng",
year = "2009",
month = "5",
day = "1",
doi = "10.1162/neco.2008.08-06-805",
language = "English",
volume = "21",
pages = "1459--1484",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "5",

}

TY - JOUR

T1 - Theory and algorithm for learning with dissimilarity functions

AU - Wang, Liwei

AU - Sugiyama, Masashi

AU - Yang, Cheng

AU - Hatano, Kohei

AU - Feng, Jufu

PY - 2009/5/1

Y1 - 2009/5/1

N2 - We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.

AB - We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.

UR - http://www.scopus.com/inward/record.url?scp=70349263820&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70349263820&partnerID=8YFLogxK

U2 - 10.1162/neco.2008.08-06-805

DO - 10.1162/neco.2008.08-06-805

M3 - Letter

C2 - 19718819

AN - SCOPUS:70349263820

VL - 21

SP - 1459

EP - 1484

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 5

ER -