### Abstract

We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.
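The paradigm described in the abstract can be illustrated with a minimal sketch: weak classifiers are built from pairs of training examples (here, one assumed form: predict the class of whichever example in the pair the query is less dissimilar to), and a boosting loop finds a weighted combination of them. The function names, the weak-classifier form, and the use of AdaBoost-style weighting below are illustrative assumptions, not the paper's actual DBoost algorithm.

```python
import numpy as np

def pair_classifier(d, a, b):
    # Weak classifier defined by a pair of examples (a, b):
    # predict +1 if x is less dissimilar to a than to b, else -1.
    return lambda x: 1 if d(x, a) < d(x, b) else -1

def boost(X, y, d, pairs, T=50):
    # AdaBoost-style loop over pair-based weak classifiers.
    # Normalizing the final weights yields a convex combination.
    y = np.asarray(y)
    n = len(X)
    w = np.full(n, 1.0 / n)          # example weights
    ensemble = []
    for _ in range(T):
        # Pick the weak classifier with the smallest weighted error.
        best = None
        for (a, b) in pairs:
            h = pair_classifier(d, a, b)
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if best is None or err < best[0]:
                best = (err, h)
        err, h = best
        err = min(max(err, 1e-12), 1 - 1e-12)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        preds = np.array([h(xi) for xi in X])
        w *= np.exp(-alpha * y * preds)         # reweight examples
        w /= w.sum()
        ensemble.append((alpha, h))
    total = sum(a for a, _ in ensemble)
    # Convex combination of weak classifiers, thresholded at zero.
    return lambda x: np.sign(sum(a * h(x) for a, h in ensemble) / total)
```

Note that only the dissimilarity function `d` is consulted; the examples themselves need not live in a vector space, which is the setting the paper studies.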

| Original language | English |
| --- | --- |
| Pages (from-to) | 1459-1484 |
| Number of pages | 26 |
| Journal | Neural Computation |
| Volume | 21 |
| Issue number | 5 |
| DOI | https://doi.org/10.1162/neco.2008.08-06-805 |
| Publication status | Published - May 1 2009 |

### All Science Journal Classification (ASJC) codes

- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience

### Cite this

**Theory and algorithm for learning with dissimilarity functions.** / Wang, Liwei; Sugiyama, Masashi; Yang, Cheng; Hatano, Kohei; Feng, Jufu.

Research output: Contribution to journal › Letter

In: *Neural Computation*, Vol. 21, No. 5, 2009, pp. 1459-1484. https://doi.org/10.1162/neco.2008.08-06-805

TY - JOUR

T1 - Theory and algorithm for learning with dissimilarity functions

AU - Wang, Liwei

AU - Sugiyama, Masashi

AU - Yang, Cheng

AU - Hatano, Kohei

AU - Feng, Jufu

PY - 2009/5/1

Y1 - 2009/5/1

N2 - We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.

AB - We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.

UR - http://www.scopus.com/inward/record.url?scp=70349263820&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70349263820&partnerID=8YFLogxK

U2 - 10.1162/neco.2008.08-06-805

DO - 10.1162/neco.2008.08-06-805

M3 - Letter

C2 - 19718819

AN - SCOPUS:70349263820

VL - 21

SP - 1459

EP - 1484

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 5

ER -