Top-Rank Learning Robust to Outliers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Top-rank learning aims to maximize the number of absolute top samples, which are "doubtlessly positive" samples and thus useful for real applications that require reliable positive samples. However, top-rank learning is very sensitive to outliers in the negative class. This paper proposes a robust top-rank learning algorithm that uses an unsupervised outlier estimation technique called the local outlier factor (LoF). Introducing LoF weakens the effect of negative outliers and thus increases the stability of the learned ranking function. Moreover, we combine robust top-rank learning with representation learning by a deep neural network (DNN). Experiments on artificial datasets and a medical image dataset demonstrate the robustness of the proposed method to outliers.
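The core idea described in the abstract, down-weighting negative samples by their local outlier factor before they enter the ranking loss, can be illustrated with a minimal sketch. This is not the authors' exact formulation: it assumes scikit-learn's `LocalOutlierFactor` as the LoF estimator and a simple inverse-LoF weighting scheme chosen for illustration.

```python
# Minimal sketch (not the paper's exact method): estimate the local
# outlier factor (LoF) of each negative sample and convert it into a
# weight, so that outlying negatives contribute less to a top-rank
# (p-norm-push-style) loss.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
negatives = rng.normal(0.0, 1.0, size=(50, 2))
negatives[0] = [8.0, 8.0]  # plant one obvious outlier in the negative class

# scikit-learn stores the *negated* LoF in negative_outlier_factor_,
# so flipping the sign gives scores near 1 for inliers, > 1 for outliers.
lof = LocalOutlierFactor(n_neighbors=10)
lof.fit(negatives)
scores = -lof.negative_outlier_factor_

# Illustrative weighting: inliers keep weight near 1, outliers approach 0.
weights = 1.0 / scores
weights /= weights.max()

# The planted outlier receives the smallest weight, so it would have
# little influence on the learned ranking function.
print("most down-weighted negative:", weights.argmin())
```

In the paper these weights would modulate each negative sample's contribution to the top-rank objective; the inverse-LoF mapping above is just one plausible choice of weighting function.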

Original language: English
Title of host publication: Neural Information Processing - 28th International Conference, ICONIP 2021, Proceedings
Editors: Teddy Mantoro, Minho Lee, Media Anugerah Ayu, Kok Wai Wong, Achmad Nizar Hidayanto
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 608-619
Number of pages: 12
ISBN (Print): 9783030922375
Publication status: Published - 2021
Event: 28th International Conference on Neural Information Processing, ICONIP 2021 - Virtual, Online
Duration: Dec 8, 2021 - Dec 12, 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13110 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 28th International Conference on Neural Information Processing, ICONIP 2021
City: Virtual, Online
Period: 12/8/21 - 12/12/21

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

