### Abstract

Finding linear classifiers that maximize AUC scores is an important problem in ranking research. It is naturally formulated as a 1-norm hard/soft margin optimization problem over the pn pairs formed by p positive and n negative instances. However, solving these optimization problems directly is impractical, since the problem size (pn) grows quadratically with the given sample size (p+n). In this paper, we give (approximate) reductions from the problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem reduces to a hard margin optimization problem over the p+n instances in which the bias (constant) term is also optimized. Then, for the soft margin case, we show that the problem approximately reduces to a soft margin optimization problem over the p+n instances, for which the resulting linear classifier is guaranteed to achieve a certain margin over pairs.
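The key observation behind the hard-margin part of the abstract can be checked numerically: for a fixed linear scorer, the smallest score gap over all pn positive-negative pairs equals a difference of two extremes computable from only p+n instance scores. The sketch below is not the paper's algorithm — it is a minimal NumPy illustration of that identity, with all variable names and the toy data chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: p positive and n negative instances in d dimensions.
p, n, d = 50, 80, 5
X_pos = rng.normal(loc=+0.5, size=(p, d))
X_neg = rng.normal(loc=-0.5, size=(n, d))
w = rng.normal(size=d)  # an arbitrary linear scorer

s_pos = X_pos @ w  # p scores of positive instances
s_neg = X_neg @ w  # n scores of negative instances

# Pairwise formulation: the hard margin is the smallest score gap
# over all p*n positive-negative pairs -- a quadratic-size object.
gaps = s_pos[:, None] - s_neg[None, :]  # shape (p, n)
pairwise_margin = gaps.min()

# Linear-size view: min_{i,j} (s_pos[i] - s_neg[j]) = min(s_pos) - max(s_neg),
# so the pair margin is recovered from the p+n instance scores alone,
# which is what lets a bias term b placed between the two classes
# turn the pair problem into an instance problem.
reduced_margin = s_pos.min() - s_neg.max()
assert np.isclose(pairwise_margin, reduced_margin)

# The empirical AUC of w is the fraction of correctly ordered pairs.
auc = (gaps > 0).mean()
```

The `assert` confirms that enumerating the pn pairs is unnecessary for evaluating the hard pair margin; only the extreme scores of each class matter.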

Original language | English
---|---
Title of host publication | Algorithmic Learning Theory - 22nd International Conference, ALT 2011, Proceedings
Pages | 324-337
Number of pages | 14
DOIs | https://doi.org/10.1007/978-3-642-24412-4_26
Publication status | Published - Oct 20 2011
Event | 22nd International Conference on Algorithmic Learning Theory, ALT 2011 - Espoo, Finland. Duration: Oct 5 2011 → Oct 7 2011

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 6925 LNAI
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349

### Other

Other | 22nd International Conference on Algorithmic Learning Theory, ALT 2011
---|---
Country | Finland
City | Espoo
Period | 10/5/11 → 10/7/11


### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Computer Science (all)

### Cite this

Suehiro, D., Hatano, K., & Takimoto, E. (2011). Approximate reduction from AUC maximization to 1-norm soft margin optimization. In *Algorithmic Learning Theory - 22nd International Conference, ALT 2011, Proceedings* (pp. 324-337). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 6925 LNAI). https://doi.org/10.1007/978-3-642-24412-4_26

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


TY - GEN

T1 - Approximate reduction from AUC maximization to 1-norm soft margin optimization

AU - Suehiro, Daiki

AU - Hatano, Kohei

AU - Takimoto, Eiji

PY - 2011/10/20

Y1 - 2011/10/20

N2 - Finding linear classifiers that maximize AUC scores is important in ranking research. This is naturally formulated as a 1-norm hard/soft margin optimization problem over pn pairs of p positive and n negative instances. However, directly solving the optimization problems is impractical since the problem size (pn) is quadratically larger than the given sample size (p+n). In this paper, we give (approximate) reductions from the problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem is reduced to a hard margin optimization problem over p+n instances in which the bias constant term is to be optimized. Then, for the soft margin case, we show that the problem is approximately reduced to a soft margin optimization problem over p+n instances for which the resulting linear classifier is guaranteed to have a certain margin over pairs.


UR - http://www.scopus.com/inward/record.url?scp=80054091131&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80054091131&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-24412-4_26

DO - 10.1007/978-3-642-24412-4_26

M3 - Conference contribution

AN - SCOPUS:80054091131

SN - 9783642244117

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 324

EP - 337

BT - Algorithmic Learning Theory - 22nd International Conference, ALT 2011, Proceedings

ER -