Barron and Cover's theory in supervised learning and its application to lasso

Masanori Kawakita, Jun'ichi Takeuchi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

We study Barron and Cover's theory (BC theory) in supervised learning. The original BC theory can be applied to supervised learning only approximately and under limited conditions. Although Barron & Luo (2008) and Chatterjee & Barron (2014a) succeeded in removing the approximation, their idea cannot essentially be applied to supervised learning in general. By resolving this issue, we propose an extension of BC theory to supervised learning. The extended theory has several advantages inherited from the original BC theory. First, it holds for any finite sample size n. Second, it requires remarkably few assumptions. Third, it gives a justification of the MDL principle in supervised learning. As an application, we also derive new risk and regret bounds for lasso with random design. The derived risk bound holds for any finite n without boundedness of features, in contrast to past work. The behavior of the regret bound is investigated by numerical simulations. We believe that this is the first extension of BC theory to general supervised learning without approximation.
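For context (this sketch is not part of the record), the lasso estimator to which the derived risk and regret bounds apply, and the two-part code underlying the MDL principle, take the following standard forms; the notation (design matrix X, response y, penalty parameter λ, codelength function L) is assumed here for illustration rather than taken from the paper:

\[
\hat{\beta}_{\text{lasso}} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \frac{1}{n}\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1,
\qquad
\hat{\theta}_{\text{MDL}} = \operatorname*{arg\,min}_{\theta} \bigl\{ -\log p_\theta(y \mid x) + L(\theta) \bigr\},
\]

where L is a codelength function satisfying Kraft's inequality \(\sum_{\theta} 2^{-L(\theta)} \le 1\). BC-style results bound the statistical risk of such two-part MDL estimators, and the lasso fits this template because its ℓ1 penalty can be read as a codelength; the paper's specific bounds are not reproduced here.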

Original language: English
Title of host publication: 33rd International Conference on Machine Learning, ICML 2016
Publisher: International Machine Learning Society (IMLS)
Pages: 2896-2905
Number of pages: 10
Volume: 4
ISBN (Electronic): 9781510829008
Publication status: Published - 2016
Event: 33rd International Conference on Machine Learning, ICML 2016 - New York City, United States
Duration: Jun 19, 2016 – Jun 24, 2016

Other

Other: 33rd International Conference on Machine Learning, ICML 2016
Country: United States
City: New York City
Period: 6/19/16 – 6/24/16

Fingerprint

  • Supervised learning
  • Computer simulation

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Computer Networks and Communications

Cite this

Kawakita, M., & Takeuchi, J. (2016). Barron and Cover's theory in supervised learning and its application to lasso. In 33rd International Conference on Machine Learning, ICML 2016 (Vol. 4, pp. 2896-2905). International Machine Learning Society (IMLS).
