Efficient distribution-free population learning of simple concepts

Atsuyoshi Nakamura, Naoki Abe, Junnichi Takeuchi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

We consider a variant of the 'population learning model' proposed by Kearns and Seung, in which the learner is required to be 'distribution-free' as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain a labeled sample for the target concept and to output a hypothesis. A polynomial-time population learner is said to 'PAC learn' a concept class if its hypothesis is probably approximately correct whenever the population size exceeds a certain polynomial bound, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the 'supremum hypothesis finder,' the 'minimum superset finder' (a special case of the 'supremum hypothesis finder'), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the 'high-low game,' conjunctions, axis-parallel rectangles, and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed-up over the ordinary PAC-learning model, with appropriate choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively a model of 'population prediction,' in which the learner is to predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.
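The 'minimum superset finder' strategy described above can be illustrated on a toy one-dimensional analogue of the axis-parallel rectangle case. The sketch below is not from the paper; the target interval, the per-agent sample size `K`, and the population size `N` are illustrative assumptions. Each agent sees a constant number of labeled random points and outputs the tightest interval around its positive examples; the population learner then outputs the smallest interval containing the union of all agents' intervals.

```python
import random

random.seed(0)
A, B = 0.3, 0.7  # hypothetical target interval (an illustrative assumption)
K = 3            # constant sample size per agent
N = 200          # population size

def agent_hypothesis():
    # An agent draws K uniform labeled points and outputs the tightest
    # interval containing its positive examples (None if it saw no positives).
    pts = [random.random() for _ in range(K)]
    pos = [x for x in pts if A <= x <= B]
    return (min(pos), max(pos)) if pos else None

# Minimum superset finder: the smallest interval containing the union
# of all agents' (non-empty) hypothesis intervals.
hyps = [h for h in (agent_hypothesis() for _ in range(N)) if h is not None]
lo = min(h[0] for h in hyps)
hi = max(h[1] for h in hyps)
```

Each agent's interval lies inside the target (agents only enclose their positive points), so the combined hypothesis never overshoots; as the population grows, the union fills the target interval even though every individual agent saw only `K` points, which is the intuition behind fixing the sample size at a constant and letting the population size do the work.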

Original language: English
Title of host publication: Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings
Publisher: Springer Verlag
Pages: 500-515
Number of pages: 16
Volume: 872 LNAI
ISBN (Print): 9783540585206
Publication status: Published - Jan 1 1994
Externally published: Yes
Event: 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994 - Reinhardsbrunn Castle, Germany
Duration: Oct 10 1994 - Oct 15 1994

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 872 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994
Germany
Reinhardsbrunn Castle
Period: 10/10/94 - 10/15/94

Fingerprint

Distribution-free
Population Size
Voting
Supremum
Polynomials
Sample Size
PAC Learning
Model
Concepts
Learning
Threshold Function
Target
Learning Strategies
Prediction
Output
Population Model
Rectangle
Prediction Model
Polynomial time
Exceed

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

Nakamura, A., Abe, N., & Takeuchi, J. (1994). Efficient distribution-free population learning of simple concepts. In Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings (Vol. 872 LNAI, pp. 500-515). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 872 LNAI). Springer Verlag.

Efficient distribution-free population learning of simple concepts. / Nakamura, Atsuyoshi; Abe, Naoki; Takeuchi, Junnichi.

Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings. Vol. 872 LNAI. Springer Verlag, 1994. p. 500-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 872 LNAI).


Nakamura, A, Abe, N & Takeuchi, J 1994, Efficient distribution-free population learning of simple concepts. in Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings. vol. 872 LNAI, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 872 LNAI, Springer Verlag, pp. 500-515, 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Reinhardsbrunn Castle, Germany, 10/10/94.
Nakamura A, Abe N, Takeuchi J. Efficient distribution-free population learning of simple concepts. In: Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings. Vol. 872 LNAI. Springer Verlag. 1994. p. 500-515. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
Nakamura, Atsuyoshi ; Abe, Naoki ; Takeuchi, Junnichi. / Efficient distribution-free population learning of simple concepts. Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings. Vol. 872 LNAI. Springer Verlag, 1994. pp. 500-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{30d7ea1a02724d88b0efb5ee733ab22d,
title = "Efficient distribution-free population learning of simple concepts",
abstract = "We consider a variant of the 'population learning model' proposed by Kearns and Seung, in which the learner is required to be 'distribution-free' as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain labeled sample for the target concept and outputs a hypothesis. A polynomial time population learner is said to 'PAC learn' a concept class, if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the 'supremum hypothesis finder,' the 'minimum superset finder' (a special case of the 'supremum hypothesis finder'), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the 'high-low game,' conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model, with appropriate choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively a model of 'population prediction,' in which the learner is to predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.",
author = "Atsuyoshi Nakamura and Naoki Abe and Junnichi Takeuchi",
year = "1994",
month = "1",
day = "1",
language = "English",
isbn = "9783540585206",
volume = "872 LNAI",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "500--515",
booktitle = "Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings",
address = "Germany",

}

TY - GEN

T1 - Efficient distribution-free population learning of simple concepts

AU - Nakamura, Atsuyoshi

AU - Abe, Naoki

AU - Takeuchi, Junnichi

PY - 1994/1/1

Y1 - 1994/1/1

N2 - We consider a variant of the 'population learning model' proposed by Kearns and Seung, in which the learner is required to be 'distribution-free' as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain labeled sample for the target concept and outputs a hypothesis. A polynomial time population learner is said to 'PAC learn' a concept class, if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the 'supremum hypothesis finder,' the 'minimum superset finder' (a special case of the 'supremum hypothesis finder'), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the 'high-low game,' conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model, with appropriate choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively a model of 'population prediction,' in which the learner is to predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.

AB - We consider a variant of the 'population learning model' proposed by Kearns and Seung, in which the learner is required to be 'distribution-free' as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain labeled sample for the target concept and outputs a hypothesis. A polynomial time population learner is said to 'PAC learn' a concept class, if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the 'supremum hypothesis finder,' the 'minimum superset finder' (a special case of the 'supremum hypothesis finder'), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the 'high-low game,' conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model, with appropriate choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively a model of 'population prediction,' in which the learner is to predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.

UR - http://www.scopus.com/inward/record.url?scp=0346926398&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0346926398&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0346926398

SN - 9783540585206

VL - 872 LNAI

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 500

EP - 515

BT - Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings

PB - Springer Verlag

ER -