Efficient distribution-free population learning of simple concepts

Atsuyoshi Nakamura, Naoki Abe, Jun Ichi Takeuchi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We consider a variant of the 'population learning model' proposed by Kearns and Seung, in which the learner is required to be 'distribution-free' as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain a labeled sample for the target concept and to output a hypothesis. A polynomial-time population learner is said to 'PAC learn' a concept class if its hypothesis is probably approximately correct whenever the population size exceeds a certain polynomial bound, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the 'supremum hypothesis finder,' the 'minimum superset finder' (a special case of the 'supremum hypothesis finder'), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the 'high-low game,' conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed-up over the ordinary PAC-learning model, with appropriate choices of sample and population sizes. When the population learner is restricted to be a voting scheme, what we have is effectively a model of 'population prediction,' in which the learner must predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.
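The abstract only names these combination strategies; as a concrete illustration, the following is a minimal Python sketch (not the authors' code) of how a 'minimum superset finder' and a threshold-voting population predictor might be instantiated for axis-parallel rectangles, assuming each agent returns the tightest rectangle around its positive examples. All function names, the sampling setup, and the threshold value are illustrative assumptions.

import random

def agent_hypothesis(target, m, domain=(-1.0, 1.0)):
    # Each agent draws m labeled examples of the target rectangle and returns
    # the tightest axis-parallel rectangle around its positive examples
    # (None if it happened to see no positives).
    x1, y1, x2, y2 = target
    positives = []
    for _ in range(m):
        x = random.uniform(*domain)
        y = random.uniform(*domain)
        if x1 <= x <= x2 and y1 <= y <= y2:
            positives.append((x, y))
    if not positives:
        return None
    xs = [p[0] for p in positives]
    ys = [p[1] for p in positives]
    return (min(xs), min(ys), max(xs), max(ys))

def minimum_superset(hypotheses):
    # 'Minimum superset finder' style combination: the smallest axis-parallel
    # rectangle containing every agent's rectangle (the bounding box of the
    # non-empty hypotheses).  Since each agent's rectangle lies inside the
    # target, so does the combined hypothesis.
    rects = [h for h in hypotheses if h is not None]
    if not rects:
        return None
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

def threshold_vote(hypotheses, point, theta):
    # 'Population prediction' style combination: predict positive at a point
    # iff at least a theta fraction of the agents' hypotheses contain it.
    px, py = point
    votes = sum(1 for h in hypotheses
                if h is not None and h[0] <= px <= h[2] and h[1] <= py <= h[3])
    return votes >= theta * len(hypotheses)

# Large population, constant-size sample per agent.
target = (-0.5, -0.5, 0.5, 0.5)
population = [agent_hypothesis(target, m=5) for _ in range(1000)]
print(minimum_superset(population))                         # close to the target rectangle
print(threshold_vote(population, (0.0, 0.0), theta=0.05))   # True with high probability (inside)
print(threshold_vote(population, (0.9, 0.9), theta=0.05))   # False (outside the target)

Because tightest-fit agent hypotheses in this sketch never produce false positives, a small voting threshold suffices; this mirrors the abstract's point that a sufficiently large population can compensate for a constant sample size per agent.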

Original language: English
Title of host publication: Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings
Editors: Setsuo Arikawa, Klaus P. Jantke
Publisher: Springer Verlag
Pages: 500-515
Number of pages: 16
ISBN (Print): 9783540585206
Publication status: Published - Jan 1 1994
Externally published: Yes
Event: 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994 - Reinhardsbrunn Castle, Germany
Duration: Oct 10 1994 - Oct 15 1994

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 872 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994
Country: Germany
City: Reinhardsbrunn Castle
Period: 10/10/94 - 10/15/94


All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Nakamura, A., Abe, N., & Takeuchi, J. I. (1994). Efficient distribution-free population learning of simple concepts. In S. Arikawa, & K. P. Jantke (Eds.), Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings (pp. 500-515). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 872 LNAI). Springer Verlag.