### Abstract

Algorithms for feasibly learning Boolean functions from examples are explored. The class of functions we deal with is written as F_{1} ∘ F_{2}^{k} = {g(f_{1}(v), ..., f_{k}(v)) | g ∈ F_{1}, f_{1}, ..., f_{k} ∈ F_{2}} for classes F_{1} and F_{2} given by somewhat "simple" descriptions. Letting Γ = {0,1}, we denote by F_{1} and F_{2} a class of functions from Γ^{k} to Γ and a class of functions from Γ^{n} to Γ, respectively. For example, let F_{OR} consist of the OR function of k variables, and let T_{n} be the class of all monomials of n variables. In the distribution-free setting, it is known that F_{OR} ∘ T_{n}^{k}, usually denoted k-term DNF, is not properly learnable unless RP = NP. In this paper, we first introduce a probability distribution, called a smooth distribution, which generalizes both the q-bounded distribution and the product distribution, and define learnability under this distribution. Then, we give an algorithm that properly learns F_{k} ∘ T_{n}^{k} under smooth distributions in polynomial time for constant k, where F_{k} is the class of all Boolean functions of k variables. The class F_{k} ∘ T_{n}^{k} is called the functions of k terms; although Blum and Singh showed that it can be learned using DNF as the hypothesis class, it remains open whether it is properly learnable in the distribution-free setting.
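For intuition, the composed class F_{k} ∘ T_{n}^{k} described above can be sketched in Python: a hypothesis applies an arbitrary Boolean function g of k variables to the outputs of k monomials over the n input bits. This is an illustrative toy under that reading of the definition, not the paper's learning algorithm; all names below are hypothetical.

```python
def make_term(positive, negative):
    """A monomial (term) over n Boolean variables: a conjunction of literals.
    `positive` / `negative` are index sets of un-negated / negated variables."""
    def term(v):
        return all(v[i] == 1 for i in positive) and all(v[i] == 0 for i in negative)
    return term

def compose(g, terms):
    """An element of F_k ∘ T_n^k: the map v ↦ g(f_1(v), ..., f_k(v))."""
    def h(v):
        return g(*(int(f(v)) for f in terms))
    return h

# Example with n = 4, k = 2, and g = XOR; note h is then not itself a
# 2-term DNF even though each f_i is a simple monomial.
f1 = make_term(positive={0}, negative={1})       # x0 AND NOT x1
f2 = make_term(positive={2, 3}, negative=set())  # x2 AND x3
h = compose(lambda a, b: a ^ b, (f1, f2))
# h((1, 0, 1, 1)) == 0, since both terms fire and XOR(1, 1) = 0.
```

Choosing g = OR instead of XOR recovers the k-term DNF case mentioned in the abstract.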

Original language | English
---|---
Title of host publication | Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
Publisher | Association for Computing Machinery, Inc
Pages | 206-213
Number of pages | 8
ISBN (Electronic) | 0897917235, 9780897917230
Publication status | Published - Jul 5 1995
Externally published | Yes
Event | 8th Annual Conference on Computational Learning Theory, COLT 1995 - Santa Cruz, United States. Duration: Jul 5 1995 → Jul 8 1995

### Publication series

Name | Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
---|---
Volume | 1995-January

### Other

Other | 8th Annual Conference on Computational Learning Theory, COLT 1995
---|---
Country | United States
City | Santa Cruz
Period | 7/5/95 → 7/8/95

### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Artificial Intelligence
- Software

### Cite this

*Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995* (pp. 206-213). (Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995; Vol. 1995-January). Association for Computing Machinery, Inc.

**Proper learning algorithm for functions of κ terms under smooth distributions.** / Sakai, Yoshifumi; Takimoto, Eiji; Maruoka, Akira.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

*Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995.* Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995, Vol. 1995-January, Association for Computing Machinery, Inc, pp. 206-213, 8th Annual Conference on Computational Learning Theory, COLT 1995, Santa Cruz, United States, 7/5/95.


TY - GEN

T1 - Proper learning algorithm for functions of κ terms under smooth distributions

AU - Sakai, Yoshifumi

AU - Takimoto, Eiji

AU - Maruoka, Akira

PY - 1995/7/5

Y1 - 1995/7/5

N2 - Algorithms for feasibly learning Boolean functions from examples are explored. The class of functions we deal with is written as F1 ∘ F2^k = {g(f1(v), ..., fk(v)) | g ∈ F1, f1, ..., fk ∈ F2} for classes F1 and F2 given by somewhat "simple" descriptions. Letting Γ = {0,1}, we denote by F1 and F2 a class of functions from Γ^k to Γ and a class of functions from Γ^n to Γ, respectively. For example, let F_OR consist of the OR function of k variables, and let T_n be the class of all monomials of n variables. In the distribution-free setting, it is known that F_OR ∘ T_n^k, usually denoted k-term DNF, is not properly learnable unless RP = NP. In this paper, we first introduce a probability distribution, called a smooth distribution, which generalizes both the q-bounded distribution and the product distribution, and define learnability under this distribution. Then, we give an algorithm that properly learns F_k ∘ T_n^k under smooth distributions in polynomial time for constant k, where F_k is the class of all Boolean functions of k variables. The class F_k ∘ T_n^k is called the functions of k terms; although Blum and Singh showed that it can be learned using DNF as the hypothesis class, it remains open whether it is properly learnable in the distribution-free setting.

AB - Algorithms for feasibly learning Boolean functions from examples are explored. The class of functions we deal with is written as F1 ∘ F2^k = {g(f1(v), ..., fk(v)) | g ∈ F1, f1, ..., fk ∈ F2} for classes F1 and F2 given by somewhat "simple" descriptions. Letting Γ = {0,1}, we denote by F1 and F2 a class of functions from Γ^k to Γ and a class of functions from Γ^n to Γ, respectively. For example, let F_OR consist of the OR function of k variables, and let T_n be the class of all monomials of n variables. In the distribution-free setting, it is known that F_OR ∘ T_n^k, usually denoted k-term DNF, is not properly learnable unless RP = NP. In this paper, we first introduce a probability distribution, called a smooth distribution, which generalizes both the q-bounded distribution and the product distribution, and define learnability under this distribution. Then, we give an algorithm that properly learns F_k ∘ T_n^k under smooth distributions in polynomial time for constant k, where F_k is the class of all Boolean functions of k variables. The class F_k ∘ T_n^k is called the functions of k terms; although Blum and Singh showed that it can be learned using DNF as the hypothesis class, it remains open whether it is properly learnable in the distribution-free setting.

UR - http://www.scopus.com/inward/record.url?scp=33646907847&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33646907847&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:33646907847

T3 - Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995

SP - 206

EP - 213

BT - Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995

PB - Association for Computing Machinery, Inc

ER -