Abstract
When applied to high-dimensional datasets, feature selection algorithms may still leave dozens of irrelevant variables in the dataset. Therefore, even after feature selection has been applied, classifiers must be prepared for the presence of irrelevant variables. This paper investigates a new training method called Contingency Training, which increases accuracy as well as robustness against irrelevant attributes. Contingency training is classifier-independent. By subsampling and removing information from each sample, it creates a set of constraints. These constraints help the method automatically find appropriate importance weights for the dataset's features. Experiments apply contingency training to neural networks on traditional datasets as well as datasets with additional irrelevant variables. In all tests, contingency training surpassed unmodified training on datasets with irrelevant variables and even outperformed it slightly when only a few or no irrelevant variables were present.
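The abstract only outlines the procedure, so the following is a minimal sketch of one plausible reading of it: each training sample is augmented with copies in which random features have been removed (zeroed out), and the classifier carries an explicit per-feature importance vector that is learned jointly with its other parameters. The function names, the masking scheme, and the simple logistic model are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_copies(X, y, n_copies=3, drop_prob=0.3):
    """Augment the data with copies in which random features are zeroed out.

    Each masked copy acts as a constraint: the model should still predict the
    same label after part of the sample's information has been removed.
    """
    Xs, ys = [X], [y]
    for _ in range(n_copies):
        keep = rng.random(X.shape) > drop_prob   # drop each feature with prob. drop_prob
        Xs.append(X * keep)
        ys.append(y)
    return np.vstack(Xs), np.concatenate(ys)

def train(X, y, epochs=500, lr=0.5):
    """Logistic regression with a learnable per-feature importance vector."""
    n, d = X.shape
    importance = np.ones(d)      # per-feature scaling, learned with the model
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = (X * importance) @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        err = (p - y) / n                        # gradient of the mean logistic loss
        grad_w = (X * importance).T @ err
        grad_imp = (X.T @ err) * w
        w -= lr * grad_w
        importance -= lr * grad_imp
        b -= lr * err.sum()
    return w, importance, b

# Toy data: only the first two of ten features carry signal, the rest are noise.
X = rng.normal(size=(400, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

X_aug, y_aug = masked_copies(X, y)
w, importance, b = train(X_aug, y_aug)
print("effective feature weights:", np.round(np.abs(w * importance), 2))
```

On this toy data the effective weights for the noise features typically stay close to zero, which is the kind of automatic down-weighting of irrelevant attributes the abstract attributes to contingency training; the actual method and classifier used in the paper may differ.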
Original language | English |
---|---|
Title of host publication | SICE 2013: International Conference on Instrumentation, Control, Information Technology and System Integration - SICE Annual Conference 2013, Conference Proceedings |
Pages | 1361-1366 |
Number of pages | 6 |
Publication status | Published - 2013 |
Event | 2013 52nd Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2013 - Nagoya, Japan. Duration: Sep 14 2013 → Sep 17 2013 |
Other
Other | 2013 52nd Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2013 |
---|---|
Country/Territory | Japan |
City | Nagoya |
Period | 9/14/13 → 9/17/13 |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications