Contingency training

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


When applied to high-dimensional datasets, feature selection algorithms may still leave dozens of irrelevant variables in the dataset. Therefore, even after feature selection has been applied, classifiers must be prepared for the presence of irrelevant variables. This paper investigates a new training method called contingency training, which increases accuracy as well as robustness against irrelevant attributes. Contingency training is classifier independent. By subsampling and removing information from each sample, it creates a set of constraints. These constraints help the method automatically find proper importance weights for the dataset's features. Experiments are conducted with contingency training applied to neural networks over traditional datasets as well as datasets with additional irrelevant variables. In all of the tests, contingency training surpassed unmodified training on datasets with irrelevant variables, and even slightly outperformed it when only a few or no irrelevant variables were present.
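The abstract only outlines the mechanism, so the following is a minimal, hypothetical sketch of the idea: each training sample is duplicated with a random subset of its features removed, and a per-feature importance weight is learned jointly with a simple classifier, so that features whose removal does not affect the label end up with low effective weight. The masking scheme, the logistic-regression model, and all names here are illustrative assumptions, not the paper's actual algorithm (which is applied to neural networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 informative features, 3 irrelevant noise features.
n, d = 400, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def masked_copies(X, y, n_copies=3, keep_prob=0.7, rng=rng):
    """Illustrative 'contingency' constraints: extra copies of the data
    with random features removed (zeroed), labels unchanged."""
    Xs, ys = [X], [y]
    for _ in range(n_copies):
        mask = rng.random(X.shape) < keep_prob
        Xs.append(X * mask)
        ys.append(y)
    return np.vstack(Xs), np.concatenate(ys)

Xa, ya = masked_copies(X, y)

# Logistic regression with explicit per-feature importance weights (imp),
# trained jointly with the classifier weights (w) by gradient descent.
imp = np.ones(d)
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(300):
    z = (Xa * imp) @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    err = p - ya
    grad_w = (Xa * imp).T @ err / len(ya)    # d loss / d w_j
    grad_imp = (Xa * w).T @ err / len(ya)    # d loss / d imp_j
    w -= lr * grad_w
    imp -= lr * grad_imp
    b -= lr * err.mean()

# Effective per-feature weight; the informative features 0 and 1
# should dominate the three noise features.
print(np.round(np.abs(imp * w), 2))
```

The sketch exposes the importance weights as explicit parameters only to make the idea visible; in the paper the method is classifier independent, so the same masked-copy construction could be applied to any trainable model.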

Original language: English
Title of host publication: SICE 2013: International Conference on Instrumentation, Control, Information Technology and System Integration - SICE Annual Conference 2013, Conference Proceedings
Number of pages: 6
Publication status: Published - 2013
Event: 2013 52nd Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2013 - Nagoya, Japan
Duration: Sept 14, 2013 to Sept 17, 2013


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications


