TY - GEN
T1 - Automatically mining relevant variable interactions via sparse Bayesian learning
AU - Yafune, Ryoichiro
AU - Sakuma, Daisuke
AU - Takayanagi, Mirai
AU - Tabei, Yasuo
AU - Saito, Noritaka
AU - Saigo, Hiroto
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Number JP19H04176 (to HS). NS was supported by JSPS KAKENHI Grant Number JP18H01762. YT was supported by JST AIP-PRISM Grant Number JPMJCR18Y5.
Publisher Copyright:
© 2020 IEEE
PY - 2020
Y1 - 2020
N2 - With the rapid increase in the availability of large amounts of data, prediction has become increasingly popular and widespread in our daily lives. However, powerful nonlinear prediction methods such as deep learning and SVMs suffer from an interpretability problem, making them hard to use in domains where the reasons behind decisions are required. In this paper, we develop an interpretable nonlinear model called itemset Sparse Bayes (iSB), which builds a Bayesian probabilistic model while simultaneously considering variable interactions. To suppress the resulting large number of variables, sparsity is imposed on the regression weights by a sparsity-inducing prior. As a subroutine to search for variable interactions, an itemset enumeration algorithm with a novel bounding condition is employed. In computational experiments on real-world datasets, the proposed method outperformed decision trees by 10% in terms of r2. We also demonstrate the advantage of our method in a Bayesian optimization setting, in which the proposed approach successfully finds the maximum of an unknown function while maintaining transparency. In contrast to Bayesian optimization with Gaussian processes, iSB provides a clue to understanding which variable interactions are important in optimizing an unknown function.
AB - With the rapid increase in the availability of large amounts of data, prediction has become increasingly popular and widespread in our daily lives. However, powerful nonlinear prediction methods such as deep learning and SVMs suffer from an interpretability problem, making them hard to use in domains where the reasons behind decisions are required. In this paper, we develop an interpretable nonlinear model called itemset Sparse Bayes (iSB), which builds a Bayesian probabilistic model while simultaneously considering variable interactions. To suppress the resulting large number of variables, sparsity is imposed on the regression weights by a sparsity-inducing prior. As a subroutine to search for variable interactions, an itemset enumeration algorithm with a novel bounding condition is employed. In computational experiments on real-world datasets, the proposed method outperformed decision trees by 10% in terms of r2. We also demonstrate the advantage of our method in a Bayesian optimization setting, in which the proposed approach successfully finds the maximum of an unknown function while maintaining transparency. In contrast to Bayesian optimization with Gaussian processes, iSB provides a clue to understanding which variable interactions are important in optimizing an unknown function.
UR - http://www.scopus.com/inward/record.url?scp=85110558082&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85110558082&partnerID=8YFLogxK
U2 - 10.1109/ICPR48806.2021.9413236
DO - 10.1109/ICPR48806.2021.9413236
M3 - Conference contribution
AN - SCOPUS:85110558082
T3 - Proceedings - International Conference on Pattern Recognition
SP - 9635
EP - 9642
BT - Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Pattern Recognition, ICPR 2020
Y2 - 10 January 2021 through 15 January 2021
ER -