AIC for the non-concave penalized likelihood method

Yuta Umezu, Yusuke Shimizu, Hiroki Masuda, Yoshiyuki Ninomiya

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Non-concave penalized maximum likelihood methods are widely used because they are more efficient than the Lasso. They include a tuning parameter that controls the penalty level, and several information criteria have been developed for selecting it. While these criteria ensure model selection consistency, there is no clear rule for choosing one from the class of criteria that share this desirable asymptotic property. In this paper, we derive an information criterion based on the original definition of the AIC by considering minimization of the prediction error rather than model selection consistency. Concretely, we derive a function of the score statistic that is asymptotically equivalent to the non-concave penalized maximum likelihood estimator, and, based on this function, we provide an estimator of the Kullback–Leibler divergence between the true distribution and the estimated distribution whose bias converges in mean to zero.
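As a rough, hypothetical illustration of this kind of tuning-parameter selection (not the criterion derived in the paper), the sketch below fits a SCAD-penalized Gaussian linear model over a grid of penalty levels and scores each fit with the classical AIC formula, minus twice the log-likelihood plus twice the number of active coefficients. The SCAD penalty, the generic optimizer, and the active-set degrees-of-freedom proxy are all assumptions made for this example.

```python
# Hypothetical sketch: AIC-style choice of the penalty level for a
# SCAD-penalized Gaussian linear model (NOT the criterion of the paper).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001), summed over coefficients."""
    b = np.abs(beta)
    small = lam * b
    mid = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    large = (a + 1) * lam**2 / 2 * np.ones_like(b)
    return np.sum(np.where(b <= lam, small, np.where(b <= a * lam, mid, large)))

def neg_loglik(beta):
    """Gaussian negative log-likelihood with unit variance (up to a constant)."""
    r = y - X @ beta
    return 0.5 * np.sum(r**2)

def fit(lam):
    """Penalized fit via a generic optimizer; exact zeros are not guaranteed."""
    obj = lambda b: neg_loglik(b) + n * scad_penalty(b, lam)
    return minimize(obj, np.zeros(p), method="Powell").x

best = None
for lam in np.logspace(-3, 0, 20):
    beta_hat = fit(lam)
    df = np.sum(np.abs(beta_hat) > 1e-3)      # crude active-set proxy for df
    aic = 2 * neg_loglik(beta_hat) + 2 * df   # classical AIC form
    if best is None or aic < best[0]:
        best = (aic, lam, beta_hat)

print("selected lambda:", best[1])
print("estimated coefficients:", np.round(best[2], 2))
```

The active-set count used here is only a convenient stand-in for the effective degrees of freedom; the paper instead constructs a bias-corrected estimator of the Kullback–Leibler divergence from a function of the score statistic.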

Original language: English
Pages (from-to): 247-274
Number of pages: 28
Journal: Annals of the Institute of Statistical Mathematics
Volume: 71
Issue number: 2
DOIs
Publication status: Published - Apr 1 2019

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
