TY - GEN
T1 - Dynamic determinantal point processes
AU - Osogami, Takayuki
AU - Raymond, Rudy
AU - Shirai, Tomoyuki
AU - Goel, Akshay
AU - Maehara, Takanori
N1 - Funding Information:
T. O. and R. R. are supported by JST CREST Grant Number JPMJCR1304, Japan. A. G. is fully supported by JICA-Friendship Scholarship. T. S. is partially supported by JSPS Grant-in-Aid (26287019, 16H06338).
Publisher Copyright:
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - The determinantal point process (DPP) has been receiving increasing attention in machine learning as a generative model of subsets consisting of relevant and diverse items. Recently, there has been significant progress in developing efficient algorithms for learning the kernel matrix that characterizes a DPP. Here, we propose a dynamic DPP, which is a DPP whose kernel can change over time, and develop efficient learning algorithms for the dynamic DPP. In the dynamic DPP, the kernel depends on the subsets selected in the past, but we assume a particular structure in the dependency to allow efficient learning. We also assume that the kernel has a low rank and exploit a recently proposed learning algorithm for the DPP with low-rank factorization, but also show that its bottleneck computation can be reduced from O(M²K) time to O(MK²) time, where M is the number of items under consideration, and K is the rank of the kernel, which can be set smaller than M by orders of magnitude.
AB - The determinantal point process (DPP) has been receiving increasing attention in machine learning as a generative model of subsets consisting of relevant and diverse items. Recently, there has been significant progress in developing efficient algorithms for learning the kernel matrix that characterizes a DPP. Here, we propose a dynamic DPP, which is a DPP whose kernel can change over time, and develop efficient learning algorithms for the dynamic DPP. In the dynamic DPP, the kernel depends on the subsets selected in the past, but we assume a particular structure in the dependency to allow efficient learning. We also assume that the kernel has a low rank and exploit a recently proposed learning algorithm for the DPP with low-rank factorization, but also show that its bottleneck computation can be reduced from O(M²K) time to O(MK²) time, where M is the number of items under consideration, and K is the rank of the kernel, which can be set smaller than M by orders of magnitude.
UR - http://www.scopus.com/inward/record.url?scp=85060458456&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060458456&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85060458456
T3 - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
SP - 3868
EP - 3875
BT - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
PB - AAAI Press
T2 - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Y2 - 2 February 2018 through 7 February 2018
ER -