Dynamic determinantal point processes

Takayuki Osogami, Rudy Raymond, Tomoyuki Shirai, Akshay Goel, Takanori Maehara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The determinantal point process (DPP) has been receiving increasing attention in machine learning as a generative model of subsets consisting of relevant and diverse items. Recently, there has been significant progress in developing efficient algorithms for learning the kernel matrix that characterizes a DPP. Here, we propose the dynamic DPP, a DPP whose kernel can change over time, and develop efficient learning algorithms for it. In the dynamic DPP, the kernel depends on the subsets selected in the past, but we assume a particular structure in this dependency to allow efficient learning. We also assume that the kernel has low rank and exploit a recently proposed learning algorithm for the DPP with low-rank factorization; we further show that its bottleneck computation can be reduced from O(M²K) time to O(MK²) time, where M is the number of items under consideration and K is the rank of the kernel, which can be set smaller than M by orders of magnitude.
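
For illustration, the sketch below shows how a low-rank factorization L = VVᵀ, with V of size M×K, avoids the O(M²K) cost of forming the full M×M kernel when evaluating an L-ensemble DPP likelihood: by Sylvester's determinant identity, det(I_M + VVᵀ) = det(I_K + VᵀV), so the normalizer only requires the K×K Gram matrix VᵀV, which costs O(MK²) to form. This is a generic low-rank DPP likelihood computation, not the paper's dynamic-DPP learning algorithm, and the function and variable names are illustrative only.

import numpy as np

def dpp_log_likelihood_low_rank(V, subset):
    # P(A) = det(L_A) / det(L + I_M) for an L-ensemble DPP with L = V @ V.T.
    # Normalizer via Sylvester's identity: det(I_M + V V^T) = det(I_K + V^T V),
    # so the dominant cost is forming the K x K Gram matrix in O(M K^2) time
    # rather than the full M x M kernel in O(M^2 K) time.
    M, K = V.shape
    V_A = V[subset]                                          # |A| x K rows of the factor
    _, logdet_num = np.linalg.slogdet(V_A @ V_A.T)           # log det(L_A)
    _, logdet_den = np.linalg.slogdet(V.T @ V + np.eye(K))   # log det(L + I_M)
    return logdet_num - logdet_den

# Example: M = 1000 items, rank K = 10, score one observed subset.
rng = np.random.default_rng(0)
V = rng.standard_normal((1000, 10))
print(dpp_log_likelihood_low_rank(V, [3, 42, 917]))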

Original language: English
Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Publisher: AAAI Press
Pages: 3868-3875
Number of pages: 8
ISBN (Electronic): 9781577358008
Publication status: Published - Jan 1 2018
Event: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States
Duration: Feb 2 2018 - Feb 7 2018

Publication series

Name: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018

Conference

Conference: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Country: United States
City: New Orleans
Period: 2/2/18 - 2/7/18

Fingerprint

Learning algorithms
Factorization
Learning systems

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

Cite this

Osogami, T., Raymond, R., Shirai, T., Goel, A., & Maehara, T. (2018). Dynamic determinantal point processes. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 3868-3875). (32nd AAAI Conference on Artificial Intelligence, AAAI 2018). AAAI Press.
