Path kernels and multiplicative updates

Eiji Takimoto, Manfred K. Warmuth

Research output: Conference contribution (chapter in book/report/conference proceeding)

14 Citations (Scopus)

Abstract

We consider a natural convolution kernel defined by a directed graph. Each edge contributes an input. The inputs along a path form a product, and the products for all paths are summed. We also have a set of probabilities on the edges so that the outflow from each node is one. We then discuss multiplicative updates on these graphs, where the prediction is essentially a kernel computation and the update contributes a factor to each edge. The total outflow out of each node is then no longer one. However, some clever algorithms re-normalize the weights on the paths so that the total outflow out of each node is one again. Finally, we discuss the use of regular expressions for speeding up the kernel and re-normalization computations. In particular, we rewrite the multiplicative algorithms that predict as well as the best pruning of a series-parallel graph in terms of efficient kernel computations.
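The two ideas in the abstract can be sketched in a few lines: the path kernel of a DAG is computable by dynamic programming (sum over paths of the product of edge weights), and "weight pushing" re-normalizes edge weights so every node's outflow sums to one again. The following is a minimal illustrative sketch, not the paper's code; the dict-based graph representation and function names are assumptions.

```python
def path_values(edges, sink):
    """value(v) = sum over all v->sink paths of the product of edge weights.

    edges: dict mapping node -> list of (successor, weight) pairs (a DAG).
    """
    memo = {sink: 1.0}

    def value(v):
        if v not in memo:
            memo[v] = sum(w * value(u) for u, w in edges.get(v, []))
        return memo[v]

    for v in edges:
        value(v)
    return memo


def path_kernel(edges, source, sink):
    """The kernel: products of inputs along each source->sink path, summed."""
    return path_values(edges, sink)[source]


def push_weights(edges, sink):
    """Re-normalize via w'(v,u) = w(v,u) * value(u) / value(v), so the
    outflow of every node sums to one again; each path's new weight is its
    old product divided by value(source), preserving relative path weights."""
    val = path_values(edges, sink)
    return {v: [(u, w * val[u] / val[v]) for u, w in outs]
            for v, outs in edges.items()}
```

For the diamond graph s→a→t, s→b→t with edge inputs 2, 4, 3, 5, the kernel is 2·4 + 3·5 = 23, and after pushing, the two edges out of s carry weights 8/23 and 15/23, which sum to one.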

Original language: English
Title of host publication: Computational Learning Theory - 15th Annual Conference on Computational Learning Theory, COLT 2002, Proceedings
Editors: Jyrki Kivinen, Robert H. Sloan
Publisher: Springer Verlag
Pages: 74-89
Number of pages: 16
ISBN (Electronic): 354043836X, 9783540438366
Publication status: Published - Jan 1 2002
Externally published: Yes
Event: 15th Annual Conference on Computational Learning Theory, COLT 2002 - Sydney, Australia
Duration: Jul 8 2002 - Jul 10 2002

Publication series

Name: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Volume: 2375
ISSN (Print): 0302-9743

Other

Other: 15th Annual Conference on Computational Learning Theory, COLT 2002
Country: Australia
City: Sydney
Period: 7/8/02 - 7/10/02


All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Takimoto, E., & Warmuth, M. K. (2002). Path kernels and multiplicative updates. In J. Kivinen, & R. H. Sloan (Eds.), Computational Learning Theory - 15th Annual Conference on Computational Learning Theory, COLT 2002, Proceedings (pp. 74-89). (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 2375). Springer Verlag.


Scopus record: http://www.scopus.com/inward/record.url?scp=84937420693&partnerID=8YFLogxK
Scopus ID: SCOPUS:84937420693