### Abstract

We consider a natural convolution kernel defined by a directed graph. Each edge contributes an input. The inputs along a path form a product, and the products over all paths are summed. We also have a set of probabilities on the edges so that the outflow from each node is one. We then discuss multiplicative updates on these graphs, where the prediction is essentially a kernel computation and the update contributes a factor to each edge. After the update, the total outflow out of each node is no longer one. However, some clever algorithms re-normalize the weights on the paths so that the total outflow out of each node is one again. Finally, we discuss the use of regular expressions for speeding up the kernel and re-normalization computations. In particular, we rewrite the multiplicative algorithms that predict as well as the best pruning of a series-parallel graph in terms of efficient kernel computations.
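The two computations named in the abstract — the sum over all source-to-sink paths of the product of edge inputs, and the re-normalization that makes every node's outflow sum to one — can both be done by dynamic programming over a DAG. The sketch below is illustrative only (the graph representation and function names are assumptions, not the paper's notation): a forward pass computes the path kernel, and a backward pass re-normalizes arbitrary positive edge weights into outgoing probabilities while preserving the relative weight of every path.

```python
def path_kernel(edges, source, sink, order):
    """Sum over all source->sink paths of the product of edge inputs.

    edges: dict mapping (u, v) -> edge input x_e
    order: a topological order of the nodes
    """
    # total[v] = sum over all source->v paths of the product of edge inputs
    total = {v: 0.0 for v in order}
    total[source] = 1.0  # the empty path has product 1
    for u in order:
        for (a, b), x in edges.items():
            if a == u:
                total[b] += total[u] * x
    return total[sink]

def renormalize(edges, sink, order):
    """Turn positive edge weights into probabilities so that each
    non-sink node's outflow sums to one, keeping every path's
    probability proportional to its product of weights."""
    # Z[u] = sum over all u->sink paths of the product of edge weights
    Z = {v: 0.0 for v in order}
    Z[sink] = 1.0
    for u in reversed(order):
        for (a, b), w in edges.items():
            if a == u:
                Z[u] += w * Z[b]
    return {(a, b): w * Z[b] / Z[a] for (a, b), w in edges.items()}

# Diamond graph: source s, sink t, two parallel length-2 paths.
edges = {("s", "a"): 2.0, ("a", "t"): 3.0, ("s", "b"): 5.0, ("b", "t"): 7.0}
print(path_kernel(edges, "s", "t", ["s", "a", "b", "t"]))  # 2*3 + 5*7 = 41.0

probs = renormalize(edges, "t", ["s", "a", "b", "t"])
print(probs[("s", "a")] + probs[("s", "b")])  # outflow from s sums to 1.0
```

Both passes touch every edge once per node, so they run in time linear in the number of edges; the series-parallel and regular-expression speedups discussed in the paper exploit additional graph structure on top of this basic recurrence.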

| Original language | English |
|---|---|
| Title of host publication | Computational Learning Theory - 15th Annual Conference on Computational Learning Theory, COLT 2002, Proceedings |
| Editors | Jyrki Kivinen, Robert H. Sloan |
| Publisher | Springer Verlag |
| Pages | 74-89 |
| Number of pages | 16 |
| ISBN (Electronic) | 354043836X, 9783540438366 |
| Publication status | Published - Jan 1 2002 |
| Externally published | Yes |
| Event | 15th Annual Conference on Computational Learning Theory, COLT 2002 - Sydney, Australia. Duration: Jul 8 2002 → Jul 10 2002 |

### Publication series

| Name | Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) |
|---|---|
| Volume | 2375 |
| ISSN (Print) | 0302-9743 |

### Other

| Other | 15th Annual Conference on Computational Learning Theory, COLT 2002 |
|---|---|
| Country | Australia |
| City | Sydney |
| Period | 7/8/02 → 7/10/02 |

### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Computer Science (all)

### Cite this

Takimoto, E., & Warmuth, M. K. (2002). Path kernels and multiplicative updates. In J. Kivinen & R. H. Sloan (Eds.), *Computational Learning Theory - 15th Annual Conference on Computational Learning Theory, COLT 2002, Proceedings* (pp. 74-89). (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 2375). Springer Verlag.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
