Pitfalls for categorizations of objective interestingness measures for rule discovery

Research output: Chapter in Book/Report/Conference proceeding › Chapter

17 Citations (Scopus)

Abstract

In this paper, we point out four pitfalls for categorizations of objective interestingness measures for rule discovery. Rule discovery, which is extensively studied in data mining, suffers from the problem of outputting a huge number of rules. An objective interestingness measure can be used to estimate the potential usefulness of a discovered rule based on the given data set and thus may serve as a countermeasure to this problem. Various measures have been proposed, resulting in systematic attempts to categorize them. We believe that such attempts are subject to four kinds of pitfalls: data bias, rule bias, expert bias, and search bias. The main objective of this paper is to issue an alert about these pitfalls, which are harmful to one of the most important research topics in data mining. We also list desiderata for categorizing objective interestingness measures.

Original language: English
Title of host publication: Statistical Implicative Analysis
Subtitle of host publication: Theory and Applications
Editors: Régis Gras, Einoshin Suzuki, Fabrice Guillet, Filippo Spagnolo
Pages: 383-395
Number of pages: 13
DOI: 10.1007/978-3-540-78983-3_17
Publication status: Published - Jul 17 2008

Publication series

Name: Studies in Computational Intelligence
Volume: 127
ISSN (Print): 1860-949X


All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

Cite this

Suzuki, E. (2008). Pitfalls for categorizations of objective interestingness measures for rule discovery. In R. Gras, E. Suzuki, F. Guillet, & F. Spagnolo (Eds.), Statistical Implicative Analysis: Theory and Applications (pp. 383-395). (Studies in Computational Intelligence; Vol. 127). https://doi.org/10.1007/978-3-540-78983-3_17