We propose a hybrid context-based topic model with an adaptive context window length for word sense disambiguation in document representation. Document representation is an essential part of various document-based tasks, and word sense disambiguation aims to capture the distinctions between word senses in the representation. Traditional methods rely mainly on knowledge bases for data enrichment; however, the division of a word's senses may vary across domain-specific datasets. We aim to discover finer-grained word semantic differences, such as different entities or standpoints, and to handle the disambiguation problem without data enrichment. This disambiguation task poses two challenges: (1) dividing the various senses of each polysemous word, and (2) preserving the differences between synonyms. Most existing models are based either on separate context clusters or on an auxiliary module that specifies word senses. They can hardly achieve both (1) and (2), since the different senses of a word are assumed to be independent and their intrinsic relationships are ignored. To solve this problem, we introduce the “Bag-of-Senses” (BoS) assumption: a document is a multiset of word senses, and the senses, rather than the words, are generated. The sense of each word occurrence is estimated from both the context in which it occurs and the contexts of the word's other occurrences. In addition, to handle the varying scope of sense-related context around each word occurrence, we introduce a variable that adjusts the context window length adaptively. Experiments on three standard datasets show that our method outperforms state-of-the-art methods in word sense estimation, topic modeling, and document classification.
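The BoS idea of pooling evidence across a word's occurrences can be illustrated with a minimal, purely hypothetical sketch. The sense inventory, cue words, and `disambiguate` helper below are invented for illustration only; the paper's actual model is a probabilistic topic model with a learned adaptive window, not this overlap heuristic.

```python
# Toy sketch of the "Bag-of-Senses" (BoS) view: a document is a multiset of
# word senses, and the sense of each occurrence is estimated from its own
# context window plus the contexts of the word's other occurrences.
# All names and data here are hypothetical, not from the paper.

SENSES = {
    "bank": ["bank#finance", "bank#river"],
}

# hypothetical cue words associated with each candidate sense
CUES = {
    "bank#finance": {"money", "loan", "deposit"},
    "bank#river": {"water", "shore", "fish"},
}

def disambiguate(word, local_context, other_contexts, window=2):
    """Choose a sense for one occurrence of `word`.

    Evidence pools this occurrence's context window with the windows of the
    word's other occurrences (the BoS assumption); `window` stands in for
    the adaptively adjusted context window length.
    """
    evidence = set(local_context[:window])
    for ctx in other_contexts:
        evidence |= set(ctx[:window])
    # score each candidate sense by cue-word overlap with pooled evidence
    return max(SENSES[word], key=lambda s: len(CUES[s] & evidence))
```

For example, `disambiguate("bank", ["money", "transfer"], [["loan", "rate"]])` resolves to the finance sense because the pooled evidence overlaps its cue words, even though the local window alone contains only one cue.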