Abstract
This paper reports the ongoing development of a large-scale Web crawler and search engine infrastructure at the National Institute of Information and Communications Technology. This infrastructure has the following characteristics: (1) It collects one billion Japanese Web pages while keeping them up-to-date. (2) It selects 100 million pages from among the collected pages and converts them into a standard data format that stores the results of morphological analysis, dependency parsing, and synonym augmentation. (3) The selected set of pages is searchable and accessible to users. (4) The scalability of the system is achieved by using a large-scale cluster machine for distributed data processing.
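The abstract does not specify the standard data format mentioned in item (2). As a purely hypothetical sketch, a per-page record bundling the text with its analysis layers might look like the following; the class and field names are illustrative assumptions, and the whitespace "analysis" is a placeholder for a real Japanese morphological analyzer and dependency parser.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AnnotatedPage:
    """Hypothetical standard record for one selected Web page."""
    url: str
    text: str
    morphemes: List[str] = field(default_factory=list)  # morphological analysis output
    dependencies: List[Dict[str, int]] = field(default_factory=list)  # head index per token
    synonyms: Dict[str, List[str]] = field(default_factory=dict)  # synonym augmentation

def to_record(url: str, text: str) -> AnnotatedPage:
    # Placeholder: split on whitespace and attach each token to the
    # previous one; a real pipeline would run proper NLP tools here.
    tokens = text.split()
    deps = [{"token": i, "head": max(i - 1, -1)} for i in range(len(tokens))]
    return AnnotatedPage(url=url, text=text, morphemes=tokens, dependencies=deps)

page = to_record("http://example.jp/", "example page text")
print(len(page.morphemes))  # 3
```

Storing all annotation layers in one record per page is one plausible design for making the selected 100 million pages searchable without re-running the analyses at query time.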
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 3rd International Universal Communication Symposium, IUCS 2009 |
| Pages | 126-131 |
| Number of pages | 6 |
| Publication status | Published - Dec 1 2009 |
| Event | 3rd International Universal Communication Symposium, IUCS 2009 - Tokyo, Japan (Dec 3 2009 → Dec 4 2009) |
Other

| Other | 3rd International Universal Communication Symposium, IUCS 2009 |
| --- | --- |
| Country | Japan |
| City | Tokyo |
| Period | 12/3/09 → 12/4/09 |
All Science Journal Classification (ASJC) codes
- Software
- Human-Computer Interaction
- Computer Vision and Pattern Recognition
- Computer Networks and Communications