TY - GEN
T1 - Spatial statistics with three-tier breadth first search for analyzing social geocontents
AU - Arakawa, Yutaka
AU - Tagashira, Shigeaki
AU - Fukuda, Akira
PY - 2011/9/29
Y1 - 2011/9/29
N2 - The objective of this paper is to clarify the relationship between users' contexts and the words they actually use, in order to realize a context-aware Japanese text input method editor. We propose two spatial analysis methods for finding location-dependent words in a large corpus of Japanese text with geographical information. In this paper, we analyze half a million tweets gathered by our system since Dec. 2009. First, we analyze the standard deviation of latitude and longitude, which indicates the level of geographical variation. This is a very simple method, but it cannot identify keywords that depend on several locations. For example, a famous department store chain with branches throughout Japan has a large standard deviation, even though each branch name depends on its own location. Therefore, we propose a three-tier breadth-first search, in which the search area is divided into a square mesh and we extract the cells that contain more tweets than the average of the parent area. We then subdivide the extracted cells into smaller ones and repeat the search. Our method can extract multiple locations for a single keyword.
AB - The objective of this paper is to clarify the relationship between users' contexts and the words they actually use, in order to realize a context-aware Japanese text input method editor. We propose two spatial analysis methods for finding location-dependent words in a large corpus of Japanese text with geographical information. In this paper, we analyze half a million tweets gathered by our system since Dec. 2009. First, we analyze the standard deviation of latitude and longitude, which indicates the level of geographical variation. This is a very simple method, but it cannot identify keywords that depend on several locations. For example, a famous department store chain with branches throughout Japan has a large standard deviation, even though each branch name depends on its own location. Therefore, we propose a three-tier breadth-first search, in which the search area is divided into a square mesh and we extract the cells that contain more tweets than the average of the parent area. We then subdivide the extracted cells into smaller ones and repeat the search. Our method can extract multiple locations for a single keyword.
UR - http://www.scopus.com/inward/record.url?scp=80053141631&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80053141631&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-23866-6_27
DO - 10.1007/978-3-642-23866-6_27
M3 - Conference contribution
AN - SCOPUS:80053141631
SN - 9783642238659
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 252
EP - 260
BT - Knowledge-Based and Intelligent Information and Engineering Systems - 15th International Conference, KES 2011, Proceedings
T2 - 15th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2011
Y2 - 12 September 2011 through 14 September 2011
ER -