Learning Curves for Automating Content Analysis: How Much Human Annotation is Needed?

Emi Ishita, Douglas W. Oard, Kenneth R. Fleischmann, Yoichi Tomiura, Yasuhiro Takayama, An Shou Cheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we explore the potential for reducing human effort when coding text segments for use in content analysis. The key idea is to do some coding by hand, to use the results of that initial effort as training data, and then to code the remainder of the content automatically. The test collection includes 102 written prepared statements about Net neutrality from public hearings held by the U.S. Congress and the U.S. Federal Communications Commission (FCC). Six categories were used in this analysis: wealth, social order, justice, freedom, innovation, and honor. A support vector machine (SVM) classifier and a Naïve Bayes (NB) classifier were trained on manually annotated sentences from between one and 51 documents and tested on a held-out set of 51 documents. The results show that the inflection point for a standard measure of classifier accuracy (F1) occurs early: with only 30 training documents, the SVM classifier reaches at least 85% of its best achievable result, and the NB classifier at least 88% of its best achievable result. With the exception of honor, the results suggest that machine classification could reasonably be scaled up to larger collections of similar documents without additional human annotation effort.
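The workflow the abstract describes can be sketched in a few lines. This is not the authors' code: the sentences below are invented stand-ins for the Net neutrality statements, and the choice of scikit-learn's TfidfVectorizer, LinearSVC, and MultinomialNB is an assumption about one reasonable instantiation of an SVM and NB text classifier evaluated with F1 on a held-out set.

```python
# Illustrative sketch (not the paper's implementation): train an SVM and a
# Naive Bayes classifier on hand-coded sentences, then score F1 on held-out
# sentences. Labels are binary: 1 = sentence invokes the "freedom" value.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score

# Toy stand-in sentences for the hand-annotated training documents.
train_texts = [
    "rules protect freedom of expression online",
    "an open internet preserves freedom for users",
    "freedom to choose services must be guaranteed",
    "free speech depends on open networks",
    "providers seek new revenue and profit streams",
    "investment and profit drive network expansion",
    "market revenue funds broadband infrastructure",
    "carriers maximize profit from tiered pricing",
]
train_labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Toy stand-ins for the held-out evaluation documents.
test_texts = [
    "an open internet secures freedom of speech",
    "tiered pricing increases carrier profit",
]
test_labels = [1, 0]

# Bag-of-words features, a standard choice for sentence classification.
vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)
X_test = vec.transform(test_texts)

scores = {}
for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB())]:
    clf.fit(X_train, train_labels)
    scores[name] = f1_score(test_labels, clf.predict(X_test))
print(scores)
```

To trace the learning curve reported in the paper, one would repeat this fit-and-score loop over training sets of growing size (e.g., 1 to 51 documents) and plot F1 against the number of annotated documents.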

Original language: English
Title of host publication: Proceedings - 2015 IIAI 4th International Congress on Advanced Applied Informatics, IIAI-AAI 2015
Editors: Sachio Hirokawa, Kiyota Hashimoto, Tokuro Matsuo, Tsunenori Mine
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 171-176
Number of pages: 6
ISBN (Electronic): 9781479999583
DOIs: https://doi.org/10.1109/IIAI-AAI.2015.295
Publication status: Published - Jan 6, 2016
Event: 4th IIAI International Congress on Advanced Applied Informatics, IIAI-AAI 2015 - Okayama, Japan
Duration: Jul 12, 2015 - Jul 16, 2015

Publication series

Name: Proceedings - 2015 IIAI 4th International Congress on Advanced Applied Informatics, IIAI-AAI 2015

Other

Other: 4th IIAI International Congress on Advanced Applied Informatics, IIAI-AAI 2015
Country: Japan
City: Okayama
Period: 7/12/15 - 7/16/15

Fingerprint

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

Ishita, E., Oard, D. W., Fleischmann, K. R., Tomiura, Y., Takayama, Y., & Cheng, A. S. (2016). Learning Curves for Automating Content Analysis: How Much Human Annotation is Needed? In S. Hirokawa, K. Hashimoto, T. Matsuo, & T. Mine (Eds.), Proceedings - 2015 IIAI 4th International Congress on Advanced Applied Informatics, IIAI-AAI 2015 (pp. 171-176). [7373896] (Proceedings - 2015 IIAI 4th International Congress on Advanced Applied Informatics, IIAI-AAI 2015). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IIAI-AAI.2015.295