On using active learning and self-training when mining performance discussions on Stack Overflow
RISE - Research Institutes of Sweden, ICT, SICS.
Lund University, Sweden.
Lund University, Sweden.
Lund University, Sweden.
2017 (English). In: ACM International Conference Proceeding Series, 2017, pp. 308-313. Conference paper, published paper (refereed)
Abstract [en]

Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to the examples that bring the most value to a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences of using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing the performance of software components. We show that the training examples deemed most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.
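The loop the abstract describes, uncertainty-based querying combined with self-training on high-confidence predictions, can be sketched in a few lines of scikit-learn. This is an illustrative sketch only, not the authors' actual pipeline: the synthetic data stands in for vectorised Stack Overflow posts, and the seed size, number of rounds, query budget, and confidence rule are all assumptions.

    # Sketch of pool-based active learning + self-training with an SVM.
    # All data, sizes, and thresholds below are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Synthetic stand-in for TF-IDF-vectorised Stack Overflow posts.
    X, y_oracle = make_classification(n_samples=2000, n_features=50,
                                      random_state=0)

    y_train = np.full(len(X), -1)              # -1 = not yet labelled
    seed = rng.choice(len(X), size=20, replace=False)
    y_train[seed] = y_oracle[seed]             # small annotated seed set
    labelled = list(seed)
    pool = [i for i in range(len(X)) if y_train[i] == -1]

    QUERIES_PER_ROUND = 10   # posts shown to the human annotator per round
    for _ in range(5):
        clf = LinearSVC().fit(X[labelled], y_train[labelled])

        # Signed distance to the hyperplane; small |margin| = uncertain.
        margins = clf.decision_function(X[pool])
        order = np.argsort(np.abs(margins))

        # Active learning: query the least certain pool examples -- the
        # most valuable for the classifier, and the hardest to annotate.
        for j in order[:QUERIES_PER_ROUND]:
            idx = pool[j]
            y_train[idx] = y_oracle[idx]       # human supplies the label

        # Self-training: pseudo-label the most certain pool examples
        # with the classifier's own prediction.
        for j in order[-QUERIES_PER_ROUND:]:
            idx = pool[j]
            y_train[idx] = int(margins[j] > 0)

        labelled = [i for i in range(len(X)) if y_train[i] != -1]
        pool = [i for i in range(len(X)) if y_train[i] == -1]

Each round thus grows the training set in two ways: a fixed annotation budget spent on the examples the classifier is least sure about, and free pseudo-labels taken from the examples it is most sure about.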

Place, publisher, year, edition, pages
2017. pp. 308-313.
Keyword [en]
Active learning, Classification, Human annotation, Self-training, Text mining, Artificial intelligence, Classification (of information), Data mining, Software engineering, Text processing, Classification accuracy, Classification tasks, Human annotations, Inter-rater agreements, Mitigation strategy, Self training, Education
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-30875
DOI: 10.1145/3084226.3084273
Scopus ID: 2-s2.0-85025467713
ISBN: 9781450348041
OAI: oai:DiVA.org:ri-30875
DiVA: diva2:1139371
Conference
21st International Conference on Evaluation and Assessment in Software Engineering (EASE 2017), 15-16 June 2017
Available from: 2017-09-07. Created: 2017-09-07. Last updated: 2017-09-07. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
