Author List: Martens, David; Provost, Foster
MIS Quarterly, 2014, Volume 38, Issue 1, Pages 73-99.
Many document classification applications require human understanding of the reasons for data-driven classification decisions by managers, client-facing employees, and the technical team. Predictive models treat documents as data to be classified, and document data are characterized by very high dimensionality, often with tens of thousands to millions of variables (words). Unfortunately, due to the high dimensionality, understanding the decisions made by document classifiers is very difficult. This paper begins by extending the most relevant prior theoretical model of explanations for intelligent systems to account for some missing elements. The main theoretical contribution is the definition of a new sort of explanation as a minimal set of words (terms, generally), such that removing all words within this set from the document changes the predicted class from the class of interest. We present an algorithm to find such explanations, as well as a framework to assess such an algorithm’s performance. We demonstrate the value of the new approach with a case study from a real-world document classification task: classifying web pages as containing objectionable content, with the goal of allowing advertisers to choose not to have their ads appear on those pages. A second empirical demonstration on news-story topic classification shows the explanations to be concise and document-specific, and to be capable of providing understanding of the exact reasons for the classification decisions, of the workings of the classification models, and of the business application itself. We also illustrate how explaining the classifications of documents can help to improve data quality and model performance.
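The core idea above, a minimal set of words whose removal flips the predicted class, can be illustrated with a small greedy sketch. This is not the authors' exact algorithm; it assumes a toy linear classifier whose score is the sum of (hypothetical) word weights, with a positive score meaning the class of interest.

```python
def predict(words, weights):
    """Toy linear classifier: positive total weight => class of interest."""
    return sum(weights.get(w, 0.0) for w in words) > 0.0

def explain(words, weights):
    """Greedily remove the words that most support the predicted class
    until the prediction flips; the removed set is the explanation."""
    remaining = list(words)
    removed = []
    while predict(remaining, weights):
        # candidate words that push the score toward the class of interest
        candidates = [w for w in set(remaining) if weights.get(w, 0.0) > 0]
        if not candidates:
            return None  # prediction cannot be flipped by word removal alone
        best = max(candidates, key=lambda w: weights[w])
        remaining = [w for w in remaining if w != best]
        removed.append(best)
    return removed

# Hypothetical weights for an "objectionable content" classifier
weights = {"gambling": 2.0, "casino": 1.5, "news": -1.0, "weather": -0.5}
doc = ["casino", "gambling", "news", "weather"]
print(explain(doc, weights))  # -> ['gambling']
```

Because the search removes the strongest supporting word first, the resulting explanation tends to be short and specific to the document, which matches the paper's emphasis on concise, document-specific explanations. A full treatment would use a real trained classifier and a search with optimality guarantees rather than this one-pass greedy heuristic.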
Keywords: Document classification; instance-level explanation; text mining; comprehensibility

List of Topics

#215 0.195 data classification statistical regression mining models neural methods using analysis techniques performance predictive networks accuracy method variables prediction problem measure
#183 0.154 explanations explanation bias use kbs biases facilities cognitive making judgment decisions likely decision important prior judgments feedback types difficult lead
#299 0.142 office document documents retrieval automation word concept clustering text based automated created individual functions major approach operations prototype identify report
#110 0.140 theory theories theoretical paper new understanding work practical explain empirical contribution phenomenon literature second implications different building based insights need
#97 0.100 set approach algorithm optimal used develop results use simulation experiments algorithms demonstrate proposed optimization present analytical distribution selection number existing
#74 0.062 high low level levels increase associated related characterized terms study focus weak hand choose general lower best predicted conditions implications
#17 0.054 empirical model relationships causal framework theoretical construct results models terms paper relationship based argue proposed literature issues assumptions provide suggest