Author List: Elkins, Aaron C.; Dunbar, Norah E.; Adame, Bradley; Nunamaker, Jay F., Jr.
Journal of Management Information Systems, 2013, Volume 29, Issue 4, Pages 249-262.
Despite the improving accuracy of agent-based expert systems, human expert users aided by these systems have not improved their accuracy. Self-affirmation theory suggests that human expert users could be experiencing threat, causing them to act defensively and ignore the system's conflicting recommendations. Previous research has demonstrated that affirming an individual in an unrelated area reduces defensiveness and increases objectivity toward conflicting information. Using an affirmation manipulation prior to a credibility assessment task, this study investigated whether experts are threatened by counterattitudinal expert system recommendations. In our study, 178 credibility assessment experts from the American Polygraph Association (n = 134) and the European Union's border security agency Frontex (n = 44) interacted with a deception detection expert system to make a deception judgment that was immediately contradicted. Reducing the threat prior to making their judgments did not improve accuracy, but it did improve objectivity toward the system. This study demonstrates that human experts are threatened by advanced expert systems that contradict their expertise. As more systems integrate artificial intelligence and inadvertently assail the expertise and abilities of their users, threat and self-evaluative concerns will become an impediment to technology acceptance.
Keywords: credibility assessment systems; deception detection; expert systems; user anxiety
Algorithm:

List of Topics

Topic   Weight   Top terms
#7      0.337    detection deception assessment credibility automated fraud fake cues detecting results screening study detect design indicators science important theory performance improved
#129    0.289    expert systems knowledge knowledge-based human intelligent experts paper problem acquisition base used expertise intelligence domain inductive rules machine artificial task
#189    0.119    recommendations recommender systems preferences recommendation rating ratings preference improve users frame contextual using frames sensemaking filtering manipulation specific collaborative items
#209    0.078    results study research information studies relationship size variables previous variable examining dependent increases empirical variance accounting independent demonstrate important addition
#284    0.057    users user new resistance likely benefits potential perspective status actual behavior recognition propose user's social associated existing base using acceptance
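
The entries above read as a document-topic profile from a probabilistic topic model: each row lists a topic ID, that topic's estimated weight in this article, and the topic's highest-probability terms. The record does not name the model or toolkit that produced it, so the following is only a minimal sketch, assuming an LDA-style model fit with scikit-learn on a toy corpus, of how such a profile is typically generated; the corpus, the 10-topic setting, and the preprocessing are illustrative assumptions (the real model evidently used far more topics, given IDs up to #284).

```python
# Hypothetical sketch: producing a document-topic profile like the listing above.
# Everything here (corpus, topic count, preprocessing) is an assumption, not the
# record's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: the abstract above plus placeholder documents. The real model
# evidently used a large abstract corpus and roughly 300 topics.
corpus = [
    "Despite the improving accuracy of agent-based expert systems, human "
    "expert users aided by these systems have not improved their accuracy.",
    "Expert systems encode domain knowledge acquired from human experts.",
    "Recommender systems use ratings and preferences to filter items for users.",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topic = lda.fit_transform(X)              # one row of topic weights per document

def top_terms(model, vocab, topic_id, n=20):
    """Highest-weight terms for one topic, mirroring the word lists above."""
    order = model.components_[topic_id].argsort()[::-1][:n]
    return [vocab[i] for i in order]

vocab = vectorizer.get_feature_names_out()
doc_id = 0                                    # the abstract above (assumed position)
for topic_id in doc_topic[doc_id].argsort()[::-1][:3]:
    print(f"#{topic_id} {doc_topic[doc_id][topic_id]:.3f}",
          " ".join(top_terms(lda, vocab, topic_id)))
```

Printing topics in descending weight order mirrors the listing above, where topic #7 (deception detection and credibility assessment) carries the largest share of this abstract.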