Author List: Poston, Robin S.; Speier, Cheri
MIS Quarterly, 2005, Volume 29, Issue 2, Pages 221-244.
Knowledge management systems (KMSs) facilitate the efficient and effective sharing of a firm's intellectual resources. However, sifting through the myriad of content available through KMSs can be challenging, and knowledge workers may be overwhelmed when trying to find the content most relevant for completing a new task. To address this problem, KMS designers often include content rating schemes (i.e., users of the KMS submit ratings to indicate the quality of specific content used) and credibility indicators (indicators describing the validity of the content and/or the ratings) to improve users' search and evaluation of KMS content. This study examines how content ratings and credibility indicators affect KMS users' search and evaluation processes and decision performance (how well and how quickly users selected alternatives offered by the KMS). Four interrelated laboratory experiments provide evidence that ratings have a strong influence on KMS search and evaluation processes, which in turn affect decision performance. Finally, this study demonstrates that certain credibility indicators can moderate the relationship between rating validity and KMS content search and evaluation processes.
Keywords: content ratings; credibility indicators; decision making; knowledge management systems; knowledge usage
List of Topics

#53 0.216 knowledge application management domain processes kms systems study different use domains role comprehension effective types draw scope furthermore level levels
#189 0.157 recommendations recommender systems preferences recommendation rating ratings preference improve users frame contextual using frames sensemaking filtering manipulation specific collaborative items
#8 0.128 decision making decisions decision-making makers use quality improve performance managers process better results time managerial task significantly help indicate maker
#19 0.114 content providers sharing incentive delivery provider net incentives internet service neutrality broadband allow capacity congestion revenue cost efficient enhanced provides
#7 0.109 detection deception assessment credibility automated fraud fake cues detecting results screening study detect design indicators science important theory performance improved
#217 0.061 search information display engine results engines displays retrieval effectiveness relevant process ranking depth searching economics create functions incorporate low terms
#173 0.056 effect impact affect results positive effects direct findings influence important positively model data suggest test factors negative affects significant relationship
#157 0.052 evaluation effectiveness assessment evaluating paper objectives terms process assessing criteria evaluations methodology provides impact literature potential important evaluated identifying multiple