Author List: Marcolin, Barbara L.; Compeau, Deborah R.; Munro, Malcolm C.; Huff, Sid L.
Information Systems Research, 2000, Volume 11, Issue 1, Page 37.
Organizations today face great pressure to maximize the benefits of their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is therefore critical to maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and fit models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature on what user competence means and on the problems of assessing it. As an illustrative study, we then report the findings of an experiment involving 66 participants. The experiment empirically compared two measurement methods (paper-and-pencil test versus self-report questionnaire) across two types of software, or knowledge domains (word processing versus spreadsheet packages), and two conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows statistically significant differences for all three main effects: how user competence is measured, what is measured, and in what context it is measured all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement method, conceptualization, and knowledge domain produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity both in defining what type of competence is being assessed and in drawing conclusions about competence based on the types of measures used.
Because the results suggest that the definition and measurement of the user competence construct can change the ability score that is captured, existing information systems (IS) models of usage must incorporate the concept of an ability rating. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.
Keywords: Competence; Empirical; End-User Computing; Self-Efficacy; Software Skills; Theoretical Framework