Author List: Steelman, Zachary; Hammer, Bryan;
MIS Quarterly, 2014, Volume 38, Issue 2, Page 355-378.
Online crowdsourcing markets (OCMs) are becoming increasingly popular as a source for data collection. In this paper, we examine the consistency of survey results across student samples, consumer panels, and online crowdsourcing markets (specifically Amazon’s Mechanical Turk), both within the United States and outside it. We conduct two studies examining the technology acceptance model (TAM) and expectation–disconfirmation theory (EDT) to explore potential differences in demographics, psychometrics, structural model estimates, and measurement invariance. Our findings indicate that (1) U.S.-based OCM samples provide demographics much more similar to our student and consumer panel samples than the non-U.S.-based OCM samples do; (2) both U.S. and non-U.S. OCM samples provide initial psychometric properties (reliability, convergent validity, and divergent validity) that are similar to those of both student and consumer panels; (3) non-U.S. OCM samples generally differ in scale means from our student, consumer panel, and U.S. OCM samples; and (4) one of the non-U.S. OCM samples refuted the highly replicated and validated TAM in the relationship of perceived usefulness to behavioral intentions. Although our post hoc analyses isolated some cultural and demographic effects in the non-U.S. samples in Study 1, they did not explain the model differences found in Study 2. Specifically, the inclusion of non-U.S. OCM respondents led to statistically significant differences in parameter estimates, and hence to different statistical conclusions. Given these unexplained differences within the non-U.S. OCM samples, we caution that including non-U.S. OCM participants may lead to conclusions that differ from those of studies using only U.S. OCM participants. We are unable to conclude whether this is due to cultural differences, differences in the demographic profiles of non-U.S. OCM participants, or some unexplored factors within the models.
Therefore, until further research explores these differences in detail, we urge researchers utilizing OCMs with the intention of generalizing to U.S. populations to focus on U.S.-based participants and to exercise caution in using non-U.S. participants. We further recommend that researchers clearly describe their OCM usage and design procedures (e.g., demographics, participant filters). Overall, we find that U.S. OCM samples produced models leading to statistical conclusions similar to those from both U.S. students and U.S. consumer panels, at a considerably reduced cost.
Keywords: data collection; crowdsourcing; sampling; online research; crowdsourcing market
Algorithm:

List of Topics

#145 0.307 differences analysis different similar study findings based significant highly groups popular samples comparison similarities non-is variety reveals imitation versus suggests
#124 0.100 validity reliability measure constructs construct study research measures used scale development nomological scales instrument measurement researchers developed validation discriminant results
#190 0.088 new licensing license open comparison type affiliation perpetual prior address peer question greater compared explore competing crowdsourcing provide choice place
#5 0.081 consumer consumers model optimal welfare price market pricing equilibrium surplus different higher results strategy quality cost lower competition firm paper
#99 0.066 perceived usefulness acceptance use technology ease model usage tam study beliefs intention user intentions users behavioral perceptions determinants constructs studies
#6 0.055 data used develop multiple approaches collection based research classes aspect single literature profiles means crowd collected trend accuracy databases accurate
#136 0.054 expectations expectation music disconfirmation sales analysis vector experiences modeling response polynomial surface discuss panel new nonlinear period understand paper dissonance