Why Do Management Scholars Avoid Experiments? A Necessary Provocation
Abstract
Despite the consolidation of experimental designs as a central standard for causal inference in adjacent fields, experiments remain peripheral in large segments of management research. This article argues that such marginalization is not primarily technical, but epistemic and institutional. It reconstructs six recurrent objections — complexity, external validity, feasibility, theory reduction, non-manipulability, and ethical scope — that structure skepticism toward experimentation and shows how they normalize the substitution of statistical sophistication for design-based identification. The analysis suggests that resistance to experiments reflects entrenched evaluative norms about what counts as rigor and relevance, rather than demonstrated methodological inadequacy. To move beyond dichotomous debates, the article introduces a simple evaluative framework structured along two dimensions: causal ambition and organizational embeddedness. By conceptualizing experimentation as a continuum, the framework aligns the strength of causal claims with the inferential capacities of different designs, making trade-offs explicit rather than implicit. The central contribution is disciplinary rather than technical: repositioning experimentation as a reference point for transparent causal reasoning. The article concludes by calling for greater alignment between causal claims and research design, emphasizing inferential discipline as a condition for credible knowledge in management research.
This work is licensed under a Creative Commons Attribution 4.0 International License.