Problems and Countermeasures for Randomized Impact Evaluations: A Research Review

DOI: 10.12677/mm.2024.144076, PP. 627-638

Keywords: Randomized Impact Evaluations, Policy and Project Evaluation, Problems, Countermeasures


Abstract:

The "counterfactual" design of randomized impact evaluations yields the most credible causal inference, making them a standard tool in the public policy and program evaluation toolbox. In practice, however, conducting a randomized impact evaluation runs into numerous problems. Some of these pose relatively minor obstacles that bias the resulting estimates, while others cause the experiment to fail outright, leaving the evaluation unable to provide a valid test of the hypotheses of interest. A lack of deep familiarity with these problems, and with the full range of strategies for addressing them, hinders the use of randomized impact evaluations in real-world policy and program evaluation. Through a systematic review of the key literature, organized around "existing problems, their sources, and their potential consequences," this paper examines seven problems that arise in randomized impact evaluations: attrition, non-compliance, spillover effects, driving effects, ethical concerns, randomization bias, and limited external validity, and it systematically summarizes the countermeasures available for each. The study offers guidance to evaluators who use randomized controlled trials for policy evaluation and aims to advance the practice of randomized impact evaluation.
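
To make the "counterfactual" logic concrete, here is a minimal sketch in standard Neyman-Rubin potential-outcomes notation (the notation is ours, not taken from the paper) of why randomization identifies the average treatment effect (ATE). Let Y_i(1) and Y_i(0) be unit i's outcomes with and without the program, and let D_i denote randomly assigned treatment status:

% Randomization makes assignment independent of potential outcomes,
% D_i \perp (Y_i(1), Y_i(0)), so the observed difference in group
% means identifies the ATE:
\mathbb{E}[Y_i \mid D_i = 1] - \mathbb{E}[Y_i \mid D_i = 0]
    = \mathbb{E}[Y_i(1)] - \mathbb{E}[Y_i(0)]
    = \underbrace{\mathbb{E}\left[Y_i(1) - Y_i(0)\right]}_{\text{ATE}}

Each of the seven problems the paper reviews breaks this chain somewhere: attrition and non-compliance make the observed groups differ from the randomized ones, while spillover effects contaminate the control group's counterfactual outcome.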

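As a concrete illustration of one reviewed problem, the short self-contained simulation below (the parameters and variable names are illustrative assumptions of ours, not drawn from any study in the review) shows how one-sided non-compliance distorts a naive "as-treated" comparison, while the intention-to-treat (ITT) contrast stays unbiased but diluted, and the Wald/IV estimator recovers the effect for compliers:

# Illustrative simulation (not from the paper): one-sided non-compliance.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0  # assumed constant effect on compliers

assigned = rng.integers(0, 2, n)   # random assignment D_i
ability = rng.normal(0, 1, n)      # unobserved confounder
# Only assigned units can take up the program, and higher-ability
# units are more likely to comply (selective take-up).
takeup = assigned * (rng.random(n) < 0.3 + 0.4 * (ability > 0))

y = 1.0 + true_effect * takeup + ability + rng.normal(0, 1, n)

naive = y[takeup == 1].mean() - y[takeup == 0].mean()    # as-treated
itt = y[assigned == 1].mean() - y[assigned == 0].mean()  # by assignment
wald = itt / (takeup[assigned == 1].mean() - takeup[assigned == 0].mean())

print(f"true effect on compliers: {true_effect:.2f}")
print(f"naive as-treated:         {naive:.2f}")  # overstated: confounded by ability
print(f"intention-to-treat (ITT): {itt:.2f}")    # unbiased for ITT, diluted toward zero
print(f"Wald/IV (LATE):           {wald:.2f}")   # recovers ~the complier effect

The simulation reflects the standard advice in the experimental literature: analyze by assignment rather than by take-up, and rescale the ITT by the compliance rate when the complier-level effect is of interest.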
