Background
Assessing the quality of care provided by individual health practitioners is critical to identifying possible risks to the health of the public. However, existing assessment methods can be inaccurate, expensive, or infeasible in many developing-country settings, particularly in rural areas and especially for children. Following an assessment of the strengths and weaknesses of existing methods for provider assessment, we developed a synthesis method combining components of direct observation, clinical vignettes, and medical mannequins, which we have termed the “Observed Simulated Patient” (OSP) method. An OSP assessment involves a trained actor playing the role of a ‘mother’, a life-size doll representing a 5-year-old boy, and a trained observer. The provider being assessed is informed of the role-playing in advance and asked to conduct diagnosis and treatment as he or she normally would, verbally describing each examination.

Methodology/Principal Findings
We tested the validity of OSP by conducting parallel scoring of medical providers in Myanmar, assessing the quality of their diagnosis and treatment of pediatric malaria, first by direct observation of real patients and second by OSP. Data were collected from 20 private independent medical practitioners in Mon and Kayin States, Myanmar, between December 26, 2010, and January 12, 2011. All areas of assessment showed agreement between OSP and direct observation above 90%, except for history taking related to past experience with malaria medicines. In this area, providers did not question the OSP to the same degree that they questioned real patients (agreement 82.8%).

Conclusions/Significance
The OSP methodology may provide a valuable option for quality assessment of providers in places, or for health conditions, where other assessment tools are unworkable.
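The agreement figures reported above are percent agreement between paired checklist ratings of the same providers under the two methods. As a minimal illustrative sketch only, assuming binary checklist items scored 1 (performed) or 0 (not performed), the Python below computes percent agreement and, as a standard chance-corrected companion statistic that the abstract itself does not report, Cohen's kappa. All data and names here are hypothetical and do not reproduce the study's actual scoring instrument.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for paired
# binary assessments (direct observation vs. OSP). All data below are
# hypothetical; they do not reproduce the study's actual scores.

def percent_agreement(obs, osp):
    """Share of items on which the two methods gave the same rating."""
    matches = sum(a == b for a, b in zip(obs, osp))
    return matches / len(obs)

def cohens_kappa(obs, osp):
    """Chance-corrected agreement for two binary raters."""
    n = len(obs)
    po = percent_agreement(obs, osp)              # observed agreement
    p1 = sum(obs) / n                             # 'yes' rate, method 1
    p2 = sum(osp) / n                             # 'yes' rate, method 2
    pe = p1 * p2 + (1 - p1) * (1 - p2)            # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical checklist results for one assessment area (1 = item done).
direct = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
osp    = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]

print(f"agreement = {percent_agreement(direct, osp):.1%}")  # 90.0%
print(f"kappa     = {cohens_kappa(direct, osp):.2f}")       # 0.74
```

On these hypothetical ratings the two methods agree on 9 of 10 items (90.0% agreement, kappa ≈ 0.74); kappa is lower than raw agreement because it discounts the matches expected by chance alone.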