%0 Journal Article
%T Faculty Development - Is Some Better Than None?
%A Karl-André Lalonde
%A Kelsey Anne Crawford
%A Nancy Dudek
%A Timothy J Wood
%J MedEdPublish
%D 2019
%R 10.15694/mep.2019.000018.1
%X Introduction: Methods: In this three-phase study, two independent raters used the Completed Clinical Evaluation Report Rating (CCERR) to assess the quality of ITERs completed by all faculty in the Division of Orthopedic Surgery at the University of Ottawa. In phase one, ITERs from the previous nine months were evaluated. In phase two, the participants were aware that their ITERs were being evaluated, but they did not receive feedback. In phase three, participants received regular feedback on their performance in the form of their mean CCERR scores. Mean CCERR scores from the different phases of the study were compared. Results: CCERR scores were similar across all three phases (phase one: 17.56 ± 1.02, phase two: 17.65 ± 0.96, phase three: 17.54 ± 0.75; p = 0.98). Discussion and Conclusions: There was no evidence in our study that participants improved their ITER quality despite being aware that they were being evaluated and/or receiving feedback. Potentially, this was related to a lack of motivation. Alternatively, the intensity and/or frequency of the feedback may have been inadequate to create change. These results raise concerns that some faculty development may not necessarily be better than none. Medical education training programs, at both the undergraduate and postgraduate levels, need to assess the clinical performance of their trainees to ensure that they are competent to move to the next level of training or into independent practice. In-training evaluation (ITE) by physician preceptors is a common component of many training programs' assessment processes. This assessment is recorded on an In-Training Evaluation Report (ITER). ITERs are also referred to as clinical performance reports, performance assessment forms, clinical performance progress reports and end of clinical rotation reports. ITERs follow the typical format of many workplace-based assessment (WBA) tools in that they consist of a list of items on a checklist or rating scale and written comments. Unfortunately, ITERs are often poorly completed, particularly in the case of the poorly performing resident (Cohen et al., 1993; Speer, Soloman and Ainsworth, 1996; Hatala and Norman, 1999). There is also evidence that clinical supervisors lack knowledge regarding what to document on ITERs and that this is in part responsible for their failure to report unsatisfactory clinical performance (Dudek, Marks and Regehr, 2005). With the advent of competency-based medical education (CBME), there is a substantial emphasis on WBA. Although ITERs are less likely to be used in a CBME program of assessment given their
%K Assessment
%K Faculty Development
%K In-Training Assessment
%K Workplace Based Assessment
%K Feedback
%U https://www.mededpublish.org/manuscripts/2140