Inter-Rater Reliability of Grade Evaluation of Post-Clinical Clerkship (Post-CC) OSCE Based on Kappa Coefficient and Agreement Rates
Description
Type: TOHO University Scholarly Publication
Original Article
Introduction: The Post-Clinical Clerkship (Post-CC) Objective Structured Clinical Examination (OSCE) has been implemented as a unified examination in all Japanese medical schools since fiscal 2020. In this study, differences in the grade evaluations of the Post-CC OSCE made by faculty members in fiscal 2020 were investigated.

Methods: Grade evaluations of the Post-CC OSCE, taken by 119 students and assessed by 36 faculty members, were analyzed. Differences between the scores of the two evaluating faculty members within a station were quantified using the weighted kappa coefficient, Spearman's rank correlation coefficient, and simple percent agreement. The overall evaluation, which directly determines a student's pass/fail result, was compared across series using the Kruskal-Wallis test and the χ² test.

Results: The concordance rate of grade evaluations between the two evaluating faculty members was low, and in the "physical examination" station the two evaluators disagreed on the pass/fail judgment in 35% of cases. There were no significant differences among the overall evaluations of the series. However, when the evaluations were dichotomized at the pass line (≥4 and <4), the pass rate differed significantly across series, ranging from 36.8% to 90.0%.

Conclusions: Disagreement in grade evaluations directly linked to pass/fail was observed between evaluating faculty members as well as between series. In addition to reviewing the implementation and assessment methods of the examination, it is important to construct a system capable of reviewing evaluations after an examination.
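The inter-rater statistics named in the Methods can be illustrated with a short sketch. The code below is not taken from the paper: it assumes a hypothetical ordinal scale of 1-6 and linear disagreement weights, whereas the study's actual rating scale and weighting scheme may differ.

```python
import numpy as np

def weighted_kappa(r1, r2, categories, weights="linear"):
    """Weighted Cohen's kappa between two raters' ordinal scores.

    Disagreements are penalized in proportion to the distance
    between the categories chosen by the two raters.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed joint distribution of the two raters' scores
    O = np.zeros((k, k))
    for a, b in zip(r1, r2):
        O[idx[a], idx[b]] += 1
    O /= O.sum()
    # Expected joint distribution under independence (from marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Disagreement-weight matrix: 0 on the diagonal, growing with distance
    i, j = np.indices((k, k))
    if weights == "linear":
        W = np.abs(i - j) / (k - 1)
    else:  # quadratic weights
        W = ((i - j) / (k - 1)) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

def percent_agreement(r1, r2):
    """Simple percent agreement: fraction of exactly matching scores."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

# Hypothetical scores from two evaluators on a 1-6 scale (pass line at 4)
rater1 = [4, 5, 3, 4, 6, 2, 4, 5, 3, 4]
rater2 = [4, 4, 3, 5, 6, 3, 4, 5, 2, 3]
print("weighted kappa:", weighted_kappa(rater1, rater2, [1, 2, 3, 4, 5, 6]))
print("percent agreement:", percent_agreement(rater1, rater2))
```

A weighted kappa near 1 indicates strong agreement beyond chance; a value near 0 indicates agreement no better than chance. Note that percent agreement alone can look high even when chance-corrected agreement is poor, which is why the study reports both.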
Published in
Toho Journal of Medicine 8 (2), 61-70, 2022-06-01
The Medical Society of Toho University