A medical record peer-review system to evaluate residents’ clinical competence: Criterion validity analysis

Junichi Kameoka, Makoto Kikukawa, Daiki Kobayashi, Tomoya Okubo, Seiichi Ishii, Yutaka Kagaya

Research output: Contribution to journal › Article

Abstract

In contrast to input evaluation (education delivered at school) and output evaluation (students’ capability at graduation), methods for the outcome evaluation (performance after graduation) of medical education have not been sufficiently established. To establish a method to measure the quality of patient care and thereby conduct outcome evaluation, we have been developing a peer-review system for medical records. Here, we undertook a pilot study to evaluate the criterion validity of our system, using “evaluation by program directors (supervisors in the hospitals)” as the criterion standard. We selected 13 senior residents from three teaching hospitals. Five reviewers (general internists working in other hospitals) visited the hospitals independently and evaluated five patients’ records for each resident using a previously established evaluation sheet comprising 15 items. Independently, the program directors of the senior residents evaluated the residents’ clinical performance using an evaluation sheet comprising ten items. Pearson’s correlation analysis revealed statistically significant correlation coefficients for three pairs of assessments, including clinical reasoning (r = 0.5848, P = 0.0358). Bootstrap analysis revealed statistically significant correlation coefficients for an additional five pairs, including history taking (r = 0.509, 95% confidence interval: 0.034 to 0.847). In contrast, the correlation coefficients were low for some items: r = 0.132 (−0.393 to 0.639) for physical examination and r = 0.089 (−0.847 to 0.472) for attitude toward patients. To the best of our knowledge, this is the first study, albeit a pilot one, to investigate the criterion validity of medical record evaluations by comparing assessments of medical records with those made by program directors.
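The abstract names two statistical techniques: Pearson’s correlation analysis and bootstrap confidence intervals for the correlation coefficients. The authors’ code and data are not part of this record, so the Python sketch below is an illustration only of how such an analysis could be run for a single assessment item; the score arrays, the percentile form of the bootstrap interval, and the 10,000-resample count are assumptions for the example, not details from the study.

# Illustrative only: Pearson r with a bootstrap percentile CI, as described
# in the abstract. The score arrays below are hypothetical placeholders,
# not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-resident scores (n = 13 senior residents):
# mean medical-record rating vs. program-director rating for one item.
record_scores = np.array([3.2, 2.8, 4.1, 3.6, 2.9, 3.8, 4.4, 3.1, 3.5, 2.7, 4.0, 3.3, 3.9])
director_scores = np.array([3.0, 3.1, 4.3, 3.4, 2.6, 3.7, 4.2, 3.3, 3.2, 2.9, 4.1, 3.0, 3.8])

# Conventional Pearson analysis (r and two-sided P value).
r, p = stats.pearsonr(record_scores, director_scores)

# Nonparametric bootstrap: resample residents with replacement, recompute r,
# and take the 2.5th/97.5th percentiles as a 95% confidence interval.
n = len(record_scores)
boot_r = np.empty(10_000)
for i in range(boot_r.size):
    idx = rng.integers(0, n, size=n)
    boot_r[i] = stats.pearsonr(record_scores[idx], director_scores[idx])[0]
ci_low, ci_high = np.percentile(boot_r, [2.5, 97.5])

print(f"r = {r:.3f}, P = {p:.4f}, 95% bootstrap CI: {ci_low:.3f} to {ci_high:.3f}")

The percentile interval is used here only because the abstract does not state which bootstrap variant was applied; a bias-corrected interval would be an equally plausible choice.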

Original language: English
Pages (from-to): 253-260
Number of pages: 8
Journal: Tohoku Journal of Experimental Medicine
ISSN: 0040-8727
Publisher: Tohoku University Medical Press
Volume: 248
Issue number: 4
DOI: 10.1620/tjem.248.253
PubMed ID: 31434837
Publication status: Published - Aug 1, 2019

Fingerprint

  • Peer Review
  • Clinical Competence
  • Medical Records
  • Medical Education
  • Supervisory Personnel
  • Teaching
  • Quality of Health Care
  • Education
  • Program Evaluation
  • Students
  • Teaching Hospitals
  • Physical Examination
  • Patient Care
  • Confidence Intervals

All Science Journal Classification (ASJC) codes

  • Biochemistry, Genetics and Molecular Biology(all)

Cite this

Kameoka, J., Kikukawa, M., Kobayashi, D., Okubo, T., Ishii, S., & Kagaya, Y. (2019). A medical record peer-review system to evaluate residents’ clinical competence: Criterion validity analysis. Tohoku Journal of Experimental Medicine, 248(4), 253-260. https://doi.org/10.1620/tjem.248.253