Understanding Interobserver Agreement: The Kappa Statistic
360 May 2005 Family Medicine

In reading medical literature on diagnosis and interpretation of diagnostic tests, our attention is generally focused on items such as sensitivity, specificity, predictive values, and likelihood ratios. These items address the validity of the test. But if the people who actually interpret the test cannot agree on the interpretation, the test results will be of little use.

Let us suppose that you are preparing to give a lecture on community-acquired pneumonia. As you prepare for the lecture, you read an article titled "Diagnosing Pneumonia by History and Physical Examination," published in the Journal of the American Medical Association. You come across a table in the article that shows agreement on physical examination findings of the chest. You see that there was 79% agreement on the presence of wheezing with a kappa of ___ and 85% agreement on the presence of tactile fremitus with a kappa of ___. How do you interpret these levels of agreement taking into account the kappa statistic?
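The scenario above turns on the standard definition of Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the two observers' marginal totals. A minimal Python sketch can make the point concrete; the counts below are hypothetical, chosen only to show how a high raw agreement can coexist with a near-zero kappa when one finding is rare, and are not the data from the JAMA table:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table.

    table[i][j] = number of patients where observer A recorded
    category i and observer B recorded category j
    (0 = finding present, 1 = finding absent).
    """
    total = sum(sum(row) for row in table)
    # p_o: observed agreement = proportion on the diagonal
    p_o = sum(table[i][i] for i in range(2)) / total
    # p_e: chance agreement = sum over categories of
    # (A's marginal proportion) * (B's marginal proportion)
    p_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(2)
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for a rare finding: both observers agree on
# 85 of 100 patients, yet almost all agreement is on "absent".
table = [[1, 7],
         [8, 84]]
print(cohens_kappa(table))  # raw agreement 85%, kappa ~0.04
```

Because both observers call the finding absent in most patients, chance alone predicts about 84% agreement, so the observed 85% adds almost nothing beyond chance and kappa is close to zero.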