
Publication Abstract

Authors: Barlow W

Title: Measurement of interrater agreement with adjustment for covariates.

Journal: Biometrics 52(2):695-702

Date: 1996 Jun

Abstract: The kappa coefficient measures chance-corrected agreement between two observers in the dichotomous classification of subjects. The marginal probability of classification by each rater may depend on one or more confounding variables, however. Failure to account for these confounders may lead to inflated estimates of agreement. A multinomial model is used that assumes both raters have the same marginal probability of classification, but this probability may depend on one or more covariates. The model may be fit using software for conditional logistic regression. Additionally, likelihood-based confidence intervals for the parameter representing agreement may be computed. A simple example is discussed to illustrate model-fitting and application of the technique.
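The unadjusted kappa coefficient that the abstract takes as its starting point can be sketched as follows. This is a minimal illustration of chance-corrected agreement for two raters and a dichotomous classification, not the paper's covariate-adjusted multinomial model; the function name and the cell counts are hypothetical.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = both raters classify positive, b = rater 1 only,
    c = rater 2 only, d = both raters classify negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                      # observed proportion of agreement
    p1 = (a + b) / n                         # rater 1 marginal probability of a positive
    p2 = (a + c) / n                         # rater 2 marginal probability of a positive
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)    # agreement expected by chance alone
    return (p_obs - p_exp) / (1 - p_exp)     # chance-corrected agreement

# Hypothetical counts: 40 jointly positive, 45 jointly negative, 15 discordant.
kappa = cohens_kappa(40, 10, 5, 45)
```

The paper's point is that when the marginal probabilities p1 and p2 depend on covariates, this unadjusted estimate can overstate agreement; the covariate-adjusted model it proposes is instead fit with conditional logistic regression software.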

Last Modified: 03 Sep 2013