Disparity in coding concordance: do physicians and coders agree?

J Health Care Finance. 2003 Summer;29(4):43-53.

Abstract

Increasing demand for large-scale comparative analysis of health care costs has led to a parallel demand for consistently classified data: evidence-based medicine requires evidence that can be trusted. This study sought to assess medical record managers' observed levels of agreement with physician code selections when classifying patient data. Using a non-sampled research design combining mailed and telephone surveys, we drew on a nationwide cross-section of more than 16,000 accredited US medical record managers. As the main outcome measure, we evaluated reported levels of agreement between physician and information manager code selections made when classifying patient data. Results indicate that about 19 percent of respondents reported coder-physician classification disagreement on more than 5 percent of all patient encounters; in some cases, disagreement occurred in 20 percent or more of code selections. This phenomenon varies significantly across key demographic and market indicators. With the growing practice of measuring coded-data quality as an outcome of health care financial performance, and with the adoption of electronic classification and patient record systems, the accuracy of coded data is likely to remain uncertain in the absence of more consistent classification and coding practices.
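The study measured agreement through managers' self-reports rather than by re-coding charts, but the underlying outcome, the rate at which a physician's code selection matches the coder's for the same encounter, can be made concrete with a short sketch. The example below is hypothetical (the encounter data and ICD-9-CM values are illustrative, not drawn from the study) and computes simple percent agreement alongside Cohen's kappa, a standard chance-corrected agreement statistic for two raters.

    from collections import Counter

    def percent_agreement(physician_codes, coder_codes):
        """Fraction of encounters where physician and coder chose the same code."""
        matches = sum(p == c for p, c in zip(physician_codes, coder_codes))
        return matches / len(physician_codes)

    def cohens_kappa(physician_codes, coder_codes):
        """Chance-corrected agreement between two raters over the same encounters."""
        n = len(physician_codes)
        p_o = percent_agreement(physician_codes, coder_codes)
        phys_marginals = Counter(physician_codes)
        coder_marginals = Counter(coder_codes)
        # Expected agreement if both raters assigned codes independently,
        # each according to their own marginal code frequencies.
        p_e = sum(
            (phys_marginals[code] / n) * (coder_marginals[code] / n)
            for code in set(physician_codes) | set(coder_codes)
        )
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical ICD-9-CM selections for ten patient encounters.
    physician = ["250.00", "401.9", "486", "250.00", "414.01",
                 "401.9", "486", "280.9", "250.00", "401.9"]
    coder     = ["250.00", "401.9", "486", "250.02", "414.01",
                 "401.9", "485", "280.9", "250.00", "401.1"]

    print(f"Percent agreement: {percent_agreement(physician, coder):.0%}")  # 70%
    print(f"Cohen's kappa:     {cohens_kappa(physician, coder):.2f}")

In this toy data, three of ten encounters disagree, well above the 5 percent threshold the survey asked about; kappa discounts the portion of that agreement expected by chance alone.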

Publication types

  • Comparative Study

MeSH terms

  • Data Collection
  • Financial Audit
  • Forms and Records Control / classification
  • Forms and Records Control / standards*
  • Humans
  • Managed Care Programs
  • Management Audit
  • Medical Record Administrators / standards*
  • Medical Records / classification*
  • Medical Records / standards
  • Physicians / standards*
  • Professional Competence
  • Quality Control*
  • United States