Editorial

Author

1. Thai POCT Forum Coordinator, Bangkok, Thailand
2. Editorial Board Member, Iranian Journal of Pathology

Introduction

The use of electronic databases in medicine is increasing. Many clinical settings implement electronic database systems to support routine clinical practice, and the recorded data can also be useful for further research. In pathology medical recording, an electronic database system can be particularly helpful.

As previously noted, pathology medical records can be a useful resource for research. An important concern, however, is the validity of the record system. Discrepancies between the actual results in the written (manual) medical record and the electronic record can be expected. These may arise from miscoding or incomplete coding, which can lead to false study conclusions if the data are used for research (1). Validation is therefore a requirement. Measuring the agreement between the two record systems (manual versus electronic) is helpful and must be done (2). This requires a systematic technique, and the basic unit of such a comparison is sketched below.
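As a minimal sketch, any agreement check rests on pairing each visit's manually recorded diagnosis with its electronic counterpart. The structure and field names below are hypothetical illustrations, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class PairedRecord:
    """One patient visit, with the diagnosis from each record system.

    The field names are illustrative; any keying that uniquely
    identifies the same visit in both systems would serve.
    """
    visit_id: str
    manual_dx: str      # diagnosis in the written (gold standard) record
    electronic_dx: str  # diagnosis coded in the electronic database

    @property
    def concordant(self) -> bool:
        # The pair agrees when both systems record the same diagnosis.
        return self.manual_dx == self.electronic_dx
```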


Techniques for assessment of agreement

A simple technique is random testing of agreement. The first step is the recruitment of cases for assessment: a randomly selected pathological condition can serve as a model, and a list of all cases with that diagnosis in the electronic database is prepared. To test validity, the accuracy of the electronic record is assessed by cross-checking the diagnosis in the electronic medical record against the diagnosis made at the same visit for the same patient in the manual record. The written record made by the practitioner is accepted as the gold standard, since it is the first documentation of each test. To check agreement, both sets of data are reviewed by a first observer, with second and third looks by second and third observers. The agreement between the two datasets can then be tested using Cohen's kappa (3). The kappa value indicates the degree of agreement (0 – 0.20 poor, 0.21 – 0.40 fair, 0.41 – 0.60 moderate, 0.61 – 0.80 substantial, and 0.81 – 1.00 almost perfect) (3). Many statistical packages, such as Stata 10.0, can compute Cohen's kappa; a self-contained calculation is sketched below.
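For readers without Stata, the following is a minimal Python sketch of the same calculation, assuming the paired diagnoses have already been extracted into two parallel lists; the function names are hypothetical:

```python
from collections import Counter

def cohen_kappa(manual, electronic):
    """Cohen's kappa for two parallel lists of diagnosis codes."""
    if len(manual) != len(electronic) or not manual:
        raise ValueError("need two non-empty lists of equal length")
    n = len(manual)
    # Observed agreement: fraction of visits where the two records match.
    p_o = sum(m == e for m, e in zip(manual, electronic)) / n
    # Chance agreement, from each system's marginal code frequencies.
    freq_m, freq_e = Counter(manual), Counter(electronic)
    p_e = sum(freq_m[code] * freq_e[code] for code in freq_m) / (n * n)
    if p_e == 1.0:  # degenerate case: a single code on both sides
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def agreement_level(kappa):
    """Map a kappa value onto the levels cited in the text (3)."""
    for upper, label in [(0.20, "poor"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial"),
                         (1.00, "almost perfect")]:
        if kappa <= upper:
            return label
```

The same value can also be obtained from standard routines, for example cohen_kappa_score in Python's scikit-learn, or from Stata as noted above.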


How can the results of the assessment be useful?

One might ask why this complex process is necessary. The simple answer is to verify the validity of the electronic record system. Incorrect electronic records can be expected, and human error is very common in the pathology laboratory (4). The assessment helps guarantee the laboratory's standard and confirms that the data are fit for further informatics research. In a previous study in an ISO-certified laboratory, a high prevalence of errors was observed, confirming the need for a rechecking system (4). Although new informatics tools can be implemented for the management of laboratory data, their use remains limited in the microscopy laboratory, where the primary diagnosis must be judged and confirmed by practitioners. The manual record is therefore still required before the data are entered into the medical database. In a study of errors in a laboratory information system, incorrect recording into the electronic database was identified as an important error (5).

In addition, the agreement data can be useful to the pathology laboratory director in planning quality management. Repeatedly informing practitioners about potential miscoding and how to decrease it can be an effective tool for reducing the problem.

Focusing on the question "What is the acceptable level of agreement?", the theoretical answer is perfect agreement, or zero error. Even a single discordance signals a problem, and one must then assess for possible hidden problems in that period. In some situations, such as medical service claims, this can be serious (6). A complete rechecking of all records from that period is needed. Furthermore, if an error is found on assessment, the electronic database should be corrected. This can be done by the informatics technician who has full responsibility for the laboratory database system; a simple way to flag the discordant entries for this workflow is sketched below.
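As a final minimal sketch, and assuming the same hypothetical paired layout as above, discordant entries can be listed automatically so the technician knows exactly which electronic records to recheck and correct:

```python
def flag_discordant(records):
    """Return the visits whose manual and electronic diagnoses differ.

    `records` is an iterable of (visit_id, manual_dx, electronic_dx)
    tuples, the hypothetical paired layout used above. Each flagged
    visit should trigger a recheck of its period and a correction of
    the electronic entry by the responsible database technician.
    """
    return [(visit, m, e) for visit, m, e in records if m != e]

# Example: the one discordant visit is flagged for correction.
flag_discordant([("V001", "C50.9", "C50.9"), ("V002", "C61", "C34.1")])
# -> [("V002", "C61", "C34.1")]
```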

  1. Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol 2005 Apr;58(4):323-37.
  2. Motheral B, Brooks J, Clark MA, Crown WH, Davey P, Hutchins D, et al. A checklist for retrospective database studies--report of the ISPOR Task Force on Retrospective Databases. Value Health 2003 Mar;6(2):90-7.
  3. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977 Mar;33(1):159-74.
  4. Wiwanitkit V. Types and frequency of preanalytical mistakes in the first Thai ISO 9002:1994 certified clinical laboratory, a 6-month monitoring. BMC Clin Pathol 2001;1(1):5.
  5. Wiwanitkit V. Recorded Problem for Trial of the New Computational Laboratory Information Management System, Experience in King Chulalongkorn Memorial Hospital. J Allied Health Sci 2000 Apr;(1):55-8.
  6. Wilchesky M, Tamblyn RM, Huang A. Validation of diagnostic codes within medical services claims. J Clin Epidemiol 2004 Feb;57(2):131-41.