Sensitivity, Specificity, and Agreement

We have seen that the information produced for a COVID-19 rapid test uses the terms "relative" sensitivity and "relative" specificity, calculated against another test rather than against a reference standard. The term "relative" is misleading: it suggests that these "relative" ratios could be combined with the sensitivity and specificity of the comparator test to calculate the sensitivity and specificity of the new test. That is simply not possible. Nor do these statistics support the conclusion that one test is better than the other. Recently, a British national newspaper published an article on a PCR test developed by Public Health England, reporting that it disagreed with a new commercial test on 35 out of 1,144 samples (3%). For many journalists, this was proof that the PHE test was inaccurate. In fact, there is no way to know which test is right and which is wrong in any of these 35 discrepancies, because we simply do not know the true state of the subjects in such comparison studies. Only further investigation of the discordant samples could identify the reasons for the disagreement.
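
To make the arithmetic concrete, the sketch below tabulates one hypothetical split of those 35 discordant results; only the totals of 1,144 samples and 35 disagreements come from the report, and the individual cell counts are assumptions for illustration. It computes the overall percent agreement, and nothing in the table says which test is right.

    # Hypothetical 2x2 agreement table for two tests run on the same
    # 1,144 samples, 35 of which gave discordant results (3%).
    # The split of the 35 discordant results between the two tests is an
    # assumption for illustration only.
    both_positive = 120        # both tests positive (assumed count)
    both_negative = 989        # both tests negative (assumed count)
    phe_pos_only = 20          # PHE test positive, commercial test negative (assumed)
    commercial_pos_only = 15   # commercial test positive, PHE test negative (assumed)

    total = both_positive + both_negative + phe_pos_only + commercial_pos_only
    discordant = phe_pos_only + commercial_pos_only
    agreement = (both_positive + both_negative) / total

    print(f"Samples: {total}, discordant: {discordant} ({discordant / total:.1%})")
    print(f"Overall agreement: {agreement:.1%}")
    # The table records how often the tests agree, but it cannot say which
    # test is correct in any of the 35 discordant samples.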

Uncertainty in patient classification can be measured in different ways, most often using inter-observer agreement statistics such as Cohen's kappa or the correlation terms of a multitrait matrix. These statistics assess the extent of agreement in the classification of the same patients or samples by different tests or raters, relative to the agreement that would be expected by chance. Cohen's kappa runs from 0 to 1: a value of 1 indicates perfect agreement, a value of 0 indicates agreement no better than chance, and values below 0.65 are generally interpreted as indicating a high degree of variability in classifying the same patients or samples. Kappa values are frequently used to describe inter-rater reliability (the same patients classified by different physicians) and intra-rater reliability (the same patient classified by the same physician on different days). Kappa values can also be used to estimate the variability of measurements made, for example, at home. Variability in patient classification can also be expressed directly as a probability, as in a standard Bayesian analysis. Whichever measure of classification variability is used, there is a direct correspondence between the variability measured for a test or comparator, the uncertainty implied by that measure, and the misclassifications that result from that uncertainty.
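
As a concrete illustration of chance-corrected agreement, Cohen's kappa for a two-by-two agreement table is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the marginal totals. The sketch below applies this formula to the same hypothetical counts used above.

    # Cohen's kappa for a 2x2 agreement table:
    #   kappa = (p_o - p_e) / (1 - p_e)
    # where p_o is the observed agreement and p_e is the agreement expected
    # by chance from the marginal totals. Counts are the same hypothetical
    # values used in the earlier sketch.
    a, d = 120, 989   # both positive, both negative (assumed counts)
    b, c = 20, 15     # test 1 positive only, test 2 positive only (assumed counts)
    n = a + b + c + d

    p_o = (a + d) / n                                # observed agreement
    p_chance_pos = ((a + b) / n) * ((a + c) / n)     # chance agreement on "positive"
    p_chance_neg = ((c + d) / n) * ((b + d) / n)     # chance agreement on "negative"
    p_e = p_chance_pos + p_chance_neg

    kappa = (p_o - p_e) / (1 - p_e)
    print(f"Observed agreement p_o = {p_o:.3f}")
    print(f"Chance agreement   p_e = {p_e:.3f}")
    print(f"Cohen's kappa          = {kappa:.3f}")

With these assumed counts the kappa is high, consistent with the small proportion of discordant results, but as noted above, high agreement still says nothing about which test is correct in the cases where they disagree.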