Why Cohen's Kappa should be avoided as performance measure in classification

Cited: 177
Authors
Delgado, Rosario [1 ]
Tibau, Xavier-Andoni [2 ]
Affiliations
[1] Univ Autonoma Barcelona, Dept Math, Campus UAB, Cerdanyola Del Valles, Spain
[2] Univ Autonoma Barcelona, Adv Stochast Modelling Res Grp, Campus UAB, Cerdanyola Del Valles, Spain
Source
PLOS ONE | 2019, Vol. 14, Issue 09
Keywords
INTEROBSERVER AGREEMENT; RELIABILITY; ASSOCIATION; MODELS; COEFFICIENT; PREVALENCE;
DOI
10.1371/journal.pone.0222916
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07 ; 0710 ; 09 ;
Abstract
We show that Cohen's Kappa and the Matthews Correlation Coefficient (MCC), two widespread and well-established measures of performance in multi-class classification, are correlated in most situations but can differ in others. Although the two coincide in the symmetric case, we consider several unbalanced situations in which Kappa exhibits undesired behaviour, i.e. a worse classifier obtains a higher Kappa score, departing qualitatively from MCC. The debate about the incoherent behaviour of Kappa has revolved around the convenience, or not, of using a relative metric, which makes its values difficult to interpret. We extend these concerns by showing that its pitfalls go even further. Through experimentation, we present a novel approach to this topic: a comprehensive study that identifies a scenario in which the contradictory behaviour between MCC and Kappa emerges. Specifically, we find that as the entropy of the off-diagonal elements of the confusion matrix associated with a classifier decreases to zero, the discrepancy between Kappa and MCC rises, pointing to anomalous behaviour of the former. We believe this finding rules out Kappa as a general performance measure for comparing classifiers.
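As a quick illustration of the two quantities the abstract compares, here is a minimal Python sketch that computes Cohen's Kappa and the multi-class MCC (Gorodkin's R_K) directly from a confusion matrix. The example matrices are our own and are not taken from the paper's experiments; they merely show the symmetric case where the two measures coincide and an unbalanced case where they diverge.

```python
import math

def cohens_kappa(C):
    """Cohen's Kappa from a square confusion matrix C (rows = true class, cols = predicted)."""
    k = len(C)
    n = sum(sum(row) for row in C)
    po = sum(C[i][i] for i in range(k)) / n                                # observed agreement
    pe = sum(sum(C[i]) * sum(r[i] for r in C) for i in range(k)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def mcc(C):
    """Multi-class Matthews Correlation Coefficient from a confusion matrix C."""
    k = len(C)
    n = sum(sum(row) for row in C)
    c = sum(C[i][i] for i in range(k))             # total correctly classified
    t = [sum(row) for row in C]                    # true-class marginals
    p = [sum(r[j] for r in C) for j in range(k)]   # predicted-class marginals
    num = c * n - sum(ti * pi for ti, pi in zip(t, p))
    den = (math.sqrt(n ** 2 - sum(pi ** 2 for pi in p))
           * math.sqrt(n ** 2 - sum(ti ** 2 for ti in t)))
    return num / den if den else 0.0

# Symmetric matrix (equal row/column marginals): Kappa and MCC coincide.
C_sym = [[40, 5, 5],
         [5, 40, 5],
         [5, 5, 40]]

# Unbalanced binary matrix: same data, but Kappa and MCC diverge markedly.
C_unbal = [[90, 0],
           [9, 1]]

print(cohens_kappa(C_sym), mcc(C_sym))      # both ≈ 0.7
print(cohens_kappa(C_unbal), mcc(C_unbal))  # Kappa ≈ 0.167, MCC ≈ 0.302
```

Note that both formulas depend on the matrix only through its trace and marginals, which is why they agree whenever row and column totals match; the divergence the paper studies arises in unbalanced settings.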
Pages: 26
Related Papers
50 results
  • [31] A Formal Proof of a Paradox Associated with Cohen’s Kappa
    Matthijs J. Warrens
    Journal of Classification, 2010, 27 : 322 - 332
  • [32] Cohen's linearly weighted kappa is a weighted average
    Warrens, Matthijs J.
    Advances in Data Analysis and Classification, 2012, 6 (01): 67 - 79
  • [33] Why internal weights should be avoided (not only) in MR-Egger regression
    Hartwig, Fernando Pires
    Davies, Neil Martin
    INTERNATIONAL JOURNAL OF EPIDEMIOLOGY, 2016, 45 (05) : 1676 - 1678
  • [34] Why the "last drinking occasion' approach to measuring alcohol consumption should be avoided
    Osthus, Stale
    Brunborg, Geir Scott
    DRUG AND ALCOHOL REVIEW, 2015, 34 (05) : 549 - 558
  • [35] WHY MEASURE THE CEOS PERFORMANCE
    SCHNEIER, CE
    BEATTY, RW
    SHAW, DG
    BOTTOM LINE RESULTS FROM STRATEGIC HUMAN RESOURCE PLANNING, 1991, : 247 - 260
  • [36] WHY MEASURE LABOR PERFORMANCE
    MANN, L
    HYDROCARBON PROCESSING, 1970, 49 (01): 133 - &
  • [37] About the relationship between ROC curves and Cohen's kappa
    Ben-David, Arie
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2008, 21 (06) : 874 - 882
  • [38] Relationships of Cohen's Kappa, Sensitivity, and Specificity for Unbiased Annotations
    Wang, Juan
    Xia, Bin
    PROCEEDINGS OF 2019 4TH INTERNATIONAL CONFERENCE ON BIOMEDICAL SIGNAL AND IMAGE PROCESSING (ICBIP 2019), 2019, : 98 - 101
  • [39] Conditional inequalities between Cohen's kappa and weighted kappas
    Warrens, Matthijs J.
    STATISTICAL METHODOLOGY, 2013, 10 (01) : 14 - 22
  • [40] Why should we measure exhaled NO in asthma patients?
    Mahut, B.
    Delclaux, C.
    REVUE DES MALADIES RESPIRATOIRES, 2006, 23 (04) : S41 - S43