      The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default


          Abstract

          In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most approaches define fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion towards opportunities for substantive equality and away from strict egalitarianism by default. N.B. Shortened abstract; see the paper for the full abstract.
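          To make the proposed "minimum rate constraints" concrete, the sketch below shows what such a floor could look like in code. This is a minimal illustration, not the paper's implementation: it assumes a binary classifier, uses the per-group true positive rate as the performance metric, and picks an arbitrary floor of 0.8; the function names and example data are hypothetical.

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group true positive rate for binary labels and predictions."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        # Fraction of this group's actual positives that the model caught.
        rates[g] = y_pred[positives].mean() if positives.any() else float("nan")
    return rates

def meets_minimum_rate(y_true, y_pred, groups, floor=0.8):
    """Levelling-up check: every group must clear an absolute floor,
    not merely match the worst-off group's rate."""
    return all(r >= floor for r in group_rates(y_true, y_pred, groups).values())

# Hypothetical example: group "b" falls below the floor, so the constraint
# is violated and can only be satisfied by improving performance for "b",
# not by degrading performance for "a".
y_true = np.array([1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(group_rates(y_true, y_pred, groups))         # group "a": 1.0, group "b": ~0.33
print(meets_minimum_rate(y_true, y_pred, groups))  # False
```

          The design point is that the constraint is absolute rather than relative: a strict-egalitarian gap metric could be satisfied by lowering group "a" to 0.33 (levelling down), whereas a minimum rate constraint can only be satisfied by raising group "b" (levelling up).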

          Author and article information

          Journal
          arXiv, February 2023

          Article
          DOI: 10.48550/ARXIV.2302.02404
          License: arXiv.org perpetual, non-exclusive license

          History
          05 February 2023, 07 February 2023, 20 February 2023, 21 February 2023, 12 March 2023, 14 March 2023

          Subjects
          Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences
