Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz and Moritz Hardt, in “Delayed Impact of Fair Machine Learning”, rightly make the point that notions of fairness must be considered dynamically, in terms of their long-term effects on a population, and not just statically. They argue that seemingly attractive and fair decision rules can actually harm disadvantaged groups in the long run. They consider, in elegant mathematical detail, the case of loan decisions, arguing that rules such as “Accept the same proportion of applicants from each race” or “Have the same proportion of true positives from each race in granting loans, even if it means accepting more false positives” can all decrease the credit score of the affected population in the long run under certain combinations of parameters. Thus a dynamically oriented approach is called for in setting fairness rules, one that considers the long-term effects of various rules on the population through many iterations, rather than just deciding what seems fair at a static snapshot. The authors aren’t simply suggesting a laissez-faire approach with no concern for algorithmic fairness or social justice; rather, they suggest a more nuanced fairness approach with built-in consideration for long-term effects. The paper made quite a splash, winning the best paper award at the 35th International Conference on Machine Learning.
I don’t disagree with any of their analysis directly; however, I feel the authors may have fallen into the very trap they describe: thinking statically rather than dynamically. In particular, government policies such as fairness requirements for banks don’t just affect group credit scores, they also affect the practical meaning of those group credit scores. Under some regulatory regimes it is much worse for a group to have its average credit score fall than it is under others. Treating the effect of a mean credit score on a group as stable across different regimes will lead to errors.
Suppose legislation exists such that every bank is required to accept the same proportion of applicants of each race. I agree it is entirely possible that this will lower the mean credit scores of disadvantaged races. What the authors don’t discuss in the paper, however, is that as a consequence of this legislation, the mean credit score of the disadvantaged group matters a lot less. The effect of a mean credit score on the utility of a group is itself not invariant between different regulatory regimes. One of the main ways a lower mean credit score can affect a group is by reducing that group’s access to credit. If banks are required to extend credit to an equal proportion of applicants of each race, it matters far less whether this policy has reduced the mean credit score of one racial group, because under this regulatory regime the mean credit score doesn’t do much. Effectively, what matters in this regime is your rank relative to other applicants of your group.
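The point can be made concrete with a small simulation sketch (the numbers and the decision rule here are illustrative assumptions of mine, not from the paper): if a bank must approve the same fraction of applicants in every group, ranking within each group by score, then uniformly lowering one group’s scores changes the group mean but not who is approved.

```python
import random

def equal_proportion_accept(scores_by_group, p):
    """Approve the top fraction p of applicants within each group, ranked
    by score. Only within-group rank determines approval under this rule."""
    accepted = {}
    for group, scores in scores_by_group.items():
        k = round(p * len(scores))
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        accepted[group] = set(ranked[:k])
    return accepted

random.seed(0)
groups = {
    "A": [random.gauss(650, 50) for _ in range(1000)],
    "B": [random.gauss(600, 50) for _ in range(1000)],
}
before = equal_proportion_accept(groups, 0.3)

# Shift every score in group B down by 40 points: the group's mean falls,
# but within-group ranks are unchanged, so the same applicants are approved.
groups_shifted = {"A": groups["A"], "B": [s - 40 for s in groups["B"]]}
after = equal_proportion_accept(groups_shifted, 0.3)

print(before["B"] == after["B"])  # True: access to credit is unaffected
```

A fall in the group mean would of course still matter through channels other than loan approval, but within this stylised regime the approval decision itself is invariant to it.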
I’m not saying this is a vindication of the decision rule “Accept an equal proportion of applicants from all races”. There are other lines of objection to this rule: for example, it may increase the frequency of the trauma of bankruptcy. What I am saying is that so long as the rule is enforced consistently, its effect on mean group credit scores won’t matter that much, because the very existence of this policy regime makes mean group credit scores matter less.
Perhaps the difference between myself and the authors here is that the authors had in mind a single bank unilaterally setting fairness rules upon itself, in which case the effects of such a policy on the mean credit score of a group over time would be very important indeed, since other banks may not employ similar fairness policies. I, on the other hand, am primarily considering the impact of a government policy directing all banks to act in a particular way. Certainly a bank considering setting policies like this for the sake of fairness should carefully consider the consequences of its actions, but I find it unlikely that an individual bank, a profit-maximising entity, would set such strong fairness-minded policies as “Accept an equal proportion of applicants from each race”. If such policies are going to be made, they will likely be made by government under pressure from the affected groups. Thus I think it is prudent to consider such policies in the first instance in terms of government action, and it may be inadvisable to put much weight on the hypothetical case of a bank setting such stringent policies on itself.
More generally, the different effects of such a policy when adopted by a single bank versus an entire national economy are a great example of why one cannot derive the effects of a national policy through simple aggregation. A government policy that would fail if some individuals adopted it autonomously might have the opposite effect if imposed on everyone at once. Conversely, a government policy that would work well if some individuals adopted it might fail to have any effect, or backfire, if adopted as universal policy. Such composition effects belong to a broad family of related phenomena that are among the most fascinating in the social sciences.
(Dear reader, at the moment I’m seeking Patreon support to enable my writing habit: https://www.patreon.com/deponysum)