Why accountability is an increasingly elusive endeavour

Imagine — or, if you fly frequently and are particularly unlucky, remember — this scenario. Your flight is overbooked, and in order to free up space, passengers are being removed. Those with the least frequent flyer miles are ushered out first. You can complain, of course, but the person implementing the policy has not set it — and nor can they change it. Even if you happen to have the mobile phone number of the airline’s chief executive, it’s not her fault: it is company policy.

This is the thought experiment that Dan Davies uses in his entertaining new book, The Unaccountability Machine, to illustrate what he dubs “the accountability sink”: rule books, procedures and in some cases whole institutions in modern life that have essentially removed individual responsibility for decision-making. The flight attendant who escorts you from the plane isn’t responsible, nor is their manager. After all, you signed up for this possibility when you booked your ticket.

Although the “accountability sink” is a new coinage, other thinkers have described similar problems. It is why researchers in FAT ML (fairness, accountability and transparency in machine learning) insist on both “responsibility” (a clear idea of whom to approach when an algorithm spits out a decision you don’t like) and “explainability” (that person should be able to explain, in accessible language, why the decision was reached). Others worry that machine learning creates a “responsibility gap” in which human decision-makers no longer have to account for, or even understand, the decisions they are implementing.

There are worse things in life than a delayed journey. A graver example of an accountability sink might be the Windrush scandal, in which tens of thousands of Britons born in the West Indies suffered mistreatment, with many losing their jobs and homes, and some wrongly deported from the UK. Whose fault was it? The architects of the hostile environment policy, which moved the frontiers of the country’s border regulations to every workplace, landlord and public service? The bureaucrats who shredded landing cards that might have served as alternative forms of ID? The succession of governments that legislated to restrict the right of citizens of the British empire to move freely within it? The only person to resign over the injustice was Amber Rudd, whose main offence was being home secretary when the scandal hit the news rather than implementing any of the offending policies; after a short absence she returned to high office, running another large department. A similar accountability sink is now at work in the Windrush compensation scheme, which has been beset by delays.

But not all accountability sinks are bad. The hiring process that allows my chief executive friend to screen out the application from my incompetent nephew without offending me is also an accountability sink. The clear rules set out in rule books, procedures and algorithms can produce sharp unfairnesses. But they are, at least, measurable and detectable, unlike the snap judgments we all make every day. Sentencing guidelines for criminal cases are in part a way to remove personal prejudices and arbitrary unfairnesses from the justice process, though they can and do bring in arbitrary unfairnesses of their own.

Accountability sinks are a byproduct of living in a more complex society, and the rollout of machine learning and algorithmic decision-making in public policy is going to create many more of them. A judge, who in almost all societies has their autonomy heavily constrained by sentencing guidelines and mandatory limits, may still remain the ultimate decider even when assisted by algorithmic models. But who is to blame if or when those systems go wrong, or skew heavily in one direction or another?

One good example of the benefits and difficulties can be seen in the use of facial-recognition software. Although this technology now almost always performs better than human judgment alone, it is still more likely to misidentify people with darker skin tones and women. Who is to blame if this leads to a miscarriage of justice? And who should be responsible for ensuring that the software is better tomorrow than it is today?

But a regular human, unconstrained by a rule book, is going to make an awful lot of mistakes too. And I would much rather subject myself to the occasional unfairnesses of the average rule book — which in practice is all an algorithm is — than to unrestrained human decision-making. But for leaders, whether of companies or states, explaining what has gone wrong in machine-aided decision-making will mean being able to speak fluently about what accountability sinks mean, why we have them and whether we should get rid of them.
