In Britain, over 20 years ago, a large group of local sub-postmasters were prosecuted for fraud. Many were convicted, some were jailed, and most lost their jobs and incomes. Why? Because the Horizon computer system, used to manage their financial accounts, was thought incapable of error. Only later, after much effort by campaigners, did it become clear that the technology’s builder, Fujitsu, and its client, the Post Office, had deliberately concealed Horizon’s faults.

Every day another story emerges about how the increasing use of automated algorithmic decision-making can produce a variety of unintended consequences. The effects are felt in health care, employment, education, finance and insurance, crime prevention and surveillance – the list goes on.

In response to these concerns and the growth of AI, the European Union in 2022 agreed a Declaration on Digital Rights and Principles for the Digital Decade. It aims to uphold EU fundamental rights (including those in the EU Charter of Fundamental Rights). Tellingly, the Declaration contains a new commitment to a ‘right to a human decision’ over systems based on algorithms that ‘affect people’s safety and fundamental rights’.

A recent post by Yuval Shany on the AI Ethics at Oxford Blog noted that this emerging right to oversight by a human decision-maker is increasingly being proposed and accepted by countries in both the global north and south. He writes:

Given the relationship between the justifications invoked for the right to a human decision and the justifications attached to other human rights protecting core human values such as due process, political participation, equality and dignity, it appears that a new right to a human decision could fit well within the corpus of universal and inalienable human rights.

But declarations of principle are one thing; deriving workable solutions to complex problems is quite another. For the right to a human decision to make a difference, human decision-makers will need to be competent (and equipped with the right expertise and skills) to oversee the increasingly complex and opaque processes embedded in computer systems.

In fact, the EU’s General Data Protection Regulation (GDPR) already requires human decision-makers to undertake ‘meaningful human review’. According to the independent DPO (Data Protection Officer) Centre blog, they must:

understand how the AI system works; to recognise when the system is likely to produce misleading or incorrect recommendations; to understand the external or other factors that should influence decision making that an AI system would not consider; and how to document their decision making to be transparent and accountable.

However, in March 2023, Mozilla published a study showing that, in practice, companies and organizations rarely prioritize ‘meaningful transparency’ about AI.

Until real transparency is implemented, there will be more Horizon scandals. The message is clear: people must be able to exercise their communication rights to know how decisions that affect them are made, who (or, in the world of AI, ‘what’ algorithms) makes them, and on what criteria. This is a real, practical and urgent AI challenge that remains to be addressed.
