Algorithms, face recognition and rights. (And exams, too.)

The Court of Appeal’s decision to uphold an appeal against South Wales Police’s use of facial recognition software has all kinds of interesting facets. But the interplay between its findings on the equality implications of facial recognition, and the rights we all have under GDPR, may have significant repercussions. Including, possibly, for the A-level/GCSE fiasco.

Most nerd lawyers will, like me, have been fascinated by the Court of Appeal’s decision to uphold the appeal in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. The tl;dr version is that the Court said South Wales Police (“SWP”) had acted unlawfully in mining CCTV to scan the faces of thousands of attendees of large public events, compare them to a “watchlist” of persons of interest using a software tool called “AFR Locate”, and identify people for further police attention.

It’s worth noting that the Court did not find SWP to have acted wholly improperly. It’s clear from the narrative that SWP made at least some efforts to build safeguards into its procedures and its use of AFR Locate. Nor did the Court find that an activity like this was unlawful per se. However, the Court found that both in whom SWP chose to look for and in where it did so, its procedures and practice fell short of what was required to make them lawful. To that extent, Edward Bridges, the appellant, was right.

It goes without saying that for privacy activists and lawyers, this case will be pored over in graphic and lengthy detail by minds better than mine. But one aspect does rather fascinate me – and may, given the tension between commercial interests and human rights, prove a trigger for further investigation.

That aspect is Ground 5 of Mr Bridges’ appeal, in which the Court of Appeal found SWP to have breached the Public Sector Equality Duty (PSED). The PSED, for those who may not be intimately familiar with s149 of the Equality Act 2010 (EqA), requires all public authorities – and other bodies exercising public functions – to have due regard to the need to, among other things, eliminate the conduct the EqA prohibits, such as discrimination, and advance equality of opportunity between people with a protected characteristic (such as race or sex) and those without it. As the Court noted (at []), the duty is an ongoing one, requiring authorities actively, substantively, rigorously and with an open mind, to consider whether what they are doing satisfies the PSED. It’s a duty which applies not so much to outcomes, but to the processes by which those outcomes are achieved.

Bye-bye to black box algorithms?

In the context of the Bridges case, SWP had argued (and the Divisional Court had accepted) that there wasn’t evidence to support an allegation that the proprietary (and therefore undisclosed and uncheckable) algorithm at the heart of AFR Locate was trained on a biased dataset. (For the less nerdy: a commonly-identified concern with algorithms used in criminal justice and elsewhere is that the data used to train the algorithm – to evolve its decision-making to its final state – may have inbuilt bias. For instance, and extremely simplistically, if a facial recognition system is trained on a standard Silicon Valley working population, its training data is likely to contain far fewer Black people and quite possibly far fewer women – and the system will thus be far less accurate in distinguishing them.)
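
To make that concrete, here’s a minimal Python sketch – entirely hypothetical, with nothing to do with AFR Locate’s actual code or data – of the kind of per-group error-rate check a public authority might run if it could audit the tool: comparing false match rates across demographic groups in a labelled evaluation set.

```python
# Illustrative sketch only: the kind of per-group error-rate check a deployer
# might run on a face-matching tool, given an evaluation set labelled by
# demographic group. Names and data are hypothetical, not AFR Locate's API.
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: iterable of (group, tool_said_match, actually_a_match) tuples."""
    counts = defaultdict(lambda: {"false_matches": 0, "non_matches": 0})
    for group, tool_said_match, actually_a_match in results:
        if not actually_a_match:                 # person is NOT on the watchlist
            counts[group]["non_matches"] += 1
            if tool_said_match:                  # ...but the tool flagged them anyway
                counts[group]["false_matches"] += 1
    return {
        group: c["false_matches"] / c["non_matches"]
        for group, c in counts.items()
        if c["non_matches"]
    }

# Toy evaluation data: (demographic group, tool said "match", ground truth)
evaluation = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False), ("group_b", False, False),
]
print(false_match_rate_by_group(evaluation))
# A materially higher false match rate for one group is exactly the kind of
# disparity a PSED assessment would need to surface and address.
```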

The Court of Appeal found this argument wholly unconvincing. The lack of evidence that the algorithm WAS biased wasn’t enough. There was no sign that SWP had even considered the possibility, let alone taken it seriously.

Most interestingly, and potentially of most far-reaching effect, the Court said at [199] that while it may be understandable that the company behind AFR Locate had refused to divulge the details of its algorithm, it “does not enable a public authority to discharge its own, non-delegable, duty under section 149”.

So – unless this can be distinguished – could it be the case that a black-box algorithm, by definition, can’t satisfy the PSED? Or that even an open one can’t, unless the public authority can show it’s looked into, and satisfied itself about, the training data?

If so, this is pretty big news. No algorithms without access. Wow. I have to say the implications of this are sufficiently wide-ranging to make me think I must be misreading or overthinking this. If so, please tell me.

Algorithms and data protection

There’s another key aspect of the lawfulness of algorithm use which SWP, given the design of their system, managed to avoid – but which could play a much bigger role in the ongoing, and shameful, exam fiasco.

GDPR is not fond of purely algorithmic decisions – what it calls at Recital 71 and Article 22 “solely automated processing”. (I’m using algorithm here in its broadest sense, as an automated system of rules applied to a dataset.) This applies with particular force to “profiling”, which Article 4 defines as automated processing which “evaluate[s] certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

In fact, Article 22 prohibits any such decision-making on matters which either affect someone’s legal rights or otherwise “similarly significantly affect[s]” them – unless it is:

  • necessary for entering into or performing a contract between the data subject and the data controller;
  • authorised by EU or (in this case) UK law which incorporates safeguards to protect the data subject’s rights and freedoms; or
  • based on the data subject’s explicit consent.

Unlike a number of other GDPR provisions, Article 22 admits no exemptions.

Similarly, s14 of the 2018 Data Protection Act (“the DPA”) says such processing – even if authorised by law – must allow the data subject to ask for a decision to be made which is not “based solely on automated processing”. And that request must be honoured.

The key word here so far as Bridges is concerned is “solely”. The human agency at the end of SWP’s process, whether inadvertently or by design, takes it out of the realm of A22; so this didn’t form any part of the Court of Appeal’s reasoning, or of the grounds of appeal. Were there no human in the loop, this kind of processing might be in serious trouble: there’s no contract, certainly no freely-given consent (which can only be given if it’s possible to withdraw it), and I don’t know of any law which explicitly authorises it, let alone one which builds in the required safeguards. And using facial recognition to target individuals for police attention is a paradigm case of analysing someone’s “personal aspects, including… behaviour, location or movements”.
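
For the avoidance of doubt about the work “solely” is doing here, a toy sketch in Python – purely illustrative, and emphatically not SWP’s system or anyone else’s – of the difference between a pipeline whose output simply is the decision, and one where a human reviewer makes the final call:

```python
# Toy illustration of Article 22's "solely automated" distinction.
# Hypothetical code only - not SWP's system, and not legal advice in code form.

def automated_only_decision(match_score: float, threshold: float = 0.9) -> bool:
    """The system's output *is* the decision: squarely 'solely automated'."""
    return match_score >= threshold

def human_in_the_loop_decision(match_score: float, reviewer_confirms) -> bool:
    """The system merely flags a candidate; a person makes the actual call.

    It is that final human judgement which (arguably) takes the decision
    outside Article 22's 'solely automated' territory.
    """
    flagged = match_score >= 0.9
    return flagged and reviewer_confirms(match_score)

# The reviewer here is just a stand-in callable.
print(automated_only_decision(0.95))                      # True: the machine decides
print(human_in_the_loop_decision(0.95, lambda s: False))  # False: the human overrides
```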

So what about exams?

[UPDATE: Unsurprisingly, the JR letters before action are coming out. And one in particular raises points similar to these, alongside others dealing with ultra vires and irrationality. The letter, from Leigh Day, can be found at Foxglove Law’s page for the exam situation.]

But even if A22 wasn’t an issue in Bridges, I suspect that the rapidly-accelerating disaster – no, that implies there’s no agency involved; let’s call it “fiasco” – involving A-levels and no doubt GCSE results will be a different story.

I won’t go into detail of the situation, except to say that an algorithm which marks anyone down from a predicted B/C to a U (a mark which is traditionally believed to denote someone who either doesn’t turn up, or can barely craft a coherent and on-point sentence or two) is an algorithm which is not only grossly unjust, but – given 18 months of pre-lockdown in-school work, even if it isn’t “official” coursework – is likely provably so.

But let’s look at it first through the PSED lens. The Court of Appeal in Bridges says that public authorities using algorithms have a duty to work out whether those algorithms could inherently discriminate. I haven’t read Ofqual’s materials as closely as the lawyers crafting the upcoming JRs, but I’m not at all certain Ofqual can show it’s thought that through properly – particularly where its algorithm seems heavily to privilege small-group results (which are far more likely in private schools) and to disadvantage larger groups (comprehensives and academies in cities and large towns).
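
To illustrate the worry – and only to illustrate it: this is a deliberately crude toy, not Ofqual’s actual standardisation model, and the five-student cut-off and the “fit to the centre’s historical distribution” step are my assumptions – here is roughly the shape of that small-cohort effect in Python:

```python
# Deliberately crude toy of the small-cohort effect described above.
# NOT Ofqual's actual standardisation model: the cut-off and the
# "fit to historical distribution" step are assumptions for illustration.

SMALL_COHORT = 5  # hypothetical cut-off

def moderate(centre_assessed_grades, historical_distribution):
    """Return final grades for one subject at one (hypothetical) centre."""
    n = len(centre_assessed_grades)
    if n <= SMALL_COHORT:
        # Small cohorts - far more common in private schools - keep the
        # teacher-assessed grades untouched.
        return list(centre_assessed_grades)
    # Larger cohorts - typical of comprehensives and academies - are forced
    # onto the centre's historical grade distribution, whatever teachers
    # predicted for this year's individual students.
    final = []
    for grade, share in historical_distribution:  # e.g. [("A", 0.1), ("B", 0.3), ...]
        final.extend([grade] * round(share * n))
    lowest = historical_distribution[-1][0]
    final.extend([lowest] * (n - len(final)))     # pad so every student gets a grade
    return final[:n]

# Five predicted Bs at a tiny centre survive intact; thirty predicted Bs at a
# large centre get squeezed into whatever the centre achieved historically.
print(moderate(["B"] * 5, [("B", 1.0)]))
print(moderate(["B"] * 30, [("A", 0.1), ("B", 0.3), ("C", 0.4), ("U", 0.2)]))
```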

(I have to acknowledge I haven’t spent any time thinking about other EqA issues. Indirect discrimination is certainly conceivable. I’ll leave that reasoning to other minds.)

Now let’s switch to the GDPR issue. We know from A22 that decisions made solely by automated processing are unlawful unless one of the three conditions applies. I can’t see any legal basis for the processing specific enough to satisfy the A22 requirements – certainly none which sufficiently safeguarded the rights and freedoms of the data subjects – that is, the students at the heart of this injustice. Nor am I aware of any data protection impact assessment that’s been carried out – which, by the way, is another legal obligation under A35 where there’s a “high risk” to individuals – self-evidently the case for students here whose futures have been decided by the algorithm. And the fact that the government has thus far set its face against individual students being able to challenge their grades seems to fly in the face of DPA s14.

One final kicker here, by the way. Recital 71 of the GDPR forms the context in which A22 sits, discussing in further detail the kind of “measures” – that is, systems for processing data – with which A22 deals, and which are only permitted under narrow circumstances. It stresses that any automated measures have to “prevent… discriminatory effects”.

Its final words? “Such measure should not concern a child.”

Watch this space.

Lies and freedom. They don’t mix.

“All politicians lie,” so they say. No; all human beings lie. What matters is what lie, when – and what it does to your ability to choose.

I’m a sucker for a series.

By which I mean a sequence of books (for preference) or a good serialised TV show. Genre, of course – you can critique me all you like, but good fantasy/scifi/etc, written with love and care, can’t be beat.

Pratchett’s Discworld*. DS9 – particularly later seasons as the story gained pace. The Broken Earth. B5, of course, and Farscape. Aubrey/Maturin. Rivers of London. And Dresden.

A long-running tale is part of it, to be sure. But the key is writers and creators who let their characters grow and change over time, rather than remain stable as the world shifts around them. It’s a privilege to be part of that.

My problem, particularly with books where there’s been a long gap between instalments – and I recognise this may just be me – is a tendency to want to re-read the whole series before diving into a new one. Which, with the Dresden Files, is taking a while.

Sometimes, though, doing this unearths gems you may have missed the first time round. There’s a couple buried in Ghost Story which hit me squarely between the eyes – and made me think about what I respect, what I despise, and why I make the distinction.

Late in the book – and I won’t spoil it with too much context for the uninitiated – the main character, Harry Dresden, is talking to someone far mightier, but also far gentler, than he. That person’s mission in life is to preserve people’s right to choose, because good and evil mean nothing unless that fundamental human right is preserved. He notes that a particularly vicious misfortune which befell Harry was born of a particularly well-crafted and well-timed lie: convincing him that what was, wasn’t, and making him think he had no choice but to walk down a bad road.

And the character says this: “When a lie is believed, it compromises the freedom of your will.”

That sticks with me. We all lie. Yes, we whinge about politicians doing it – but we all do. Mostly for self-protection. But there are big lies and little lies. And the difference is found not in the extent of the untruth, but in the anticipated consequence.

So a lie designed and intended to sway the world, to destroy the chance to make an honest decision: that’s the lie that’s unforgivable.

Perhaps this is why our profession’s greatest sin is to mislead the court. Sure, represent your client. Highlight the truths that help. Play down those that don’t. Tell the story in the best way for your side – the most believable way. But to mislead the court – even by hiding a relevant authority that doesn’t help – is to rob the tribunal of its chance to make its mind up. It’s not persuasion. It’s a con.

It’s also why I reserve a special hatred for con artists. Sure, I can admire the artistry bit – sort of. But the most successful cons are the ones which turn their marks into their best salespeople. The ones whose self-esteem has been so warped by the lies that it can scarcely survive if the lies are challenged.

And that inevitably leads me back to politics. As I said, all politicians lie. They’re human. Sometimes to make life easier. Sometimes to protect secrets – whether for fair reasons or foul will depend on the circumstances. Sometimes to protect a confidence.

But outright lies, told to sway and shape opinion, when it’s clear on close inspection that the teller knows perfectly well what they’re doing? That’s treating people as pawns. Playthings.

As marks.

Some thinkers take this further. Harry Frankfurt’s famous essay (and later book), “On Bullshit”, made a distinction between lies on the one hand – where the liar at least placed some value on the truth, prizing it in the act of obscuring it – and bullshit, where the teller simply didn’t care what was true and what wasn’t, as long as it served their purpose. It’s a distinction that has often been criticised.

I’m not sure where I stand. I see the distinction, and we do seem to have been swimming nostril-deep in particularly noxious and damaging political bullshit in recent years. (Brexit, Johnson, Corbyn, Trump, so many others. Lord, the list goes on. And a special mention for Michael Gove, whose Ditchley speech was an example of extreme – and, I can only conclude, calculated – intellectual dishonesty.)

But I think I care less about the lie-vs-bullshit axis than I do about this question of choice. Whether in politics or people’s personal lives – think of abusers warping the world to rob their victims of a vision of anything different, for instance – robbing people of the freedom to choose feels like the big differentiator.

Dan Davies, author of a wonderful book called “Lying for Money”, put it particularly well in something he wrote getting on for a couple of decades ago, entitled “Avoiding projects pursued by morons 101”. Seriously, read it – it’s not long. But it boils down to three rules, all of which focus on lies and testing them:

  • Good ideas do not need lots of lies told about them in order to gain public acceptance. (If people won’t buy into them without being lied to, that tells you everything you need to know.)
  • Fibbers’ forecasts are worthless. (You can’t mark a liar to market. You can’t hope to fudge their numbers towards reality. If a liar says “this is what will happen”, the only safe thing is to assume the opposite.)
  • The vital importance of audit. (Any time someone won’t let their predictions or their advice get tested against reality, or moves the goalposts mid-game, run. Immediately.)

Put differently: if you catch someone deliberately lying to you so as to change your mind about something important, that’s it. They’re done. Stop listening to them. Now.

You can accept lies as a fair form of discourse. Or you can – while accepting that we’re human, and so we fail – focus on the right to choose with your eyes open.

You can’t have both. And anyone who favours option one? Don’t trust them. Ever. About anything.

* I’m gradually re-reading the whole Discworld saga. Taking it very, very slow. Essentially to leave till the last possible moment the time when I pick up The Shepherd’s Crown – because it will be the last new Pratchett I ever read. And that hurts.