2021ii22, Monday: tl;dr.

Lord Leggatt is a judicial hero. And, as we now know, he understands that some things really are too long; didn’t read. Also: creative conflict at its best.

Short thought: Smarter and better minds than mine have crawled all over the Uber judgment, handed down by the Supreme Court on Friday. It’s justifiably been the centre of attention in the employment law world: although obviously deriving from a specific set of facts, it nonetheless lays down a clear line as to whether companies can dictate, through contract terms, whether their staff are workers – to whom they owe at least some employment rights – or independent contractors, to whom they owe nothing but payment for services rendered.

The tl;dr version: they can’t. It’s a question of statutory interpretation, not pure contract law. And it’s the reality of the relationship, not the words on the page, which makes the difference.

Big news. Gig economy “employers” will have been poring over their business models and contracts over the weekend. Many, I anticipate, will find themselves in (for them) uncomfortable territory. I’m of the view (for what it’s worth) that the Supreme Court has gone the right way on this. That said, what will change straight away? Perhaps not much. Individual workers may need to bring claims themselves, given that the government seems notably uninterested in doing anything about it. (The post of Director of Labour Market Enforcement, occupied on an interim basis by Matthew Taylor, falls vacant at the end of this week. He offered to stay in the job for nothing. He was turned down. Apparently none of the candidates were suitable. So clearly this wasn’t a high priority.) And as we’ve seen, the backlog in the employment tribunals, largely thanks again to government policy, is vast.

Among the mass of commentary, Jason Braier (unsurprisingly – his #ukemplaw Twitter form is near-unrivalled) has one of the best explanations. Fifty-plus tweets, but worth following through all the way. Although Ian Browne, a pupil at our chambers, managed to sum the whole thing up beautifully in two paragraphs:

But it’s not Jason’s (or indeed Ian’s) splendid work that I want to point to. No; it’s the magisterial judgment itself – the work of Lord Leggatt, a relatively recent addition to the UKSC. Friends and colleagues who’ve appeared before him are fans; even those he’s monstered with his questions or found against. I haven’t risen to those heights. But he was already one of my judicial heroes thanks to his judgment in Gestmin v Credit Suisse, which I wrote about recently as the starting-point for the acceptance by the English courts (the Commercial Court, at least) that the fragility of memory was a critical consideration in how justice could be delivered.

Well, Lord Leggatt has done it again. We’ve all done that thing where you turn to the back of a judgment to find out the outcome, only to find there are 50 paragraphs of obiter addenda to wade back through. Not so here. In a glorious judicial tl;dr of his own, and perhaps in the knowledge that many reading a judgment with such significance for working people won’t be lawyers, Lord Leggatt gives the outcome upfront in paragraph 2, in just 39 words. Bless the man.


Someone is right on the internet: “Why can’t we just get along?” Because sometimes, just sometimes, we need to argue.

Argument is not, in itself, a bad thing. Debate and disagreement are like mistakes. Without them, you can’t learn, or grow, or find out you’re wrong. And if you can’t do those, there’s no hope for you – and no point in listening to you.

In this bit of writing, Ian Leslie calls on two examples. The first is an interview between noted right-wing poster-boy Jordan Peterson and Helen Lewis. Lewis is a great reporter, but this interview caused lots of people to accuse her of malice or unfairness, or (grow up, people) of some kind of “woke agenda”, in how she treated Peterson. I can’t see it. As Leslie suggests, she seems to be engaging in what is – to British eyes – a perfectly normal piece of interviewing: searching and probing, but by no means unfair or aggressive. To which Peterson seems to respond in a notably thin-skinned, take-it-personally manner. Odd, for someone whose shtick seems to be all about people toughening up, taking responsibility and stopping with the whining.

But it’s the second I really loved. You’ll have to scroll all the way down for it, but he describes a row (apparently well-known to Beatles fans, which I’m not particularly) between Paul McCartney and George Harrison during a rehearsal for a TV performance. Apparently it’s cited as an example of why the Beatles split, but Leslie instead sees it as an example of how conflict between collaborating artists can take their creativity to still loftier heights:

Maybe this won’t be interesting to anyone who isn’t a Beatles nerd but even if you’re not, isn’t it incredible to have a raw and unfiltered record of one of the all-time great creative collaborations, as it happens – tensions, irritations, disagreements and all? If it is a little boring, that’s interesting too – it shows how magic can grow out of a long series of banal interactions. Anyway – it’s during this extended argument that Paul coins a favourite quote of mine, applicable to any creative process: “It’s complicated now. If we can get it simpler, and then complicate it where it needs to be complicated.” Whether you’re stuck on a song, an essay or a coding project, this is great advice: strip it back to its simplest form and then let the complications force their way in. (A little later, Paul rephrases it: “Let’s get the confusion unconfused, and then confuse it.”)

“Get the confusion unconfused, and then confuse it.” Wonderful. My new theme song.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021i15, Friday: Thank God it wasn’t me.

In (virtual) court for a 10-day hearing at the moment. So again I’ll be brief. A wrenching judgment, and a lovely bit of writing about a friendly neighbourhood hero.

Short thought: Whenever I’m talking to law students, I always say: read the judgments. Not just the brief snippets with the authoritative bit you want to quote. No; read the whole thing when you can. Partly for the context, of course. (And because every advocate has, albeit hopefully only once, done that thing where you find a fabulous quote, but overlook the perfect way of distinguishing and thus destroying your point two paragraphs further down. Which, of course, your opponent finds and seizes upon to devastating effect.) But mostly because the best judgments are some of the most phenomenal legal writing you’ll ever be exposed to; an education in themselves.

Put differently – when I read a really good one, I find myself thinking: I want to write like that when I grow up.

But every so often comes a judgment… and you’re so, so glad you weren’t the one who had to write it. Guy’s & St Thomas’ v Pippa Knight [2021] EWHC 25 (Fam) is one such.

The story’s heart-wrenching. Pippa is five years old. She is on a ventilator. She suffered brain damage in 2017. Her father took his own life shortly afterwards, having already lost a child to meningitis. She can’t breathe on her own, is unconscious and has lost most function. The hospital went to court to ask whether it should withdraw life-sustaining care. 

I can’t do anything approaching justice to the care, consideration and professionalism of Poole J in reaching and writing this judgment. Katie Gollop QC has done a fine job of describing the key points. Read her Twitter stream. Read the judgment. It will break your heart. But maybe some things should.

There’ll be those who say Poole J was wrong. That care should not, or should never, be withdrawn. There’ll even, perhaps, be those who see him as a monster, or as having committed a grievous sin. (On which subject: I’m a person of faith – and I have zero sympathy with, and some anger for, those who use tragic cases for politico-religious ends. I don’t think anyone has here, thank God. But still.)

But I see none of that. I see a fine jurist, facing a heart-rending choice with no good or easy answers, doing his level best to do what the law – and, I think, morality – requires: to put the child first. While still respecting and highlighting the awe-inspiring love and dedication that her mother has shown her throughout her life.

I want to write like him when I grow up. Just not about that. Please God, not about that.


Someone is right on the internet: I’m a sucker for Spider-Man. I sometimes find superheroes somewhat annoying (although that doesn’t stop me watching Marvel movies, or the Arrowverse DC TV shows). But Spidey has always been special.

Like so many others, I watched Into the Spider-Verse with gratitude, wonder and delight. Not just because in Miles Morales there’s a whole new generation reflected in the best and most demotic hero ever. But simply because of the joy, craft, art and genius – and love! – that went into making it. It’s a genuine masterpiece. 

And I have to admit, Tom Holland does an excellent job in the new MCU ones.

But every so often I go back to the 2002 film that got Spidey onto the silver screen and kept him there. Sam Raimi’s Spider-Man creaks a bit at the edges, and the effects – well, you have to work a bit not to see the seams. But the film, and Tobey Maguire in it, get Peter Parker right. Like no other film truly has. (The sequel did too. More so, perhaps. Let’s agree not to talk about the third one, OK?)

The AV Club, home of some of the best culture and genre writing around (its TV reviews are to die for), in one of its long-running series (this one looks at the highest-grossing movie in the US for each year, starting in 1960), has made it to 2002. And its write-up on Spider-Man gets it just right.

Won’t say more. If the phrase “With great power comes great responsibility” means a thing to you, go and read it. You won’t regret it. And then, if you’re like me, you’ll want to push off and watch it. All over again.


(Don’t forget – if visiting a site doesn’t float your boat, you can get this stuff in your inbox. Subscribe at https://remoteaccessbar.substack.com/.)

Algorithms, face recognition and rights. (And exams, too.)

The Court of Appeal’s decision to uphold an appeal against South Wales Police’s use of facial recognition software has all kinds of interesting facets. But the interplay between its findings on the equality implications of facial recognition, and the rights we all have under GDPR, may have significant repercussions. Including, possibly, for the A-level/GCSE fiasco.

Most nerd lawyers will, like me, have been fascinated by the Court of Appeal’s decision to uphold the appeal in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. The tl;dr version is that the Court said South Wales Police (“SWP”) had acted unlawfully in mining CCTV to scan the faces of thousands of attendees of large public events, compare them to a “watchlist” of persons of interest using a software tool called “AFR Locate”, and identify people for further police attention.

It’s worth noting that the Court did not find SWP to have acted wholly improperly. It’s clear from the narrative that SWP made at least some efforts to build safeguards into its procedures and its use of AFR Locate. Nor did the Court find that an activity like this was unlawful per se. However, the Court found that both in whom SWP chose to look for, and in where it did so, its procedures and practice fell short of what would be required to make them lawful. To that extent, Edward Bridges, the appellant, was right.

It goes without saying that privacy activists and lawyers – minds better than mine – will pore over this case in forensic and lengthy detail. But one aspect does rather fascinate me – and may, given the tension between commercial interests and human rights, prove a trigger for further investigation.

That aspect is Ground 5 of Mr Bridges’ appeal, in which the Court of Appeal found SWP to have breached the Public Sector Equality Duty (PSED). The PSED, for those who may not be intimately familiar with s149 of the Equality Act 2010 (EqA), requires all public authorities – and other bodies exercising public functions – to have due regard to the need to, among other things, eliminate the conduct the EqA prohibits, such as discrimination, and advance equality of opportunity between people with a protected characteristic (such as race or sex) and those without it. As the Court noted (at []), the duty is an ongoing one, requiring authorities to consider – actively, substantively, rigorously and with an open mind – whether what they are doing satisfies the PSED. It’s a duty which applies not so much to outcomes as to the processes by which those outcomes are achieved.

Bye-bye to black-box algorithms?

In the context of the Bridges case, SWP had argued (and the Divisional Court had accepted) that there wasn’t evidence to support an allegation that the proprietary (and therefore undisclosed and uncheckable) algorithm at the heart of AFR Locate was trained on a biased dataset. (For the less nerdy: a commonly-identified concern with algorithms used in criminal justice and elsewhere is that the data used to train the algorithm – to evolve its decision-making to its final state – may have inbuilt bias. For instance, and extremely simplistically, if a facial recognition system is trained on a standard Silicon Valley working population, it’s likely to include far fewer Black people and quite possibly far fewer women. And thus be far less accurate in distinguishing them.)
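For the nerdier still, here’s a minimal, deliberately artificial sketch of that effect in Python – synthetic data and an off-the-shelf classifier, invented by me purely for illustration, with no connection to AFR Locate’s actual (and undisclosed) algorithm:

```python
# A toy demonstration, not AFR Locate: train a classifier on data that
# under-represents group B, then compare per-group accuracy on balanced tests.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, centre, threshold):
    """Toy two-feature 'faces'; the true match/no-match rule varies by group."""
    X = rng.normal(loc=centre, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > threshold).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- the skewed "working population".
Xa, ya = make_group(1900, centre=[0.0, 0.0], threshold=0.0)
Xb, yb = make_group(100, centre=[2.0, -1.0], threshold=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced test sets: group B's accuracy comes out markedly lower, because
# the model has mostly learned group A's decision boundary.
for name, centre, threshold in [("A", [0.0, 0.0], 0.0), ("B", [2.0, -1.0], 2.0)]:
    Xt, yt = make_group(1000, centre, threshold)
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
```

The numbers are meaningless in themselves; the point is structural. And it’s precisely the kind of check a public authority cannot perform on an algorithm it isn’t allowed to see.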

The Court of Appeal found SWP’s argument wholly unconvincing. The lack of evidence that the algorithm WAS biased wasn’t enough. There was no sign that SWP had even considered the possibility, let alone taken it seriously.

Most interestingly, and potentially of most far-reaching effect, the Court said at [199] that while it may be understandable that the company behind AFR Locate had refused to divulge the details of its algorithm, it “does not enable a public authority to discharge its own, non-delegable, duty under section 149”.

So – unless this can be distinguished – could it be the case that a black-box algorithm, by definition, can’t satisfy the PSED? Or that even an open one can’t, unless the public authority can show it’s looked into, and satisfied itself about, the training data?

If so, this is pretty big news. No algorithms without access. Wow. I have to say the implications of this are sufficiently wide-ranging to make me think I must be misreading or overthinking this. If so, please tell me.

Algorithms and data protection

There’s another key aspect of the lawfulness of algorithm use which SWP, given the design of their system, managed to avoid – but which could play a much bigger role in the ongoing, and shameful, exam fiasco.

GDPR is not fond of purely algorithmic decisions – what it calls at Recital 71 and Article 22 “solely automated processing”. (I’m using algorithm here in its broadest sense, as an automated system of rules applied to a dataset.) This applies with particular force to “profiling”, which Article 4 defines as automated processing which “evaluate[s] certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

In fact, Article 22 prohibits any such decision-making where the decision produces legal effects for someone or otherwise “similarly significantly affects” them – unless it is:

  • necessary for entering into or performing a contract between the data subject and the data controller;
  • authorised by EU or (in this case) UK law which incorporates safeguards to protect the data subject’s rights and freedoms; or
  • based on the data subject’s explicit consent.

Unlike a number of other GDPR provisions, this one admits no exemptions.
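For those who find the shape of a legal test easier to see in code, here’s a rough schematic of the A22 logic – my own simplification for illustration, not the GDPR’s text or any real compliance tool:

```python
# A rough schematic of the Article 22 test, simplified for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool                   # no meaningful human involvement
    legal_or_similar_effect: bool            # legal effects, or similarly significant
    necessary_for_contract: bool             # the A22(2)(a) gateway
    authorised_by_law_with_safeguards: bool  # the A22(2)(b) gateway
    explicit_consent: bool                   # the A22(2)(c) gateway

def permitted_under_article_22(d: Decision) -> bool:
    # A22 only bites on decisions that are both solely automated and significant.
    if not (d.solely_automated and d.legal_or_similar_effect):
        return True  # outside A22's scope -- e.g. a human in the loop
    # Otherwise one of the three gateways must apply.
    return (d.necessary_for_contract
            or d.authorised_by_law_with_safeguards
            or d.explicit_consent)

# On the analysis in this post, algorithmic exam grading looks like: solely
# automated, plainly significant, no gateway satisfied.
print(permitted_under_article_22(Decision(True, True, False, False, False)))  # False
```

That first check is the one that mattered in Bridges: a human in the loop takes the whole scheme outside A22, as the next paragraphs explain.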

Similarly, s14 of the Data Protection Act 2018 (“the DPA”) says such processing – even if authorised by law – must allow the data subject to ask for a decision to be made which is not “based solely on automated processing”. And that request must be honoured.

The key word here so far as Bridges is concerned is “solely”. The human agency at the end of SWP’s process, whether inadvertently or by design, takes this out of the realm of A22; so this didn’t form any part of the Court of Appeal’s reasoning, or of the grounds of appeal. Were there no human in the loop, this kind of processing might be in serious trouble, since there’s no contract, certainly no freely-given consent (which can only be given if it’s possible to withdraw it), and I don’t know of any law which explicitly authorises it, let alone one which builds in safeguards. And using facial recognition to target individuals for police attention is a paradigm case of analysing someone’s “personal aspects, including… behaviour, location or movements”.

So what about exams?

[UPDATE: Unsurprisingly, the JR letters before action are coming out. And one in particular raises points similar to these, alongside others dealing with ultra vires and irrationality. The letter, from Leigh Day, can be found at Foxglove Law’s page for the exam situation.]

But even if A22 wasn’t an issue in Bridges, I suspect that the rapidly accelerating disaster – no, that implies there’s no agency involved; let’s call it “fiasco” – involving A-levels and no doubt GCSE results will be a different story.

I won’t go into detail about the situation, except to say that an algorithm which marks anyone down from a predicted B/C to a U (a mark traditionally believed to denote someone who either doesn’t turn up, or can barely craft a coherent and on-point sentence or two) is an algorithm which is not only grossly unjust but – given 18 months of pre-lockdown in-school work, even if it isn’t “official” coursework – likely provably so.

But let’s look at it first through the PSED lens. The Court of Appeal in Bridges says that public authorities using algorithms have a duty to work out whether those algorithms could inherently discriminate. I haven’t read Ofqual’s materials as closely as the lawyers crafting the upcoming JRs, but I’m not at all certain Ofqual can show it’s thought that through properly – particularly where its algorithm seems heavily to privilege small-group results (which are far more likely in private schools) and to disadvantage larger groups (comprehensives and academies in cities and large towns).
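To make the small-group point concrete, here’s a deliberately crude, hypothetical caricature – a standardisation rule of the broadly reported shape, emphatically not Ofqual’s actual model (the cutoff, grades and distributions are all invented):

```python
# A hypothetical caricature, not Ofqual's model: small cohorts keep their
# teacher-assessed grades; large cohorts are ranked and forced onto the
# centre's historical grade distribution, whatever the teachers predicted.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def standardise(teacher_grades, historical_counts, small_cohort_cutoff=5):
    """historical_counts: past grades at this centre, scaled to cohort size."""
    if len(teacher_grades) <= small_cohort_cutoff:
        return list(teacher_grades)  # small class: predictions stand untouched
    ranked = sorted(teacher_grades, key=GRADES.index)  # best-predicted first
    imposed = [g for g in GRADES for _ in range(historical_counts.get(g, 0))]
    return imposed[: len(ranked)]  # i-th ranked student gets i-th historical grade

# A small class keeps its optimistic predictions untouched:
print(standardise(["A", "A", "B", "B"], {}))  # ['A', 'A', 'B', 'B']

# A large class predicted B/C is dragged onto its centre's history --
# including a U for whoever is ranked last:
history = {"A": 1, "B": 3, "C": 3, "D": 1, "E": 1, "U": 1}
print(standardise(["B"] * 6 + ["C"] * 4, history))
# -> ['A', 'B', 'B', 'B', 'C', 'C', 'C', 'D', 'E', 'U']
```

The real model was doubtless subtler than this, but the structural asymmetry – small cohorts trusted, large cohorts overruled by history – is the feature doing the discriminatory work.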

(I have to acknowledge I haven’t spent any time thinking about other EqA issues. Indirect discrimination is certainly conceivable. I’ll leave that reasoning to other minds.)

Now let’s switch to the GDPR issue. We know from A22 that decisions made solely by automated processing are unlawful unless one of the three conditions applies. I can’t see any legal basis for the processing specific enough to satisfy the A22 requirements – certainly none which sufficiently safeguards the rights and freedoms of the data subjects; that is, the students at the heart of this injustice. Nor am I aware of any data protection impact assessment that’s been carried out – which, by the way, is another legal obligation under A35 where there’s a “high risk” to individuals, self-evidently the case for students whose futures have been decided by the algorithm. And the fact that the government has thus far set its face against individual students being able to challenge their grades seems flatly inconsistent with DPA s14.

One final kicker here, by the way. Recital 71 of the GDPR forms the context in which A22 sits, discussing in further detail the kind of “measures” – that is, systems for processing data – with which A22 deals, and which are only permitted under narrow circumstances. It stresses that any automated measures have to “prevent… discriminatory effects”.

Its final words? “Such measure should not concern a child.”

Watch this space.