Algorithms, face recognition and rights. (And exams, too.)

The Court of Appeal’s decision to uphold an appeal against South Wales Police’s use of facial recognition software has all kinds of interesting facets. But the interplay between its findings on the equality implications of facial recognition, and the rights we all have under GDPR, may have significant repercussions. Including, possibly, for the A-level/GCSE fiasco.

Most nerd lawyers will, like me, have been fascinated by the Court of Appeal’s decision to uphold the appeal in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. The tl;dr version is that the Court said South Wales Police (“SWP”) had acted unlawfully in mining CCTV to scan the faces of thousands of attendees of large public events, compare them to a “watchlist” of persons of interest using a software tool called “AFR Locate”, and identify people for further police attention.

It’s worth noting that the Court did not find SWP to have acted wholly improperly. It’s clear from the narrative that it made at least some efforts to build safeguards into its procedures and its use of AFR Locate. Nor did the Court find that an activity like this was unlawful per se. However, the Court found that, both in whom SWP chose to look for and in where it did so, its procedures and practice fell short of what would be required to make them lawful. To that extent, Edward Bridges, the appellant, was right.

It goes without saying that for privacy activists and lawyers, this case will be pored over in graphic and lengthy detail by minds better than mine. But one aspect does rather fascinate me – and may, given the tension between commercial interests and human rights, prove a trigger for further investigation.

That aspect is Ground 5 of Mr Bridges’ appeal, in which the Court of Appeal found SWP to have breached the Public Sector Equality Duty (PSED). The PSED, for those who may not be intimately familiar with s149 of the Equality Act 2010 (EqA), requires all public authorities – and other bodies exercising public functions – to have due regard to the need to, among other things, eliminate the conduct the EqA prohibits, such as discrimination, and advance equality of opportunity between people with a protected characteristic (such as race or sex) and those without it. As the Court noted (at []), the duty is an ongoing one, requiring authorities actively, substantively, rigorously and with an open mind, to consider whether what they are doing satisfies the PSED. It’s a duty which applies not so much to outcomes, but to the processes by which those outcomes are achieved.

Bye-bye to black box algorithms?

In the context of the Bridges case, SWP had argued (and the Divisional Court had accepted) that there wasn’t evidence to support an allegation that the proprietary (and therefore undisclosed and uncheckable) algorithm at the heart of AFR Locate was trained on a biased dataset. (For the less nerdy: a commonly-identified concern with algorithms used in criminal justice and elsewhere is that the data used to train the algorithm – to help its decision-making evolve to its final state – may have inbuilt bias. For instance, and extremely simplistically, if a facial recognition system is trained on a standard Silicon Valley working population, the training data is likely to include far fewer Black people and quite possibly far fewer women – and the system will thus be far less accurate in distinguishing them.)
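(For the even nerdier, here’s a toy sketch of that effect in Python – emphatically not the AFR Locate algorithm, whose model and training data are undisclosed; the groups, numbers and features are all invented. Train a bog-standard classifier on data where one group is badly under-represented, and that group’s error rate comes out markedly worse.)

```python
# Toy illustration only: this is NOT AFR Locate (whose model and training data
# are undisclosed). A simple classifier is trained on data where "group B" is
# heavily under-represented, then tested on balanced sets for each group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Synthetic 'face embeddings' for one demographic group: match vs non-match."""
    non_matches = rng.normal(loc=shift, scale=1.0, size=(n_per_class, 16))
    matches = rng.normal(loc=shift + 0.8, scale=1.0, size=(n_per_class, 16))
    return np.vstack([non_matches, matches]), np.array([0] * n_per_class + [1] * n_per_class)

# Training data: 2,000 examples per class for group A, just 50 per class for group B,
# whose features sit in a somewhat different region of the space.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced test sets: the under-represented group gets noticeably worse accuracy.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```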

The Court of Appeal found this argument wholly unconvincing. The lack of evidence that the algorithm WAS biased wasn’t enough. There was no sign that SWP had even considered the possibility, let alone taken it seriously.

Most interestingly, and potentially of most far-reaching effect, the Court said at [199] that while it may be understandable that the company behind AFR Locate had refused to divulge the details of its algorithm, it “does not enable a public authority to discharge its own, non-delegable, duty under section 149”.

So – unless this can be distinguished – could it be the case that a black-box algorithm, by definition, can’t satisfy the PSED? Or that even an open one can’t, unless the public authority can show it’s looked into, and satisfied itself about, the training data?

If so, this is pretty big news. No algorithms without access. Wow. I have to say the implications of this are sufficiently wide-ranging to make me think I must be misreading or overthinking this. If so, please tell me.

Algorithms and data protection

There’s another key aspect of the lawfulness of algorithm use which SWP, given the design of their system, managed to avoid – but which could play a much bigger role in the ongoing, and shameful, exam fiasco.

GDPR is not fond of purely algorithmic decisions – what it calls at Recital 71 and Article 22 “solely automated processing”. (I’m using algorithm here in its broadest sense, as an automated system of rules applied to a dataset.) This applies with particular force to “profiling”, which Article 4 defines as automated processing which “evaluate[s] certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

In fact, Article 22 prohibits any such decision-making which either affects someone’s legal rights or otherwise “similarly significantly affects” them – unless it is:

  • necessary for entering into or performing a contract between the data subject and the data controller;
  • authorised by EU or (in this case) UK law which incorporates safeguards to protect the data subject’s rights and freedoms; or
  • based on the data subject’s explicit consent.

Unlike a number of other GDPR provisions, this one admits no exemptions.

Similarly, s14 of the 2018 Data Protection Act (“the DPA”) says such processing – even if authorised by law – must allow the data subject to ask for a decision to be made which is not “based solely on automated processing”. And that request must be honoured.

The key word here so far as Bridges is concerned is “solely”. The human agency at the end of SWP’s process, whether inadvertently or by design, takes this out of the realm of A22; so this didn’t form any part of the Court of Appeal’s reasoning, or of the grounds of appeal. Were there no human in the loop, this kind of processing might be in serious trouble, since there’s no contract, certainly no freely-given consent (which can only be given if it’s possible to withdraw it), and I don’t know of any law which explicitly authorises it, let alone one which builds in safeguards. And using facial recognition to target individuals for police attention is a paradigm case of analysing someone’s “personal aspects, including… behaviour, location or movements”.
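(For the nerdily inclined, here’s one way of laying that logic out in code – purely illustrative, obviously not legal advice, and the field names are mine rather than the GDPR’s. It mainly shows how much work the word “solely” is doing.)

```python
# Purely illustrative: a rough encoding of the Article 22 / DPA s14 logic sketched
# above, to show how much turns on the word "solely". Field names are mine, not GDPR's.
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool                  # no meaningful human involvement in the decision
    significant_effect: bool                # legal or "similarly significant" effect on the person
    necessary_for_contract: bool
    authorised_by_law_with_safeguards: bool
    explicit_consent: bool

def article_22_permits(d: Decision) -> bool:
    """Return True if the decision-making isn't caught at all, or falls within an exception."""
    if not (d.solely_automated and d.significant_effect):
        return True   # Article 22 isn't engaged (e.g. a human in the loop, as with SWP)
    return (d.necessary_for_contract
            or d.authorised_by_law_with_safeguards
            or d.explicit_consent)

# SWP's process as described above: significant effect, but a human reviews each match.
print(article_22_permits(Decision(False, True, False, False, False)))   # True - not engaged
# A hypothetical fully automated version, with none of the three exceptions available:
print(article_22_permits(Decision(True, True, False, False, False)))    # False
```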

So what about exams?

[UPDATE: Unsurprisingly, the JR letters before action are coming out. And one in particular raises points similar to these, alongside others dealing with ultra vires and irrationality. The letter, from Leigh Day, can be found at Foxglove Law’s page for the exam situation.]

But even if A22 wasn’t an issue in Bridges, I suspect that the rapidly-accelerating disaster – no, that implies there’s no agency involved; let’s call it “fiasco” – involving A-levels and no doubt GCSE results will be a different story.

I won’t go into detail of the situation, except to say that an algorithm which marks anyone down from a predicted B/C to a U (a mark which is traditionally believed to denote someone who either doesn’t turn up, or can barely craft a coherent and on-point sentence or two) is an algorithm which is not only grossly unjust, but – given 18 months of pre-lockdown in-school work, even if it isn’t “official” coursework – is likely provably so.

But let’s look at it first through the PSED lens. The Court of Appeal in Bridges says that public authorities using algorithms have a duty to work out whether those algorithms could inherently discriminate. I haven’t read as much of Ofqual’s materials as the lawyers crafting the upcoming JRs, but I’m not at all certain Ofqual can show it’s thought that through properly – particularly where its algorithm seems heavily to privilege small-group results (which are far more likely in private schools) and to disadvantage larger groups (comprehensives and academies in cities and large towns).
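(To make the small-group/large-group point concrete, here’s a deliberately crude sketch. It is emphatically not Ofqual’s actual model, which is far more complicated; the cutoff and grade numbers are invented. But it illustrates how an approach that trusts teacher assessments for small cohorts while pinning large cohorts to a centre’s historical results can mark individual students in big schools right down, whatever their predictions.)

```python
# A deliberate caricature, not Ofqual's actual model: invented cutoff, invented grade
# numbers (A*=6 ... U=0). It just shows how trusting teacher assessments for small
# cohorts while pinning large cohorts to a centre's historical distribution can pull
# individual students in big schools far below their predicted grades.
def standardised_grades(teacher_grades, historical_distribution, cohort_size,
                        small_cohort_cutoff=15):
    if cohort_size <= small_cohort_cutoff:
        return sorted(teacher_grades, reverse=True)          # teacher judgement stands
    # Large cohort: rank the students, then hand out grades from the centre's past
    # results, largely regardless of what individual teachers predicted.
    ranked = sorted(teacher_grades, reverse=True)
    history = sorted(historical_distribution, reverse=True)
    return [history[int(i * len(history) / len(ranked))] for i in range(len(ranked))]

predictions = [5, 5, 4, 4, 3] * 6            # a cohort predicted mostly As, Bs and Cs
weak_history = [3, 2, 2, 1, 1, 0] * 5        # a centre with historically poor results

print(standardised_grades(predictions[:10], weak_history, cohort_size=10))   # predictions kept
print(standardised_grades(predictions, weak_history, cohort_size=30))        # pulled down to history
```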

(I have to acknowledge I haven’t spent any time thinking about other EqA issues. Indirect discrimination is certainly conceivable. I’ll leave that reasoning to other minds.)

Now let’s switch to the GDPR issue. We know from A22 that decisions made solely by automated processing are unlawful unless one of the three conditions applies. I can’t see any legal basis for the processing specific enough to satisfy the A22 requirements – certainly none which sufficiently safeguards the rights and freedoms of the data subjects, that is, the students at the heart of this injustice. Nor am I aware of any data protection impact assessment having been carried out – which, by the way, is another legal obligation under A35 where there’s a “high risk” to individuals, self-evidently the case for students whose futures have been decided by the algorithm. And the government’s refusal, so far, to let individual students challenge their grades seems to fly in the face of DPA s14.

One final kicker here, by the way. Recital 71 of the GDPR forms the context in which A22 sits, discussing in further detail the kind of “measures” – that is, systems for processing data – with which A22 deals, and which are only permitted under narrow circumstances. It stresses that any automated measures have to “prevent… discriminatory effects”.

Its final words? “Such measure should not concern a child.”

Watch this space.

Lies and freedom. They don’t mix.

“All politicians lie,” so they say. No; all human beings lie. What matters is what lie, when – and what it does to your ability to choose.

I’m a sucker for a series.

By which I mean a sequence of books (for preference) or a good serialised TV show. Genre, of course – you can critique me all you like, but good fantasy/scifi/etc, written with love and care, can’t be beat.

Pratchett’s Discworld*. DS9 – particularly later seasons as the story gained pace. The Broken Earth. B5, of course, and Farscape. Aubrey/Maturin. Rivers of London. And Dresden.

A long-running tale is part of it, to be sure. But the key is writers and creators who let their characters grow and change over time, rather than remain stable as the world shifts around them. It’s a privilege to be part of that.

My problem, particularly with books where there’s been a long gap between instalments – and I recognise this may just be me – is a tendency to want to re-read the whole series before diving into a new one. Which, with the Dresden Files, is taking a while.

Sometimes, though, doing this unearths gems you may have missed the first time round. There’s a couple buried in Ghost Story which hit me squarely between the eyes – and made me think about what I respect, what I despise, and why I make the distinction.

Late in the book – and I won’t spoil it with too much context for the uninitiated – the main character, Harry Dresden, is talking to someone far mightier, but also far gentler, than he. That person’s mission in life is to preserve people’s right to choose, because good and evil mean nothing unless that fundamental human right is preserved. He notes that a particularly vicious misfortune which befell Harry was born of a particularly well-crafted and well-timed lie: convincing him that what was, wasn’t, and making him think he had no choice but to walk down a bad road.

And the character says this: “When a lie is believed, it compromises the freedom of your will.”

That sticks with me. We all lie. Yes, we whinge about politicians doing it – but we all do. Mostly for self-protection. But there are big lies and little lies. And the difference is found not in the extent of the untruth, but in the anticipated consequence.

So a lie designed and intended to sway the world, to destroy the chance to make an honest decision: that’s the lie that’s unforgivable.

Perhaps this is why our profession’s greatest sin is to mislead the court. Sure, represent your client. Highlight the truths that help. Play down those that don’t. Tell the story in the best way for your side – the most believable way. But to mislead the court – even by hiding a relevant authority that doesn’t help – is to rob the tribunal of its chance to make its mind up. It’s not persuasion. It’s a con.

It’s also why I reserve a special hatred for con artists. Sure, I can admire the artistry a bit – sort of. But the most successful cons are the ones which turn their marks into their best salespeople – people whose self-esteem has been so warped by the lies that it can scarcely survive if the lies are challenged.

And that inevitably leads me back to politics. As I said, all politicians lie. They’re human. Sometimes to make life easier. Sometimes to protect secrets – whether for fair reasons or foul will depend on the circumstances. Sometimes to protect a confidence.

But outright lies, told to sway and shape opinion, when it’s clear on close inspection that the teller knows perfectly well what they’re doing? That’s treating people as pawns. Playthings.

As marks.

Some thinkers take this further. Harry Frankfurt’s famous essay (and later book), “On Bullshit”, made a distinction between lies on the one hand – where the liar at least placed some value on the truth, prizing it in the act of obscuring it – and bullshit, where the teller simply didn’t care what was true and what wasn’t as long as it served their purpose. It’s a distinction that has often been criticised.

I’m not sure where I stand. I see the distinction, and we do seem to be swimming nostril-deep in particularly noxious and damaging political bullshit in recent years. (Brexit, Johnson, Corbyn, Trump, so many others. Lord, the list goes on. And a special mention for Michael Gove, whose Ditchley speech was an example of extreme – and, I can only conclude, calculated – intellectual dishonesty.)

But I think I care less about the lie-vs-bullshit axis than I do about this question of choice. Whether in politics or people’s personal lives – think of abusers warping the world to rob their victims of a vision of anything different, for instance – robbing people of the freedom to choose feels like the big differentiator.

Dan Davies, author of a wonderful book called “Lying for Money”, put it particularly well in something he wrote getting on for a couple of decades ago entitled “Avoiding projects pursued by morons 101”. Seriously, read it – it’s not long. But it boils down to three rules, all of which focus on lies and testing them:

  • Good ideas do not need lots of lies told about them in order to gain public acceptance. (If people won’t buy into them without being lied to, that tells you everything you need to know.)
  • Fibbers’ forecasts are worthless. (You can’t mark a liar to market. You can’t hope to fudge their numbers towards reality. If a liar says “this is what will happen”, the only safe thing is to assume the opposite.)
  • The vital importance of audit. (Any time someone won’t let their predictions or their advice get tested against reality, or moves the goalposts mid-game, run. Immediately.)

Put differently: if you catch someone deliberately lying to you so as to change your mind about something important, that’s it. They’re done. Stop listening to them. Now.

You can accept lies as a fair form of discourse. Or you can – while accepting that we’re human, and so we fail – focus on the right to choose with your eyes open.

You can’t have both. And anyone who favours option one? Don’t trust them. Ever. About anything.

* I’m gradually re-reading the whole Discworld saga. Taking it very, very slow. Essentially to leave till the last possible moment the time when I pick up the Shepherd’s Crown – because it will be the last new Pratchett I ever read. And that hurts.

Adieu, Privacy Shield. Cat. Pigeons.

That’s torn it. The CJEU says Privacy Shield, the deal which lets EU companies send data to the US, is no good. Not only does this cause huge problems for everyone using Apple, Google, Microsoft and about 5,000 other firms – but it also foreshadows real problems for the UK come 2021.

As David Allen Green might put it: Well.

The tl;dr version: Max Schrems, the guy whose lawsuit did for the old Safe Harbour principles allowing trans-Atlantic data exchange, has done it again. He asked the Irish data protection commissioner to ban Facebook from sending his data to the US. The DPC asked the High Court to ask the Court of Justice of the EU. And now the CJEU says Privacy Shield, the mechanism which since 2016 has allowed the US to be seen as an “equivalent jurisdiction” for data protection purposes, isn’t good enough.

This isn’t entirely unexpected. Although the Advocate General’s opinion in December took the view that the Court didn’t have to examine the validity of Privacy Shield (while by no means giving it a clean bill of health), many in the data protection game pointed out that Privacy Shield didn’t resolve some core problems – in particular the fact that nothing in it stops US public authorities from accessing non-US citizens’ personal data well beyond the boundaries established (since 2018) by GDPR – any more effectively than Safe Harbour had. As such, so the argument went, there wasn’t any good reason why the CJEU would think differently this time around.

And that’s how it’s turned out. The Court (decision here) says Privacy Shield is no good.

To be clear: this isn’t farewell to all trans-Atlantic transfers. If they’re necessary, it’s OK – so this doesn’t stop you from logging into Gmail or looking at a US-hosted website. But EU companies sending their people’s or their customers’ personal data to US servers for processing solely in reliance on Privacy Shield will need to stop.

And most, on the whole, don’t. Instead, they rely on mechanisms such as standard contractual clauses or SCCs (almost nine-tenths of firms, according to some research), which use language approved by the European Commission in 2010.

Today’s ruling says SCCs can stay. But it puts an obligation on data controllers to assess whether the contractual language can really provide the necessary protection when set against the prevailing legal landscape in the receiving jurisdiction, and to suspend transfer if it can’t.

And there’s the cat among the pigeons. Hard-pressed and under-resourced data protection authorities may need to start looking more intently at other jurisdictions, so as to set the boundaries of what’s permissible. And data controllers can’t simply point to the SCCs and say: look, we’re covered.

In other words: Stand by, people. This one’s going to get rocky.


(Obligatory Brexit alert…)

Just as a final word: as my friend Felicity McMahon points out, this is really, really not good news for the UK. When the transition (sorry, implementation) period ends on 31 December 2020, and our sundering from the EU is complete, we will need an adequacy decision if EU firms are to keep sending personal data to the UK unhindered. One view is that since the Data Protection Act 2018 in effect onshores the GDPR wholesale, we should be fine. But there are significant undercurrents in the opposite direction, thanks to our membership of the Five Eyes group of intelligence-sharing nations (with the US, Australia, Canada and New Zealand) and the extent to which (under the Investigatory Powers Act 2016) our intelligence services can interfere with communications.

Since this kind of (relatively) unhindered access by the state is what’s just torpedoed Privacy Shield, I’m not feeling terribly comfortable about the UK’s equivalence prospects. I hope I’m wrong. But I fear I’m not.

As news stories used to say: Developing…

A risk management approach to… no, you know what? Just wear a damn mask already.

I know plenty of risk management experts. I’m not one. But I know enough to know that mask-wearing when indoors with other people is a no-brainer. Not to mention just the decent thing to do.

I’ve spent a fair amount of time wrestling with risk assessments, mostly to do with corruption and data protection: planning them, doing them, revising them, advising on them, responding to them. I know plenty of people who’re far more expert in assessing and managing risk than I am. But I’ve learned enough to prize some of the fundamentals. And to realise how they can be applied more generally.

Say – just for instance – to how us normal human beings should respond to Covid, the easing lockdown, and any attempt to get back to a new normal. Particularly where facemasks are concerned.

The experts will wince at the next bit, when I try to boil down the bare basics of risk management into easy-to-digest chunks for the hard of thinking (like me, in this regard). Apologies in advance for how simplistic this is.

But fundamentally you assess risk and respond to it in three stages.

First, you work out what your risks are. Forget about whether they’re everyday or one-in-a-million for a minute. What could go wrong – not for any organisation, but specifically for yours? In a large organisation this can be a mammoth task, involving questionnaires, interviews, meetings and lord knows what else. But for small groups it’s essentially a test of imagination, and being honest with yourself.

Secondly, for each risk, try to assess how much you should worry about it, which is primarily about answering two questions:

  • How likely is it that it could happen?
  • And how bad would it be if it did?

Everyone will have their own way of combining these two factors – often there’s a three- or five-step measure for both probability (the first question) and impact (the second), and some kind of matrix to tell you what combinations you should really worry about. But often a RAG (red/amber/green for high/medium/low) rating is good enough, where you focus particularly on anything amber/red or with two reds (although depending on the circumstances green/reds and amber/ambers may at least need a bit of thought).
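(If it helps, here’s the kind of lookup I mean, sketched in a few lines of Python – the three-point scales and the particular colour assignments are just one common convention, nothing more.)

```python
# One common convention for the probability x impact lookup described above -
# the scales and colour assignments are illustrative, not gospel.
LEVELS = ("low", "medium", "high")

RAG_MATRIX = {
    # impact:    low       medium    high
    "low":     ("green",  "green",  "amber"),
    "medium":  ("green",  "amber",  "red"),
    "high":    ("amber",  "red",    "red"),
}

def rag(probability: str, impact: str) -> str:
    """Combine a probability rating and an impact rating into a RAG colour."""
    return RAG_MATRIX[probability][LEVELS.index(impact)]

print(rag("high", "high"))   # red: drop everything and deal with it
print(rag("low", "high"))    # amber: worth at least a bit of thought
```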

Third and finally comes the really important bit: what do you do about it? The classic four choices (not all of them exclusive, of course) are:

  • Avoid: just don’t run the risk at all. A company, for instance, could decide simply not to do business in certain jurisdictions.
  • Transfer: insure against it, so someone else picks up the tab. For anyone who drives a car, this will sound familiar.
  • Mitigate: what steps can you take to reduce the risk? How well will any given mitigant work? The best mitigants, of course, help with more than one risk.
  • Accept: sometimes you just have to suck it up. Particularly for relatively low-impact risks, this may be the only cost-effective answer.

So what does this have to do with mask-wearing? (And yes, I know we were told for months that it wasn’t worth it, that the evidence for it being helpful was marginal at best, that it didn’t really protect you from other people. Although how much of that was solid and how much was really about mitigating – there we go – the disastrous and negligent PPE shortage is anyone’s guess.)

Well, let’s walk through the steps.

  1. The risk is obvious. It’s getting Covid when in an enclosed space with people other than my household. Or giving it to someone in the same environment. (OK. Two risks. Easy to overlook the second one, though.)
  2. Impact: really high. Yes, I or the person I give it to might get lucky. (And yes, I already did – given that my dose of The Bug was horrible for a couple of weeks and now seems to have wholly departed. I’m humbled by how fortunate I was.) But those odds suck. Probability: also pretty high. And – what’s worse – impossible to calculate with any reliability, given that neither I nor anyone else will know we’re infectious until it’s far too late to avoid hurting people. I’m calling this Amber-Red at least, and probably Red-Red.
  3. Response: can I avoid it? Short of becoming an anchorite, no. Can I transfer it? No. Insurance won’t stop me from dying. Can I just accept it? Well, maybe if it was just about me – but it’s not. This is about the large and fundamentally incalculable risk I pose to others. Call me judgmental, but prioritising my freedom if it puts others at grave risk just seems unutterably selfish and inhuman.

So what’s left? Mitigation. What can I do? I can wash my hands. Effective and easy. I can keep my distance. Less easy – yes, I’m looking at you, the gin-in-a-can buyers in Aldi who insisted on standing 18 inches from my back in the queue yesterday while laughing raucously. And problematic in some jobs and workplaces. But no reason not to do it to the extent reasonably possible.

And masks. Yes, masks. I can’t help noticing that most of the countries who are beating this thing take mask-wearing as a given. Japan, in particular, which was as late as we were in taking concrete steps to protect its citizens, seems to have done surprisingly well. A place where mask-wearing to protect others when you’re sick is regarded as about as basic a propriety as not being naked on public transport. (I realise there are a number of other potential reasons for Japan having escaped our fate. But the consistency across habitual mask-wearing states is interesting.)

Even if – as some suggest – the benefit from masks is marginal, marginal makes a pretty big impact when multiplied across a multi-million-strong population. And where R is close to 1, or ticking above it, marginal becomes even more important. Literally (and imagine how much it pains a pedant like me to use that word) a life-and-death difference.
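(A back-of-the-envelope illustration, with invented numbers and ignoring everything a real epidemiologist would care about: a 10% cut in transmission is the difference between R of 1.05 – slow exponential growth – and roughly 0.95, which is decline.)

```python
# Back-of-envelope only, with invented numbers and none of the things a real
# epidemiologist would model: a 10% cut in transmission takes R from 1.05
# (slow exponential growth) to roughly 0.95 (decline).
def cases_in_generation(generations, r, initial=1000):
    """Cases in the nth generation of infection, starting from `initial` cases."""
    return initial * r ** generations

for r in (1.05, 1.05 * 0.9):
    print(f"R = {r:.3f}: about {cases_in_generation(20, r):.0f} cases "
          f"in generation 20, per 1,000 today")
```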

I recognise there will be medical reasons not to. I recognise it’s hard or impossible for small kids. I recognise (from personal experience) how damned annoying it is with fogged-up glasses.

But what it comes down to is this: I am my sister’s keeper, my brother’s keeper. I don’t know if I’m dangerous to them. I can reduce the chances of me hurting them by putting a mask on when indoors with others, at minimal cost to myself. So I’m going to. Please do likewise.

(Note: from everything I’ve seen, this is fundamentally an inside problem, not an outdoors one. I tend not to mask up when I walk down the street, or run, or cycle (although I do everything I can to keep my distance), and I wouldn’t blame anyone else for doing likewise. But in a shop, or an office, or a place of worship, or anywhere else which is indoors… well, just put the thing on, OK?)

A corruption hypothetical.

When people claim the UK is “clean” (usually while denigrating somewhere else) it always makes me angry. Because corruption creeps in everywhere, and never more so than when people are convinced it doesn’t exist…

Imagine the following synopsis of a news story:

  • A property developer in a legendarily corruption-prone country – let’s call it Bribeia for the sake of argument – wants to build something that needs planning permission.
  • So he donates a chunk of cash to the coffers of the ruling party. As part of this donation, he gets to come to a rubber-chicken, thousands-a-plate fundraiser and hobnob with ministers.
  • He ends up sitting next to one such minister, and tells him about the development, urging that it be approved. The minister is non-committal, but they swap mobile numbers.
  • They then exchange multiple text messages. The minister continues to be carefully non-committal in his text messaging, but the developer tells him that he needs the approval by a deadline to avoid paying a whopping tax bill to the local government – coincidentally run by the main opposition party.
  • The non-committal communications notwithstanding, the minister tells his civil servants not only to approve the development, but to make sure it’s done in time to avoid the tax.

You’re all smart people, so you’ll all have instantly recognised that this is the Jenrick-Desmond affair, albeit transplanted elsewhere.

But tell me honestly. I mean it: do tell me, whether on Twitter, LinkedIn or otherwise. If this chain of events happened in a country in the bottom half of the TI CPI, would you hesitate for all that long before regarding both the developer’s conduct and that of the minister as potentially corrupt?

And if that’s the case for Bribeia, why’s it any different here?

(I’ll leave it as a thought exercise for the reader to analyse Desmond’s conduct in the context of Section 1 of the Bribery Act, pausing only to note that the person who is given or promised the advantage doesn’t have to be the same person as the one who performs a function improperly, but also noting that it might be tricky to prove intent. Similarly, an interesting academic exercise is to imagine that Desmond was indeed dealing with a Bribeian minister (that is, a foreign public official) rather than a UK one, and assess his conduct in the context of section 6. Although from what I’ve read, the test at s6(3)(a)(ii) looks unsatisfied.)

Privacy: one step forward, one step back.

A quick hit here to memorialise two privacy-related bits of news: a German court bans Facebook from tracking you elsewhere, but US Republicans try – again – to ban encryption that actually works.

Like many people, I barely use Facebook. And when I do, I only do so when using Incognito (Chrome) or Private Browsing (Safari). It’s annoying logging in each time (albeit less so with 1Password). But it stops Facebook from doing something I viscerally loathe: tracking everything else I do, everywhere else, thanks to tracking code and cookies.

I get that this may make me a paranoid tin-hat type. I’m OK with that. Just like I’m OK with blocking ads which rely on adtech, preventing videos from auto-playing, and generally trying to stop a simple text website from downloading an extra double-digit MB load of data so they can show me ads so intrusive that I never want to go back to the site in question. (I’m fine with ads. I like free stuff, paid for by advertising. But adtech-delivered ads are essentially a conman’s dream. And from a data protection/privacy perspective, I have grave doubts about whether adtech is lawful. So I’m very happy to screw with it.)

Which makes a German court’s decision to reinstate a ruling banning Facebook from combining its own data with that from other sites into so-called “super-profiles” very interesting. The ban was at the behest of Germany’s Cartel Office, and the judge’s ruling (press release in German here) said there wasn’t any serious doubt over whether Facebook was (a) dominant and (b) had abused that position – particularly by getting information from non-Facebook sources.

The ruling only applies to Germany, of course. But this does seem to be the first time that cross-site tracking and data collection has been seriously set back. Which may make things slightly hotter, legally speaking, for adtech’s widespread consent-less collection of personal data. The dominance question doesn’t necessarily arise for adtech generally, of course, but the ruling nonetheless explicitly addresses, in resolutely negative terms, what Techcrunch calls “track-and-target” and what writers like Shoshana Zuboff and many others call surveillance capitalism. It does so by noting that a significant number of Facebook users would prefer not to be tracked and targeted, and that a properly-functioning market would allow them that option. It’s hard to see how the same can’t be said for adtech in general.


Less encouraging, and far more predictable, is US Senate Republicans’ move to introduce legislation (the LAED Act – seriously, these acronyms…) to “end the use of warrant-proof encrypted technology by terrorists and other bad actors”. As almost any even slightly encryption-savvy person will know, this translates to “making encryption stop working securely”. Simply put, if – as this legislation would appear to require – a service provider keeps a key to your comms so it can give it to law enforcement, then end-to-end encryption is done and your comms aren’t secure any more. As Ars Technica puts it, “Encryption doesn’t work that way.” Anyone claiming it does is either ignorant or acting in bad faith. No real middle ground there.
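(For the avoidance of doubt about what “end-to-end” actually means, here’s a minimal sketch using the PyNaCl library. The point is structural: the private keys are generated and kept on the endpoints, so there is nothing for the provider to hand over – and the moment the provider is required to keep a copy, that property evaporates.)

```python
# A minimal sketch using the PyNaCl library of what "end-to-end" means: the private
# keys are generated and kept on the endpoints, so the provider has nothing to hand over.
from nacl.public import PrivateKey, Box

alice_private = PrivateKey.generate()      # stays on Alice's device
bob_private = PrivateKey.generate()        # stays on Bob's device

# Only the *public* keys travel via the service provider.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The provider relays `ciphertext` but, holding no private key, cannot read it. Bob can:
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))   # b'meet at noon'

# Require the provider to hold (or escrow) a copy of a private key, and the
# end-to-end property above is gone. That's the whole objection.
```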

John Gruber points out that its proponents’ description of the bill as “a balanced solution” – because the key would only be handed over with a court order – is hogwash. If a key exists, it becomes a target. “That’s how the law works today,” he writes. “What these fools are proposing is to make it illegal to build systems where even the company providing the service doesn’t hold the keys.”

Fools seems like a generous description. It presupposes good faith. I’m not sure I’d go that far.

Being a father without a father: the pleasure and pain of Father’s Day.

Sussex, late May, 2014.

Father’s Day is bittersweet.

Sweet, because my wife and daughter are blessings past compare, proof if any were needed that God, fate or the universe can forgive our failings and give us a life far better than we deserve.

And bitter because I no longer have a father to celebrate.

I lost him on 27 September 2014. I remember our last day together just the two of us, in late May that year: as we trod the West Sussex countryside, the limp from his 2012 stroke present but no longer dominant, talking and walking as we had so many times, ending on a bench outside his village’s church as we watched the birds swoop overhead. I remember our last phone call a few days before it happened, his voice a whisper, drained as he was by six weeks of radiotherapy. I remember his funeral in the cathedral in Winchester, whose bishop he’d been for 15 years, struggling to keep my voice clear and level as I read from the book of Ephesians (the end of Chapter 3 and the start of Chapter 4).

Almost six years on, the loss has long passed into normality. And aside from an ache that he never got to see his grand-daughter grow into the amazing person she’s becoming, mostly it doesn’t hurt too bad.

But one thing still stabs home. His death came just as I was considering – very late in life – becoming a lawyer. It was four months later that I started studying law. Two years later I started bar school. Four years later I became a pupil at Outer Temple. And five years later, I became a tenant.

And it hurts that he wasn’t a part of that decision. Because before every one of my significant, life-changing career calls till then, I’d always sought him out. And we’d walked. And talked. And asked and answered questions. And pondered in silence, the only sound our footfalls and nature around us. It was a part of my process. And it was gone.

He’d have loved the vicarious thrill of me becoming a barrister. Every millimetre of it, through GDL, BPTC, pupillage and tenancy. He’d have found it fascinating. Asked thoughtful questions. Wanted genuinely to understand the how and the why. And, I can’t help but think, having him do so would probably have made me a better lawyer.

Perhaps that’s why, in fact, I didn’t really talk about the experience with my family (other than my wife and daughter, of course), until the BPTC results came through and I knew pupillage lay ahead. The thought of doing so without my dad being there was just – wrong, somehow.

So here I am. I made it. I love it. But every so often, as I encounter some abstruse but fascinating legal point and my face breaks into a smile as I ponder the sheer beauty of the reasoning around it, just for a split second, I think: you know who’d have loved to talk this one through? And the smile flickers.

Still, in some ways he’s at my shoulder. If I consider an argument that isn’t properly grounded, or a tactic that isn’t honourable, I can almost hear him gently asking me why I’m going that way. Not always, but sometimes. And that voice is usually right. And takes me back to that bit of Ephesians, which tells us to “live a life worthy of the calling you have received”. Yes, I know it’s talking about another kind of calling altogether. But still, it rings true.

So here’s to you, my father. Rest in peace. Rise in glory. Be blessed. I know I am.

Apple’s tin ear to competition timing.

I’ve been using Apple products my whole adult life. But that doesn’t make me a cheerleader. And on the very day a competition investigation was announced, Apple did something so apparently boneheaded that they’re rightly being called out.

I do have a slim legal figleaf for writing this post, albeit not one born of any particularly deep legal insight. But some things demand comment. And Apple’s treatment of HEY, a new email service, is one of them.

(The legal figleaf is about whether this treatment is a symptom of broader behaviour which violates EU competition law. Scroll down to get to that bit.)

I can’t remember which of Patrick O’Brian’s Aubrey/Maturin books it was (one of these days, a start-to-finish reread beckons – my late dad introduced them to me, and it’ll be one more way of communing with him somehow). But in one of them, Stephen Maturin – never a one for hierarchy or cant – expresses his disdain for patriotism, at least in its early 19th-century format.

It “generally comes to mean either my country, right or wrong, which is infamous, or my country is always right, which is imbecile”, he tells Jack Aubrey. [UPDATE: It was Master and Commander, the first in the series.]

Personally, I’m fine with patriotism. I’m a patriot, so long as that encompasses being honest about my country’s flaws and misconduct and wanting them fixed. But no-one but a charlatan could deny that Maturin’s characterisation is, far too often, spot on.

Tech has traditionally suffered from a similar tendency. Windows vs MacOS. Google vs everyone. iPhone vs Android. PlayStation vs Xbox. Facebook vs – well, probably common decency and humanity? (That one’s an outlier.) The flame wars and arrogance besetting tech arguments are painfully legendary. While lots of us (most, even) manage to recognise that our preferred system, app, platform isn’t perfect, and can and should learn from its competition, the sheer ugliness of tech-on-tech “conversations” (huh) gets unutterably wearing. Even setting aside its truly poisonous emanations, such as the misery inflicted on women and minorities by #Gamergate and similar foulnesses.

All this is really a preamble to explain that while I’ve used predominantly Apple kit my entire adult life – from Mac SEs at college, to PowerBooks including the wonderful Pismo, then a succession of MacBook Pros and Airs; iPhones of varying types ever since the 3G; and best of all, a sequence of iPads that have genuinely, radically, revolutionised the way I can and do work – I’m not a fanboi. (My friends from the Windows/Android side of the fence who’ve ribbed me for years are I hope honest enough to recognise this. As I do about them. I’m lucky in my friends.)

And so when Apple does something truly boneheaded, to put it as gently as one can, its friends need to call it out.

It’s come to a head through what could be a coincidence, although it’s a pretty telling one. On the same day that the European Commission announced an investigation into possible breaches by Apple of EU competition law involving the App Store and Apple Pay, HEY, a new email app by the people who brought you Basecamp, is facing getting kicked off the App Store because it sells subscriptions other than through an in-app purchase.

This boneheadedness has been brewing a long while. Apple charges a 30% cut on purchases through its App Store on iOS. (And on the Mac, although apps can be directly downloaded there, so it’s slightly different.) Apps in theory can’t route around that by selling subscriptions or licences elsewhere. Except for the ones that can. The classing of who can and who can’t would be laughable to anyone who wasn’t suffering from it – Reader apps? Really? And an unwritten business-vs-consumer divide? Come on. Dieter Bohn called it out as a prime example of the No True Scotsman fallacy, and I think he’s right (his piece for The Verge, which is excellent, is here). John Gruber, meanwhile, pointed out that the biz-consumer divide was both artificial and unworkable, and – just as bad – a betrayal of Apple’s own history.

The hypocrisy, both in HEY’s case and elsewhere, is impressive. Loads of email apps sell subscriptions elsewhere. Basecamp, for heaven’s sake, sells subscriptions through its website. That’s its business! The “Reader” definition is woolly at best. It often feels far more as though whether you get pushed around like this depends on how big you are. HEY isn’t the first, by a long way. But it’s the latest. And perhaps the timing may finally make a difference.

If I sound angry, that’s because I am. Apple’s 30% App Store tax is way, way too high. Its application is (put neutrally) sporadic. And – and here comes the legal figleaf – while I know very little about competition law (a terrifyingly technical field; try someone like Monckton Chambers for that), I really want to read what EU antitrust specialists are thinking about this.

Because to a very shaky first approximation, I wonder whether an argument could be made that Apple’s App Store policies breach Article 102 of the TFEU, which bans improper exploitation of market dominance, as follows:

  • The relevant market here isn’t all smartphones (Apple would probably walk home on that one, given that 85% of phones are Android) but iOS devices, on the basis that for a majority of their users substitution for another brand isn’t really an acceptable option given both user preference and platform lock-in.
  • Needless to say, Apple is dominant in the iOS market…
  • That dominance exists within the EU’s internal market, since iOS devices are sold across the region.
  • The dominance affects trade between member states, since a developer in Austria will routinely sell its app to customers in Malta. And so on.
  • Its pricing is excessive – in that it is 10 times what, for instance, a credit card processor might charge – and also discriminatory, in that its rules (as described above) seem to be arbitrary.
  • And it abuses its dominance by imposing an exclusive dealing obligation – by preventing anyone from accessing iOS users other than through the App Store, or more narrowly preventing them from charging other than through the App Store.

I’m pretty sure any genuine competition lawyer is going to read the above back-of-an-envelope analysis and laugh till they choke. There are no doubt acres of relevant authority which show I’m foolishly misreading Article 102. Aside from anything else, the relevant market point is a massive what-if. But it’s not a bad place to start. (If anyone’s seen any good stuff on Twitter or elsewhere about this, from a legal analysis perspective, do let me know – whether via email or Twitter. I haven’t had time to go looking this past 24 hours owing to other deadlines, but I want to learn about it.)

And whether I’m right or not, this leaves a really foul taste in the mouth. Apple’s a commercial firm, and will do what’s best for it. No illusions on that score. But its leaders always used to say that making money was what happened as a by-product of building great things, not an aim in itself. I can’t see how this possibly matches up to that aspiration. Not even close.

Trust, trash and privacy notices.

Some time last week – and it may have been up for longer, but I haven’t checked – several people on Twitter started commenting on NHS England’s privacy notice for the Test and Trace programme. And oh sweet Jesus, it’s a fail.

What’s worse, in the current environment, that fail may have deadly consequences.

I don’t want to take too long over the details. Suffice it to say that a programme which fails properly to address questions of whom personal data might be shared with, refers to it as “personally identifiable information” which is a concept wholly absent from UK data protection and privacy law, says it will hang onto everything for 20 years, demands the provision of huge amounts of information about other people – OK, only for 5 years, but still – and entrusts it to several private enterprises with (at best) dubious records with other people’s data (including inadvertently leaking the email addresses of 300 of its trainee tracers), is a programme for which the phrase “privacy by design” really doesn’t seem appropriate.

Add in the stories which suggest that the training of those who’ll be working on the Test and Trace programme is appalling in its inadequacy, and the government’s refusal to undertake a data protection impact assessment first, and this is carelessness, bordering on (gross) negligence.

I’m trying to be polite here. You may have noticed.

Because this is deadly serious. Literally so.

Lockdown lifts, partially, tomorrow. Looking at foot traffic on the street, and at pix of a crowded Clapham Common, and hearing from school-age kids of how their friends are already acting as if it’s all over by visiting each other’s houses just as they were in February, it’s over.

I can’t say how much of that is a reaction to the insouciant arrogance of the Cummings/BoJo double-act re Cummings’ wilful breach of regulations, and his wholly implausible explanation for it. (I describe it as such because, if the other side’s witness gave that kind of explanation in the box, I would happily shut up and let them keep talking, providing gold dust for my closing submissions.)

But this I’m sure of. The trust, which undoubtedly existed in late March and early April, is now gone, “trashed” as one behavioural expert put it – even among many of BoJo’s natural supporters. And without trust, Test and Trace won’t work. The privacy policy might have been acceptable if we trusted the powers that be not to be cavalier with things that matter to us.

But I don’t, not any more, not after they’ve shown us just how little respect they have for those they govern. And I’m certain I’m not alone. Matt Hancock’s “just trust me” approach wasn’t good enough for Harriet Harman, and it isn’t for the rest of us.

This, by the way, shows that Apple and Google were right to take the decentralised, privacy-first approach they did to building exposure notification into their mobile OSes. I don’t hold a brief for either. Both have immense faults (on privacy, Google in particular). But this was the wise approach. Give people control, and put trust in them, have faith in them, to listen to their better angels.

This is something our government never did. Lockdown was slow because we couldn’t be trusted to obey. Yet we did, overwhelmingly. Until the rules were muddied and it became clear they only applied to the little people.

So where does that leave us? An under-trained Test and Trace workforce, run by private contractors proven to be untrustworthy, collecting data precious to us with minimal genuine controls, ignoring if not deliberately sidelining local authorities who both know their areas and know how to do this, properly, personally and professionally, in favour of a classic mass-outsourcing impersonal “pile it high” approach. Contrary, it won’t surprise you, to contact tracing best practice which has actually worked elsewhere.

On the basis of this, people are to be asked to self-isolate for 14 days with no guarantee of ongoing job or wage protection, by people who clearly don’t think this applies to them. And with lockdown being lifted just now, when our infection and death rates remain far, far higher than other countries who have lifted lockdown, but without masking in any material numbers? You don’t have to be a conspiracy theory-loving leftie to wonder whether the speed is, at least in part, a distraction from the Cummings fiasco.

I didn’t mean to sound angry. But I can’t help it. Like I said: this is deadly serious. I just can’t understand why those running the show don’t seem to be treating it that way. I really, really wish they did.

Scrivener. Wow.

Ten days ago, I wrote about Scrivener. I said I thought it might help me get through a book project with a tight deadline. Boy, did I understate things. I think I’m in love.

So that’s (nearly) it. 25 days and some 30,000 words later, the first draft of the book chunk I’ve been working on is done. (Nearly, because I still need to read and no doubt do some rewrites and cuts tomorrow. But I’m fine with that.) It’ll be with the friend and colleague who commissioned it by Monday morning.

And I couldn’t have done that without Scrivener. (And to a lesser extent Notion, the other app I wrote about.)

I wrote about Scrivener 10 days ago, lauding it (although complaining about its iPad app) and hoping it’d help me get this thing done. (And incidentally giving said friend and colleague the fear – which I can understand; after all, I did say, explicitly, that I was indulging in displacement activity by trying it out.)

I was wrong. In that I understated things. Truthfully, I don’t think I’d’ve done this, in this time, without the app.

I recognise that I’ve barely scratched the surface of it. Its manual, a wonderful old-school single PDF (albeit one with full and loving internal linking), is 921 pages long. I’ve probably looked at a dozen of them. I haven’t even begun to experiment with its ability to compile documents into specific formats. Frankly, right now, I don’t have the time.

But simply by encouraging one to split the project up into logical chunks, and then making it staggeringly easy to see, manipulate and write or edit them separately, in groups, on a pinboard, as an outline, or as a cohesive whole, it makes writing anything of any size conceptually straightforward. And it’s blindingly fast. The only downside is that, as far as I can tell, it only syncs through Dropbox – which I barely use, and I’d prefer not to have to have it running all the time. But that feels a relatively small price to pay.

I finally understand why another friend, Naomi Cunningham, now pretty much refuses to use anything else.

I’m now actually going to RTFM. Honestly. I want to get under the skin of this thing.

A quick word about Notion, too, the other app I was experimenting with. I haven’t settled into it as an “everything bucket” yet, not least because it doesn’t seem to have a Safari web clipper – so I’m still on Evernote as a “clip web pages and store PDFs” dumping ground. But in other ways, it excels. I’ve pages running for several projects, and for a live case list. Each page is using a different kind of design – a list in one, a kanban board in another (that was for the book – I finally get the point of kanban, although it’s definitely a project thing rather than for general todos), a straightforward wiki in another. Once I’m used to it, I may simply import the Evernote stuff and stick with just the one. Time will tell.

In the meantime, though, Scrivener – wow. Just wow. And thanks.

(UPDATE, following a bit of further reflection. I might have been able to do this without Scrivener. But I probably wouldn’t have been able to do anything else. Whereas instead I’ve managed to keep other work running alongside – admittedly with long hours, but Scrivener has really helped me keep focus even in the wee hours. Now that’s the real miracle…)