2021v28, Friday: Not oil.

Getting the analogy wrong can ruin policy, as our approach to data has shown. And turning to real fossil fuels: two big, big events involving an oil company and a coal mine.

Short thought: A short break from thingification, because this week marks three years since the General Data Protection Regulation, or GDPR as most of us know it, came into force.

Many hate it. It’s caused a huge amount of work for organisations of all kinds. It’s clunky, imprecise, open to vast interpretation as to how its extensive obligations should be implemented, and therefore tends towards lengthy tickbox exercises rather than the “privacy by design and default” which is, to me, the heart of the whole exercise.

And, of course, it leads to all those dialogs every time you log into a website. And more recently, also the pointless and aggravating requests that you acknowledge the site’s legitimate interest in using your data any way it wants.

(Pointless because if push came to legal shove, I can’t believe a court would waste more than a few seconds on any such factor. Your legitimate interests aren’t something you can just sign away. Particularly not without genuine consideration. Yet another annoying figleaf. Aggravating for the same reason.)

But I still think the anniversary is worth celebrating. Because GDPR did something really important. It enshrined, far more strongly than its predecessor legislation, the core principle that your data is yours. It’s not some public resource that organisations can use however they want; some commons they can enclose at will.

Analogies are important here – and yes, I realise we’re back on stories again, because when the story’s wrong, our responses to it are wrong too. Here, the problem is the dominant analogy: “data is the new oil” is a phrase often bandied around, but as Matt Locke notes here, this is entirely the wrong categorisation:

The discussions around data policy still feel like they are framing data as oil – as a vast, passive resource that either needs to be exploited or protected. But this data isn’t dead fish from millions of years ago – it’s the thoughts, emotions and behaviours of over a third of the world’s population, the largest record of human thought and activity ever collected. It’s not oil, it’s history. It’s people. It’s us.

To indulge in a bit of shameless exaggeration, treating data as a common untapped resource from which anyone can make a buck is akin – in direction if not in scale – to treating Swift’s Modest Proposal as a sensible contribution to the argument on population control.

Think of data as a part of ourselves, and suddenly the priorities change. Stories like the UK government’s attempt – again! – to give relatively unfettered commercial access to health data become as vile as they seem. (On that, instructions to opt out are here – the deadline’s 23 June.)

It’s not oil. It’s us.


Ultra-short before-thoughts: While we’re on the subject of oil, a couple of interesting items which I haven’t had time properly to process yet:

  • A small investment outfit has managed to force directors onto the board of Exxon who actually care about climate change. My recent reading of The Ministry for the Future, by Kim Stanley Robinson, has been scaring me witless, and bringing me to the belated realisation of just how much harm climate change naysayers have done to my daughter’s future. About damn time.
  • This one I really want to read and consider: an Australian federal court has denied an application by several children for an injunction to stop a vast open-cast coal mine. But in doing so, it’s found something legally fascinating and with huge potential implications: that there is a duty of care on a government minister to consider what such a project will do to those children’s futures. To anyone with a nodding acquaintance with the common law jurisprudence of negligence, this is immense: new duties of care rarely emerge, with courts (at least in England and Wales) highly reluctant to go beyond existing categories of duty; and only then incrementally and with small steps, based on analogy with existing duties. (For a really good explanation of how this works, see the case of Robinson in the UK Supreme Court.) I really, really need to see how the judge in Australia reached this conclusion (which is at [490-491] of the judgment). I’m on vacation next week, so I might have time to take it in. 

Because I’m on vacation, no promises about posts next week. I’ll try to take thingification a bit further forward, and there’s so much to do on privacy. We’ll see where we get to.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021v17, Monday: It’s not just about you.

Why an apolitical workplace is a luxury only the comfortable can afford. And a cut-out-and-keep caustic guide to AI ethics.

Short thought: One of the more interesting “little firms that could” in the online services space has long been the outfit currently known as Basecamp. Its founder, Jason Fried, has been voluble – and thoughtful and interesting – about how to do good work remotely, long before the past year made that a necessity.

But now he and David Heinemeier Hansson, known as “DHH” (together the senior management of Basecamp), have solidly put their feet in it. I won’t rehearse the background in detail, because others have done it far better. The tl;dr version (and this is a really thin summary of a big story):

  • Basecamp employees – a sizeable chunk of the 60-odd staff base – started to work on diversity and inclusion issues. Management blessed this.
  • In the process, the fact that for many years the firm’s internal systems had hosted a list of “funny customer names” – many of which, inevitably, were those of people of colour – came in for understandable criticism.
  • Initially, management were onside with this criticism; indeed, they owned their part in the list’s maintenance over the years.
  • But then it got ugly. A number of staff saw the list in the context of ongoing institutional discrimination – not just or even not mainly at Basecamp, to be clear, but societally. Management (Jason and DHH) pushed back against what they seemed to see as an over-reaction.
  • Jason and DHH announced that political discussion was now to be off-limits. (They later amended this – albeit apparently without making it clear that there was an amendment – to it being off-limits only on Basecamp’s own chat and comms systems.) They also said they would withdraw benefits, instead simply paying the cash value thereof, so as not to be “paternalist”.
  • This caused uproar. An all-staff meeting saw one senior and long-time executive play the “if you call this racism, you’re the racist”, “no such thing as white supremacy” card; he resigned shortly afterwards. As many as a third of the staff have now also taken redundancy.
This might seem like a tempest in a teacup. Small tech firm has row; news at 11.

But it’s not. Tech is still overwhelmingly white and overwhelmingly male, particularly at its senior levels. (It may not escape your notice that the Bar isn’t much better.) Which means its leadership often misses the key point, which is this: when you’re not rich and comfortable, when your life has incorporated a lot of moments where you don’t get to expect everything will go smoothly, when you don’t have that much of a safety net, when large numbers of people at all levels of power get to mess you about just because they can, without you having much recourse, just about everything is political.

Healthcare is political, if its availability and quality vary depending on where you live and what you look like. (Don’t doubt this: I’ve seen healthcare professionals, who I’m certain would be genuinely horrified by conscious prejudice, treat Black women with breathtaking disdain compared with how they talk to people like me.) Pay is political. Work is political, because expectations and yardsticks vary unless we pay honest attention to how they’re generated and applied.

Put simply: cutting political and social issues out of the workplace is a luxury only comfortable people can afford. A luxury which exacerbates, rather than diminishes, the power imbalance built into workplaces by the sheer fact of people’s dependence on a paycheque. (This, by the way, is why in the UK and Europe we say people can’t consent to the use of their data in the workplace. If the alternative to consent is “find another job”, that isn’t free consent for anyone without a private income.)

For Jason and DHH to take this approach is to forget that the only people for whom politics doesn’t relate to business are those who get to dictate the terms of what goes and what doesn’t. The blindness appears to dismal effect in a post by DHH on “Basecamp’s new etiquette at work”:

Just don’t bring it into the internal communication platforms we use for work, unless it directly relates to our business. I’m applying that same standard to myself, and Jason is too.

Well, that’s nice. Reminds me of that line about the right of the rich to sleep under bridges. I wonder why.


Someone is right on the internet: On a somewhat related topic, issues of ethics in AI are big news, at least among geeks. Which is as it should be: the more AI or quasi-AI comes to control, dictate or direct our lives, the more concern we should have about whether the black boxes in question are exacerbating structural or other unfairness or inequality. It’s not good enough to excuse – for instance – algorithms that can’t recognise Black people with a shrug of “computer says no”. People make decisions, and they must be accountable.

(This, of course, is why Article 22 of the GDPR prohibits “solely automated processing, including profiling” – although it’s by no means impossible to get round this by inserting a human into the final stage of the process, or by making statutory arrangements to allow for it.)

Big Tech isn’t that comfortable about this, so it seems – as shown by Google’s removal (whether it’s officially a sacking or not isn’t wholly clear, but it’s effectively an ejection anyway) of two senior women working on AI issues.

So MIT Technology Review’s caustic A-Z of how to talk about AI ethics is horrifically on the nose. A couple of examples will suffice, I hope, to encourage you to go and read it:

ethics principles – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.

human in the loop – Any person that is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.

privacy trade-off – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.

And the best one comes first:

accountability – The act of holding someone else responsible for the consequences when your AI system fails.

Ouch. But yes.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021v7, Friday: Finding family.

Why I welcome the fact that I ache. And a quick link to a writeup of one of the most interesting Supreme Court cases around: Lloyd v Google.

Short thought: “I ache, therefore I am,” as Marvin once put it. “Or perhaps I am, therefore I ache.”

I ache. And I’m happy that I do. Because it’s 48 hours or so since I went back to capoeira for the first time in months.

It’s not the exercise that I’ve missed – from time to time I’ve stopped mid-run and trained a little, solo, in the park.

No. It’s that even for an introvert like me, the community of training with others in this most organic and communicative of martial arts has been a painful thing to lose. That feeling as your mind, soul and body ease into the ginga, the music wraps itself around you, and techniques start to flow the one into the next. As you smile, full of malandro, at the person you’re playing with. As the physical conversation between you ducks and weaves, slow, fast, slow.

God, it’s glorious. Although God, it hurts a couple of days after. I’m 50. I don’t bend as well as once I did.

But every ache is a benção, a blessing.

Because I’m back with family. Or rather, back with one of them.

Here’s the thing. We all have multiple families, which sometimes – but not always – overlap. If we’re fortunate (and my heart breaks for all those for whom this is tragically, painfully, sometimes dangerously not true) our first is with blood.

Another comes from the person we choose to bond our life with: spouse, partner, name them what you will. (My good fortune on this front is boundless; a wife and daughter who are both beyond compare.)

And then there are all the other communities which you find. Or which find you. Some of which will themselves wrap you in love and care, and so will become found families in themselves.

For all but the most wholly solitary among us, these multiple families are the earth from which our lifelong learning, growth, evolution, even our ongoing ability to be human, springs.

My capoeira family is one such. I’m blessed to have so many families. Blessed.

So, yes. I ache. Therefore, I am. Thank goodness.


Someone is right on the internet: Despite my best intentions, I wholly failed to make time to watch the submissions in Lloyd v Google, which sees the Supreme Court wrestle with some fundamental ideas in privacy and data protection.

I’ll try to make the time, then I’ll probably write something. (A radical idea: digest the source material before opining. Good lord.) As usual, the SC has the video of the hearing up on its website at the above link. Open justice for the win.

In the meantime, the UKSC Blog does a great job of summarising the submissions: a preview here, then a rundown of Day 1 and Day 2.

If privacy is at all important to you, and goodness knows it ought to be – it (along with worker status) seems to me to be the critical question of how individual rights interact with contract law and business for the next few years – the upsums richly repay a read.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021iv19, Monday: Privacy and the Supremes.

One of the most consequential cases on the law and privacy makes it to the Supreme Court next week. I’ll be watching. And some great stuff on gaming and moral panics.

Short thought: There’s no doubt that arguments about privacy are going to grow, and multiply, for years to come. On so many fronts, the question of what companies and governments can do with data about us affects us – literally – intimately. It’s going to be a central focus for so many areas of law – be it regulatory, public, commercial or otherwise – and we lawyers can’t and shouldn’t ignore it.

Which is why I’m blocking out next Wednesday and Thursday (28th and 29th) in the diary – at least as far as work will allow. Those are the days on which the Supreme Court will be hearing Lloyd v Google, probably the most important data protection and privacy case to make it all the way to the UK’s court of final appeal to date. 

As I’ve written before, the Court of Appeal fundamentally changed the landscape in 2019 when they decided that Richard Lloyd, a privacy campaigner, could issue proceedings against Google in relation to its workaround for Apple’s privacy protections. It’s no surprise that Google took the appeal all the way, since the CoA said (in very, very short) that a person’s control over their personal data had value in itself, and that no further harm – not even distress – need be proved for loss to exist. (There are other grounds of appeal too, but this to me is the most fascinating, and wide-ranging in potential effect.)

Next week is only the arguments, of course. Judgment will come – well, no idea. But Lord Leggatt is on the panel. I can’t wait to read what he has to say.

(I’ve had a piece on privacy brewing for some time. I just haven’t had the brainspace to let it out. Perhaps next week. I’ll try.)


Now hear this: I’ve always been rather allergic to team sports. Martial arts, on the other hand, have long been my thing. While I’ve dropped in and out, depending on levels of fitness and family commitments, there’s always been at least one at any given time which has given me joy like no other form of physical activity.

If one nosy trouble-maker had had their way, this would have been nipped in the bud. When I was doing karate in my teens, one clown wrote to my dad – then a canon at St Albans Abbey – claiming that my indulgence in this was Satanic and should stop immediately.

No, I don’t get the reasoning either. Needless to say, my dad treated it with the respect it deserved, and lobbed it into the wastebasket. And on I went, via aikido, tae kwon do and (these days) capoeira. No doubt this last, which I hope to keep doing with my current escola in Southend for as long as my ageing limbs can manage it, would have given the writer even greater conniptions, given that the music often name-checks saints and is thought in some quarters to have connections to candomblé.

But I think the writer missed a trick. Because back then, in the 80s, if he’d known I was a role-playing gamer he’d have been tapping totally into the zeitgeist.

By RPG I’m talking about pen and paper, not video games. I loved these games; via an initial and very brief encounter with Dungeons & Dragons (2nd edition, for the cognoscenti – it was never really my thing), I found Traveller and Paranoia, and never looked back. It’s been a long while since I played, but my love of them, and conviction that they’re good and valuable, hasn’t dimmed.

These days, these games are pretty mainstream. But in the 80s, particularly in the US, they were the subject of significant, if now in retrospect batshit insane, panic. This panic is beautifully explored by Tim Harford in his podcast, Cautionary Tales. I warmly recommend it. You don’t have to know or care about the games themselves for the story to be engaging and fascinating, as an analysis of how societal panics can grow and evolve into something wholly unmoored from reality from even the most unpromising foundations. And yes, the irony there is palpable. 

(Tim’s a gamer himself of no little repute; I imagine a game GMed by him would be wonderful. But he’s fair on this, I think.)

The whole series is great (the one on Dunning-Kruger is particularly brilliant). Tim’s previous podcasts, in particular 50 Things that Made the Modern Economy, are just as good. And he always makes them relatively short, and scripts them properly. Not for him the 90-minute frustrating meander. Thank goodness.

Warmly recommended.

As an aside: A recent FT piece of Tim’s has just appeared on his own website (as usual, a month after FT publication). It’s superb. Lots of people have linked to it, but it’s good enough to do so again. 

It’s entitled: “What have we learnt from a year of Covid?” His last sentence is one with which I utterly concur:

I’ll remember to trust the competence of the government a little less, to trust mathematical models a little more and to have some respect for the decency of ordinary people.

Read the whole thing.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021iii3, Wednesday: Data, privacy and the Golden Rule.

If people talk about changing data protection laws, always ask for their philosophy; if they won’t say, be suspicious. And two great tales about the file format that makes remote working possible.

I wouldn’t worry if it were still like this.

Short thought: On Friday, I’m giving a webinar on privacy issues in employment. It’s part of a series of employment law webinars over 15 days (two a day); the second such series since lockdown, organised by my friend Daniel Barnett. With signups come big donations to the Free Representation Unit, a splendid charity which organises advocates for people who can’t afford lawyers in employment and social entitlement cases. (So yes – if employment law is important to you, sign up. There’s still 80% of the sessions left to run, all are recorded for later viewing at your leisure, and it only costs £65 plus VAT a head. Discounted significantly the more heads you book for.)

Anyhow: prepping for it has got me thinking. There’s a lot of noise, particularly post-Brexit, about what kind of law governing privacy and data protection we should have. GDPR comes in for a lot of stick: it’s cumbersome, it’s civil rather than common law, it’s inflexible, it makes life harder for small businesses and is easy for large ones. Scrap it, some say. Other countries get adequacy decisions (that’s the European Commission saying: yes, your data protection laws give sufficiently equivalent protection that we won’t treat you as a pure third country, with the significant restrictions on cross-border data transfer that entails) with different laws. Why shouldn’t we? (Incidentally, initial signs are we should get an adequacy ruling. Phew.)

All of this, I tend to feel, misses the point. The first step to working out what data protection architecture we should have isn’t common vs civil law. It’s identifying the essential philosophical and ethical – and, yes, moral – basis for why data protection is needed in the first place. When I hear people advocating for changing the onshored version of GDPR, I want to hear that philosophical basis. If I don’t, I’m going to start patting my pockets and checking my firewalls. Because the cynic in me is going to interpret that the same way I interpret – say – calls for restricting judicial review, or “updating” employment law: as a cover for a fundamental weakening of my protections as a citizen and an individual.

Here’s why. GDPR, for all its faults, did represent a root-and-branch shift, and it’s that shift rather than its shortcomings (and lord knows it has them) that has caused much of the outcry. The shift? The imposition, in clearer terms than ever before, of the idea that people’s data is theirs, unalienably so. And if you want to muck about with it, they get to tell you whether that’s OK or not.

I know this is a wild over-simplification. But in our data-rich, surveillance-capitalism world, as a citizen that’s what I want. Yes, it carries downsides. Some business models are rendered more difficult, or even impossible. But that’s a trade-off I’m happy with.

I’m aware of one case, for instance, where an employer is alleged to have accessed an ex-employee’s personal email and social media accounts (or tried to) using credentials left on their old work computer, because the credentials to a work system were missing and there might have been password re-use.

I’ll leave the reader to compile their own list of what potential problems this might give rise to. But it does bring into sharp relief what I think the core issue is in privacy and data protection, both generally and in the employment context.

And it boils down to this: don’t be (to use The Good Place’s euphemistic structure) an ash-hole.

Honestly, it’s that simple. So much of employment law (and here, as we saw in the Uber case and many others, it differs from “pure” contract law significantly) is about what’s fair and reasonable. (As employment silk Caspar Glyn QC put it in his webinar on Covid issues yesterday, he’d made a 30-year career out of the word “reasonable”.) And through the architecture of statutes and decisions and judgments, what the tribunals and courts are ultimately trying to do is apply the Golden Rule. Has this person, in the context of the inevitably unequal power relationship between them and their employer, been treated fairly?

Now, everyone’s definition of “fair” is going to differ. But that’s why we have laws and (in common law jurisdictions like ours) authority. So that we can have a common yardstick, refined over time as society evolves, by which to judge fairness.

What does this have to do with privacy in employment? Loads. For instance:

  • Can you record people on CCTV? Well, have you told them? Have you thought about whether it’s proportionate to the risk you’re confronting? Does it actually help with that risk more than other, less intrusive, means?
  • Can you record details of employees’ or visitors’ Covid test results? Well, why are you doing it? Do you really need it? If so, how are you keeping it safe – since this is highly personal and sensitive health data?

It’s difficult. But it’s difficult for a reason. Personal data is so-called for a reason. Its use and misuse have immense, and often incurable, effects. The power imbalance is significant.

We lawyers can and do advise on the letter of the law: what GDPR, the Data Protection Act, the e-Privacy Directive and so much more tell you about your obligations.

But a sensible starting point remains, always, to consider: if this was my sister, my brother, my child, working for someone else, how would this feel? How would their employer justify it to them? And if they came home, fuming about it, would I think it was fair?


I live here. Or so my family would probably say.

Someone is right on the internet: I haven’t been in my Chambers since September. My workplace is my home office. It’s evolved into a decent environment over the past year. Furniture, tech, habits: they’ll keep changing, but they’re in a pretty good place right now.

But in many senses, the single biggest enabler of my remote working isn’t a piece of kit, or even a piece of software. It’s a data format. The PDF.

The PDF has been around for decades. Adobe came up with it and, in one of the smarter bits of corporate thinking, gave it away. Not entirely, of course. But anyone can write a PDF app (my favourite being PDF Expert) without paying Adobe a penny, while Adobe, rightly, still makes shedloads from selling Acrobat software for manipulating what has become a de facto, rather than de jure, industry standard.

And I rely on PDF entirely. I convert almost everything to PDF. All my bundles. All my reading matter. Practitioner texts. Authorities. Everything. Even my own drafting gets converted to PDF for use in hearings. That way, I know I can read them on any platform. Add notes, whether typed or scribbled (thank you, Goodnotes, you wonderful iPad note-taking app), highlight, underline. And have those notes available, identically, everywhere, in the reasonable confidence that when I share them with someone else, they’ll see precisely what I see, on whatever platform and app they themselves have available. PDFs are now the required standard in the courts, with detailed and thoroughly sensible instructions on how PDF bundles are to be compiled and delivered. (Note that some courts and tribunals have their own rules, so follow them. But this is a good starting point.)

My utter reliance on, and devotion to, PDF means I’m interested in its history. And two excellent pieces tell that story well. One describes Adobe’s long game. The other describes PDF as “the world’s most important file format”. Neither is terribly short, but neither really qualifies as a long read. And given how much we now rely on this file format, they’re both well worth your time.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

Unlawful? Or – far worse – dangerous?

Test and trace relies on trust. Undermine it, and people’s lives are at risk. If the Times is to be believed, companies gathering customers’ data on behalf of restaurants and bars are doing just that.

If there’s one thing that the nations which have succeeded in containing Covid have in common, it’s a robust, successful and trusted test/trace/isolate system.

Technical skill and fearsome logistics are critical, of course. But trust is the key. If citizens don’t trust the system, they won’t comply with it – or won’t even participate to start with. And then we’re stuffed.

Which is why today’s Times story (paywalled) is so disturbing. It alleges that companies which run services gathering details of customers for restaurants, bars and pubs via QR codes are holding onto the customer data and selling it on. If that’s the case, the companies (and the outlets they’re hired by) are not only acting in a way that’s potentially unlawful. What’s in some ways worse is that they’re undermining the trust without which a test-and-trace system is useless. And that puts us all in genuine danger.

Assumptions

Of course, I don’t know if the story is true. I haven’t seen the T&Cs that customers are allegedly being asked to sign up to. And I don’t know how transparent the process is.

But let’s work on the hypothesis that the story is essentially true, but also that there’s at least a nod to data protection rights by the companies concerned. Let’s therefore assume the following:

  1. Customers snap a QR code;
  2. they’re taken to a web page on their device which asks for their personal details;
  3. there’s either a privacy policy on the page, or a link to one;
  4. customers are asked to consent to the policy as they provide their personal details.

Note that it isn’t clear whether consent to the policy is a condition of providing details through this service (and thus a refusal means they can’t come in), or whether the system allows customers to provide their details (and thus permits entry) even if they refuse to consent to the privacy policy.

For the purposes of this analysis, I’m going to assume it’s the latter. (Consider this a form of steel-manning.) I’m also going to ignore the distinction between data controllers and processors. Without seeing the contractual arrangements between (say) pub and QR firm, I don’t know for certain who’s in which box. For this purpose, though, it doesn’t really matter. Both roles need a lawful basis on which to process customers’ personal data, and that’s the focus of this analysis.

Is it lawful?

Let’s start with the easy bit. Collecting customers’ personal data for the purposes of supporting NHS Test and Trace is not only lawful. For restaurants and so on (I’m going to just say “restaurants” from now on), it’s been obligatory since 18 September under the Health Protection (Coronavirus, Collection of Contact Details etc and Related Requirements) Regulations 2020. Of course, these regulations also require the NHS QR code to be displayed as the primary option – so it’s hard to see why anyone would snap a private sector code instead of that one. (Despite justified earlier concerns, the current NHS app is in fact pretty privacy-friendly, working as it does – at long last! – on the Apple/Google keep-it-on-the-phone basis instead of the abortive, and thoroughly arrogant and foolish, previous centralised approach.)

But if someone doesn’t want to use the NHS App, the restaurant is still obliged to collect the data another way. Whether through a QR code, or otherwise. And till 18 September, restaurants were doing so because they were asked to, although it wasn’t a legal requirement.

Which is where the data collection firms stepped in. Paper forms are a pain. Unless you pre-book everyone (in which case you’re collecting the data in any case), far better to allow the walk-in customer to snap a QR code, fill in a few details and bingo! All done. With the bonus that your own staff aren’t harassing and annoying your customers, to their detriment and yours.

But here’s the problem. Clearly collecting someone’s name and contact details means processing their personal data. And to do that under the Data Protection Act 2018 and GDPR, you need a lawful basis: at least one out of consent, a contractual requirement, a legal obligation, the data subject’s vital interests, a public task, or your legitimate interests.

Note that each purpose for processing needs its own lawful basis. So just because one purpose is fine, that doesn’t mean others will be too.

If all you’re doing (or were doing prior to 18 September) is to take records for Test and Trace purposes, holding them solely for that, and junking them after the recommended 21 days (or even a bit longer if need be), I don’t see a problem. As of now, it’s both a legal obligation and likely a public task, and a fair argument can be made for it being in the vital interests (that is, protection of life) of the data subject as well. Honestly: I can’t see a challenge on this basis holding up.

But what about keeping the data for marketing or onward sale?

If you’re the restaurant itself, and you make clear to the customer that they have the option of allowing you to use the data for you to stay in contact with them – and, critically, they can say no as easily as saying yes without being barred from entering or otherwise inconvenienced – there’s not a huge problem. So long as you explain really clearly, in plain English, what you’re doing, make it easy for customers to opt out later, and don’t abuse the data for other purposes.

In other words, you’re relying on consent. There really isn’t any other basis that works. Legitimate interest is a non-starter, since your interest in hanging onto the data for marketing purposes without consent is dwarfed by their interests in privacy; your contract with them to serve them food in exchange for money can easily happen without contact details; and the others are just ludicrous. (Sending them emails about your Winter menu will save their life? Please.)

And consent is tricksy. It has to be (by Article 4(11) GDPR) a “freely-given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she by a statement or a clear affirmative action signifies agreement”. It has to be possible to get the service in question without consenting to the data processing (Art 7(4)). And the data subject has clearly to be able to distinguish between the processing they’re being asked to consent to, and other matters.

I can imagine how a restaurant could write a request for customers to allow it to retain the test-and-trace data with sufficient clarity and choice.

But I really struggle to see how the business running the collection on their behalf could do so, in any meaningful sense. Or at least, in any way that wouldn’t drive away the vast majority of customers.

Let’s take the best-case scenario. (Steel-manning, after all, is a good intellectual and ethical practice.) Let’s say the QR code landing site says, in capital letters (to paraphrase): You’re filling this in for health protection purposes. But alongside that, we’d like to hang onto the data you provide, sell it to data brokers and other customers, and to keep doing so for the next 20 years. Please tick this box if you’re OK with that. But you don’t have to, because it’s totally optional. So feel free not to bother.

Good luck getting anyone to tick that. And even that’s arguably non-GDPR compliant, since it’s hard to work out how anyone could meaningfully later opt out.

What seems rather more likely is either a link to a lengthy privacy policy, coupled with a box that asks people to confirm they’ve read it, or at best something mealy-mouthed about “other purposes” or “providing you with information and services you may like”. In either case, I don’t think this comes close to sufficing.

And consider the context. People are providing their data for what they think is a vital health protection purpose. Against that context, satisfying the Article 7 requirement for consent to a collateral purpose to be clearly distinguished from “other matters” is, in my view, a pretty high hurdle. Nothing short of crystal clarity is likely to suffice.

So if the Times is right, I suspect the 15 firms that the ICO is apparently investigating could have an interesting time.

Why do you say it’s dangerous?

Lawfulness, or rather a lack of it, is bad enough.

But what really concerns me is the trust factor. We’ve had enough trouble in the UK with people mistrusting the Government’s actions and motives. Whether it’s people claiming the North is being treated far worse than the South as far as lockdowns are concerned, or the furore over the infamous Cummings odyssey in May, or the frustration of the UK Statistics Authority over what it saw as misleading or even manipulated test statistics, it’s clear that for many the predominant response to Coronavirus restrictions is now suspicion, rather than acceptance or support.

And that’s the public sector. The private sector, meanwhile, is being asked to police the restrictions – restaurants, for instance, have to refuse entry (under regulation 16 of the Regulations linked to above) to anyone who refuses to use the NHS QR code and won’t give their information otherwise. This is hard enough for restaurant staff. Imagine how much worse it could be if the refusenik customer thinks their data’s being stolen at the same time.

The real threat, though, is to broader trust. As I said earlier, the countries who are coming through this nightmare without terrible social and economic damage (not to mention, of course, with far fewer deaths and debilitating illnesses) are those with political leaders who have played it straight. Who haven’t exaggerated or appeared to use Covid as a tool for other political ends. Who’ve shown that this isn’t just the highest policy priority, but the only one that matters. And who’ve shown that competence is more important than ideology or loyalty.

In other words: those whose leaders have taken trust seriously, and done everything in their power to earn it, every day. It’s not that they haven’t made mistakes. It’s that the mistakes have been recognised and learned from.

We have a trust deficit. It’s killing people. Anyone deliberately or recklessly (as opposed to accidentally or inadvertently) undermining that trust is playing with lives.

And when you do something that discourages people from engaging with test/trace/isolate, you’re doing just that.

Algorithms, face recognition and rights. (And exams, too.)

The Court of Appeal’s decision to uphold an appeal against South Wales Police’s use of facial recognition software has all kinds of interesting facets. But the interplay between its findings on the equality implications of facial recognition, and the rights we all have under GDPR, may have significant repercussions. Including, possibly, for the A-level/GCSE fiasco.

Most nerd lawyers will, like me, have been fascinated by the Court of Appeal’s decision to uphold the appeal in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. The tl;dr version is that the Court said South Wales Police (“SWP”) had acted unlawfully in mining CCTV to scan the faces of thousands of attendees of large public events, compare them to a “watchlist” of persons of interest using a software tool called “AFR Locate”, and identify people for further police attention.

It’s worth noting that the Court did not find SWP to have acted wholly improperly. It’s clear from the narrative that they made at least some efforts to build safeguards into their procedures and their use of AFR Locate. Nor did the Court find that an activity like this was unlawful per se. However, the Court found that both in who SWP chose to look for, and where they did so, its procedures and practice fell short of what would be required to make them lawful. To that extent, Edward Bridges, the appellant, was right.

It goes without saying that for privacy activists and lawyers, this case will be pored over in graphic and lengthy detail by minds better than mine. But one aspect does rather fascinate me – and may, given the tension between commercial interests and human rights, prove a trigger for further investigation.

That aspect is Ground 5 of Mr Bridges’ appeal, in which the Court of Appeal found SWP to have breached the Public Sector Equality Duty (PSED). The PSED, for those who may not be intimately familiar with s149 of the Equality Act 2010 (EqA), requires all public authorities – and other bodies exercising public functions – to have due regard to the need to, among other things, eliminate the conduct the EqA prohibits, such as discrimination, and advance equality of opportunity between people with a protected characteristic (such as race or sex) and those without it. As the Court noted (at []), the duty is an ongoing one, requiring authorities actively, substantively, rigorously and with an open mind, to consider whether what they are doing satisfies the PSED. It’s a duty which applies not so much to outcomes, but to the processes by which those outcomes are achieved.

Bye-bye to black box algorithms?

In the context of the Bridges case, SWP had argued (and the Divisional Court had accepted) that there wasn’t evidence to support an allegation that the proprietary (and therefore undisclosed and uncheckable) algorithm at the heart of AFR Locate was trained on a biased dataset. (For the less nerdy: a commonly-identified concern with algorithms used in criminal justice and elsewhere is that the data used to help the algorithm’s decision-making evolve to its final state may have inbuilt bias. For instance, and extremely simplistically, if a facial recognition system is trained on a standard Silicon Valley working population, its training data is likely to include far fewer Black people and quite possibly far fewer women – and the system will thus be far less accurate in distinguishing them.)
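To make that concrete, here’s a toy numerical sketch of the general phenomenon. To be clear: this is not AFR Locate or any real system. The distances, the 90/10 split and the one-per-cent threshold rule are all invented purely for illustration; the point is simply that a single matching threshold, tuned on a skewed population, can produce wildly different false-match rates for the group the training data under-represents.

```python
# Toy simulation, with made-up numbers: an embedding trained mostly on one
# group tends to separate unfamiliar faces from the under-represented group
# less well, so a single match threshold tuned on the skewed pool produces
# far more false matches for that group.
import numpy as np

rng = np.random.default_rng(42)

# Simulated embedding distances between *different* people. Assumption: the
# model spreads group A's faces out well (mean 4.0) but crowds group B's
# together (mean 2.5), because it saw far fewer of them in training.
diff_a = rng.normal(4.0, 1.0, 100_000)
diff_b = rng.normal(2.5, 1.0, 100_000)

# Tune one global threshold for a ~1% false-match rate on a pool that is
# 90% group A and 10% group B, mirroring the skewed training population.
pool = np.concatenate([diff_a[:90_000], diff_b[:10_000]])
threshold = np.quantile(pool, 0.01)  # distances below this count as a "match"

print(f"threshold: {threshold:.2f}")
print(f"false-match rate, over-represented group A:  {np.mean(diff_a < threshold):.2%}")
print(f"false-match rate, under-represented group B: {np.mean(diff_b < threshold):.2%}")
# Group B typically ends up with many times group A's false-match rate: the
# system wrongly flags people from the under-represented group far more often.
```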

The Court of Appeal found SWP’s argument wholly unconvincing. The lack of evidence that the algorithm WAS biased wasn’t enough. There was no sign that SWP had even considered the possibility, let alone taken it seriously.

Most interestingly, and potentially of most far-reaching effect, the Court said at [199] that while it may be understandable that the company behind AFR Locate had refused to divulge the details of its algorithm, it “does not enable a public authority to discharge its own, non-delegable, duty under section 149”.

So – unless this can be distinguished – could it be the case that a black-box algorithm, by definition, can’t satisfy the PSED? Or that even an open one can’t, unless the public authority can show it’s looked into, and satisfied itself about, the training data?

If so, this is pretty big news. No algorithms without access. Wow. I have to say the implications of this are sufficiently wide-ranging to make me think I must be misreading or overthinking this. If so, please tell me.

Algorithms and data protection

There’s another key aspect of the lawfulness of algorithm use which SWP, given the design of their system, managed to avoid – but which could play a much bigger role in the ongoing, and shameful, exam fiasco.

GDPR is not fond of purely algorithmic decisions – what it calls at Recital 71 and Article 22 “solely automated processing”. (I’m using algorithm here in its broadest sense, as an automated system of rules applied to a dataset.) This applies with particular force to “profiling”, which Article 4 defines as automated processing which “evaluate[s] certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

In fact, Article 22 prohibits any such decision-making on matters which either affect someone’s legal rights or otherwise “similarly significantly affects” them – unless it is:

  • necessary for entering into or performing a contract between the data subject and the data controller;
  • authorised by EU or (in this case) UK law which incorporates safeguards to protect the data subject’s rights and freedoms; or
  • based on the data subject’s explicit consent.

Unlike a number of other GDPR provisions, no exemptions are allowed.

Similarly, s14 of the 2018 Data Protection Act (“the DPA”) says such processing – even if authorised by law – must allow the data subject to ask for a decision to be made which is not “based solely on automated processing”. And that request must be honoured.

The key word here so far as Bridges is concerned is “solely”. The human agency at the end of SWP’s process, whether inadvertently or by design, takes this out of the realm of A22; so this didn’t form any part of the Court of Appeal’s reasoning, or of the grounds of appeal. Were there no human in the loop, this kind of processing might be in serious trouble, since there’s no contract, certainly no freely-given consent (which can only be given if it’s possible to withdraw it), and I don’t know of any law which explicitly authorises it, let alone one which builds in safeguards. And using facial recognition to target individuals for police attention is a paradigm case of analysing someone’s “personal aspects, including… behaviour, location or movements”.

So what about exams?

[UPDATE: Unsurprisingly, the JR letters before action are coming out. And one in particular raises points similar to these, alongside others dealing with ultra vires and irrationality. The letter, from Leigh Day, can be found at Foxglove Law’s page for the exam situation.]

But even if A22 wasn’t an issue in Bridges, I suspect that the rapidly-accelerating disaster – no, that implies there’s no agency involved; let’s call it “fiasco” – involving A-levels and no doubt GCSE results will be a different story.

I won’t go into detail of the situation, except to say that an algorithm which marks anyone down from a predicted B/C to a U (a mark which is traditionally believed to denote someone who either doesn’t turn up, or can barely craft a coherent and on-point sentence or two) is an algorithm which is not only grossly unjust, but – given 18 months of pre-lockdown in-school work, even if it isn’t “official” coursework – is likely provably so.

But let’s look at it first through the PSED lens. The Court of Appeal in Bridges says that public authorities using algorithms have a duty to work out whether they could inherently discriminate. I haven’t read as much of Ofqual’s materials as the lawyers crafting the upcoming JRs, but I’m not at all certain Ofqual can show it’s thought that through properly – particularly where its algorithm seems heavily to privilege small-group results (which are far more likely in private schools) and to disadvantage larger groups (comprehensives and academies in cities and large towns).
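For the non-lawyers, here’s a purely illustrative sketch of that small-group dynamic. It is emphatically not Ofqual’s actual model: the cut-off of 15, the grade quotas and the example cohorts are all invented. It simply shows how a rule that lets teacher assessments stand for small classes, while forcing large cohorts onto the school’s historical grade distribution, mechanically punishes able students in big cohorts – including pushing some down to U – while leaving a small class untouched.

```python
# Purely illustrative sketch, NOT Ofqual's actual model: every number here is
# invented. A moderation rule that lets teacher grades stand for small classes
# but re-fits large cohorts to the school's historical grade distribution will
# systematically disadvantage students in large cohorts.
from collections import Counter

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst
SMALL_COHORT = 15  # hypothetical cut-off below which teacher grades stand

def moderate(teacher_grades, historical_share):
    """Return final grades for one subject cohort at one school."""
    if len(teacher_grades) < SMALL_COHORT:
        return list(teacher_grades)  # small class: the teacher's grades stand
    # Large class: rank students by teacher grade, then squeeze the cohort
    # into the school's historical distribution, whatever the teachers said.
    ranked = sorted(teacher_grades, key=GRADES.index)
    quota = {g: round(historical_share.get(g, 0) * len(ranked)) for g in GRADES}
    final, placed = [], 0
    for g in GRADES:
        take = min(quota[g], len(ranked) - placed)
        final.extend([g] * take)
        placed += take
    final.extend(["U"] * (len(ranked) - placed))  # rounding leftovers fall to the bottom
    return final

# Small class (say, at a private school): predicted grades simply stand.
print(moderate(["A", "A", "B"], {}))

# Large cohort at a school with historically modest results: strong teacher
# predictions get crushed into the historical shape, and because the school
# has historically had a few U grades, some students are pushed down to U.
teacher = ["A"] * 5 + ["B"] * 10 + ["C"] * 15
history = {"B": 0.2, "C": 0.4, "D": 0.2, "E": 0.1, "U": 0.1}
print(Counter(moderate(teacher, history)))
```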

(I have to acknowledge I haven’t spent any time thinking about other EqA issues. Indirect discrimination is certainly conceivable. I’ll leave that reasoning to other minds.)

Now let’s switch to the GDPR issue. We know from A22 that decisions made solely by automated processing are unlawful unless one of the three conditions applies. I can’t see any legal basis for the processing specific enough to satisfy the A22 requirements – certainly none which sufficiently safeguarded the rights and freedoms of the data subjects – that is, the students at the heart of this injustice. Nor am I aware of any data protection impact assessment that’s been carried out – which, by the way, is another legal obligation under A35 where there’s a “high risk” to individuals – self-evidently the case for students here whose futures have been decided by the algorithm. And the fact that the government has thus far set its face against individual students being able to challenge their grades seems to fly in the face of DPA s14.

One final kicker here, by the way. Recital 71 of the GDPR forms the context in which A22 sits, discussing in further detail the kind of “measures” – that is, systems for processing data – with which A22 deals, and which are only permitted under narrow circumstances. It stresses that any automated measures have to “prevent… discriminatory effects”.

Its final words? “Such measure should not concern a child.”

Watch this space.

Adieu, Privacy Shield. Cat. Pigeons.

That’s torn it. The CJEU says Privacy Shield, the deal which lets EU companies send data to the US, is no good. Not only does this cause huge problems for everyone using Apple, Google, Microsoft and about 5,000 other firms – but it also foreshadows real problems for the UK come 2021.

As David Allen Green might put it: Well.

The tl;dr version: Max Schrems, the guy whose lawsuit did for the old Safe Harbour principles allowing trans-Atlantic data exchange, has done it again. He asked the Irish data protection commissioner to ban Facebook from sending his data to the US. The DPC asked the High Court to ask the Court of Justice of the EU. And now the CJEU says Privacy Shield, the mechanism which since 2016 has allowed the US to be seen as an “equivalent jurisdiction” for data protection purposes, isn’t good enough.

This isn’t entirely unexpected. Although the Advocate General’s opinion in December took the view that the Court didn’t have to examine the validity of Privacy Shield (while by no means giving it a clean bill of health), many in the data protection game pointed out that Privacy Shield didn’t resolve some core problems – in particular the fact that nothing in it stops US public authorities from accessing non-US citizens’ personal data well beyond the boundaries established (since 2018) by GDPR – any more effectively than Safe Harbour had. As such, so the argument went, there wasn’t any good reason why the CJEU would think differently this time around.

And that’s how it’s turned out. The Court (decision here) says Privacy Shield is no good.

To be clear: this isn’t farewell to all trans-Atlantic transfers. If they’re necessary, it’s OK – so this doesn’t stop you from logging into Gmail or looking at a US-hosted website. But EU companies sending their people’s or their customers’ personal data to US servers for processing solely in reliance on Privacy Shield will need to stop.

And most, on the whole, don’t. Instead, they rely on mechanisms such as standard contractual clauses, or SCCs (almost nine-tenths of firms according to some research), which use language approved by the European Commission in 2010.

Today’s ruling says SCCs can stay. But it puts an obligation on data controllers to assess whether the contractual language can really provide the necessary protection when set against the prevailing legal landscape in the receiving jurisdiction, and to suspend transfer if it can’t.

And there’s the cat among the pigeons. Hard-pressed and under-resourced data protection authorities may need to start looking more intently at other jurisdictions, so as to set the boundaries of what’s permissible. And data controllers can’t simply point to the SCCs and say: look, we’re covered.

In other words: Stand by, people. This one’s going to get rocky.


(Obligatory Brexit alert…)

Just as a final word: as my friend Felicity McMahon points out, this is really, really not good news for the UK. When the transition (sorry, implementation) period ends on 31 December 2020, and our sundering from the EU is complete, we will need an adequacy decision if EU firms are to keep sending personal data to the UK unhindered. One view is that since the Data Protection Act 2018 in effect onshores the GDPR wholesale, we should be fine. But there are significant undercurrents in the opposite direction, thanks to our membership of the Five Eyes group of intelligence-sharing nations (with the US, Australia, Canada and New Zealand) and the extent to which (under the Investigatory Powers Act 2016) our intelligence services can interfere with communications.

Since this kind of (relatively) unhindered access by the state is what’s just torpedoed Privacy Shield, I’m not feeling terribly comfortable about the UK’s equivalence prospects. I hope I’m wrong. But I fear I’m not.

As news stories used to say: Developing…

What’s your data worth?

In allowing Google to appeal against the Court of Appeal’s findings in Lloyd v Google LLC, the Supreme Court holds out the prospect that we’ll conclusively know whether a personal data breach is a loss in itself – or whether a pecuniary loss or some specific distress is required.

In 2019, the Court of Appeal did something special to personal data. It gave it a value in and of itself – such that a loss of control over personal data became an actionable loss in itself. No actual pecuniary loss or distress was necessary. Now the case in question is going to the Supreme Court, and the understandable controversy triggered by the Court of Appeal’s decision may (once the case is heard, late this year or more probably next) finally be resolved one way or the other.

The Court of Appeal’s decision came in Lloyd v Google LLC [2019] EWCA Civ 1599, a case involving one of the highest-profile tech firms in the world, and one of the foremost examples of either (depending on your perspective) finding an inventive solution to another firm’s (in this case Apple’s) unreasonable intransigence, or shamelessly evading that firm’s customers’ privacy protections for one’s own gain. The issue was Google’s use of what was generally termed the “Safari workaround”, a means of tracking users of websites on which a Google subsidiary had placed ads even if a user’s Apple device was set up to stop this from happening.

Richard Lloyd, a privacy activist, was trying to initiate a class action on an opt-out basis, which could in principle encompass millions of users of Apple’s Safari web browser. This was in itself highly controversial, although I don’t propose to address that side of things. Of more direct interest from a privacy perspective was the claim made on Mr Lloyd’s behalf that the Safari Workaround was actionable in itself: that users didn’t have to prove they’d lost out emotionally or financially, but that the loss of control over their personal data which the Workaround caused was per se a loss sufficient to allow them to sue under the Data Protection Act (the 1998 version, which had been in force at the time).

The High Court had no truck with this argument, Warby J concluding that without some actual loss or distress, section 13(1) of the 1998 Act wasn’t engaged. The Court of Appeal disagreed, with Vos LJ finding (at [44-47]) that a person’s control over their personal data had an intrinsic value such that loss of that control must also have a value if privacy rights (including those arising from article 8 of the European Convention on Human Rights) were to be properly protected.

(The Court of Appeal also reversed Warby J’s findings on the other key point: that the potential claimants all had the same “interest” in the matter, such that a representative action under CPR r.19.6 could proceed. As such, it ruled that it could exercise its discretion to allow Mr Lloyd to serve proceedings on Google even though it was out of the jurisdiction.)

To no-one’s surprise, Google sought permission to appeal the matter to the Supreme Court. The Court has now given permission for the appeal to proceed – not just on the core question of whether loss of control over personal data is actionable in itself, but on the other points on which the Court of Appeal disagreed with Warby J.

The question of where the cut-off point lies in privacy breaches involving loss of control over personal data has been a live one ever since an earlier case involving Google, Vidal-Hall v Google Inc [2015] EWCA Civ 311, which (as Vos LJ put it in Lloyd) had analogous facts but one critical difference: it had been pleaded on the basis of distress caused by a personal data breach, rather than the idea that such a breach was intrinsically harmful and thus actionable in itself. The Court of Appeal in Lloyd went beyond Vidal-Hall in expanding the scope of actionable harm. Now, at last, we may conclusively get to identify the outer borders of that scope. Watch this space.

(Incidentally – I’m rather late to this party. The Supreme Court granted permission in March. I’d intended to write about it a little earlier, but the Bug got in the way. My apologies.)

A good day for employers. With a data protection sting in the tail.

I’m not going to usurp those who know a lot more than me (Panopticon, I’m looking at you). But today’s Supreme Court decisions on vicarious liability are a big deal.

There’s a thematic beauty to the fact that the Supreme Court decided to release its judgments in WM Morrisons Supermarkets plc v Various Claimants [2020] UKSC 12 and Barclays Bank plc v Various Claimants [2020] UKSC 13 on the same day. Taken together, the two judgments offer a solid dose of relief to employers worried about the circumstances in which they can be held liable for the acts of employees (Morrisons) and independent contractors (Barclays). But there’s at least a slight sting in the tail of the Morrisons judgment, which anyone responsible for keeping an organisation on the data protection straight and narrow would do well to recognise.

I don’t propose here to go into huge detail. If you want a really in-depth look at Morrisons – and it pains me to point you to another Chambers, of course – 11KBW’s Panopticon blog does a lovely job, while the estimable UKSC Blog’s writeup of Barclays will give you what you need in just a few paragraphs.

But these cases are so interesting that I couldn’t let the day pass without at least a quick note.

On the vicarious liability front, the main lesson from Barclays appears to be that where the wrongdoer is in fact an independent contractor, nothing dilutes the fundamental question: whether their role and their actions are akin to an employment relationship. In answering it, it’s important not to get so hung up on the five “incidents” – the factors identified in the Christian Brothers case ([2012] UKSC 56) – that one loses sight of that central question. If the independence of the contractor is clear, there’s no need to waste time going through the incidents. They’re a guide, not a test.

Unsurprisingly I find Morrisons even more fascinating. Just to recap the facts: Andrew Skelton, a Morrisons employee with access to payroll data as part of his job, was disciplined for misconduct in 2013. In retaliation, in early 2014 Skelton put a copy of payroll data for the supermarket group’s entire workforce online, and tried to leak it to the papers – who, thankfully, instead told Morrisons. (Skelton is now in jail for having done this.)

Some of Morrisons’ employees sought to hold the company vicariously liable for the leaker’s breach of their data protection rights. At first instance and appeal, they won.

The Supreme Court has now decided otherwise. Critically, the Court points out that just because Morrisons gave Skelton access to the data, making him a data controller, that doesn’t make them responsible for everything he did with it. In this case the Christian Brothers incidents aren’t relevant – no-one argues Skelton wasn’t an employee. But his misuse of the data wasn’t sufficiently part of the task he was entrusted with (which was to send it to Morrisons’ auditors) to make Morrisons responsible for his actions. The fact that he had a strongly personal motive – to retaliate against Morrisons – was highly relevant to the analysis too.

Before everyone starts getting too comfortable, though, Morrisons doesn’t leave companies with a free pass for their employees’ data protection errors:

  • For one thing, the Data Protection Act and the GDPR (for as long as it remains applicable…) can impose direct liability on organisations if the wrongdoing is in practice on the employer’s behalf, or if the organisation’s slipshod controls played a part in enabling it.
  • For another, and this is the real sting in the tail: Morrisons sought to argue that the DPA excluded vicarious liability, whether for common law or statutory wrongs, imposing liability only on data controllers and even then only if they’d acted without reasonable care. The Supreme Court had little time for this. It drew the comparison with vicarious liability for an employee’s negligence: if strict employer liability applies there once the normal test for vicarious liability is met, then absent explicit statutory language excluding it (which there isn’t), there was no reason why it shouldn’t apply to employee data protection wrongdoing too.

So a big day for employers, a fascinating one for employment lawyers – and good times for the data protection geeks as well.