2021v17, Monday: It’s not just about you.

Why an apolitical workplace is a luxury only the comfortable can afford. And a cut-out-and-keep caustic guide to AI ethics.

Short thought: One of the more interesting “little firms that could” in the online services space has long been the outfit currently known as Basecamp. Its founder, Jason Fried, has been voluble – and thoughtful and interesting – about how to do good work remotely, long before the past year made that a necessity.

But now he and David Heinemeier Hansson, known as “DHH” (together the senior management of Basecamp), have solidly put their feet in it. I won’t rehearse the background in detail, because others have done it far better. The tl;dr version (and this is a really thin summary of a big story):

  • Basecamp employees – a sizeable chunk of the 60-odd staff base – started to work on diversity and inclusion issues. Management blessed this.
  • In the process, the fact that for many years the firm’s internal systems had hosted a list of “funny customer names” – many of which, inevitably, were those of people of colour – came in for understandable criticism.
  • Initially, management were onside with this criticism; indeed, they owned their part in the list’s maintenance over the years.
  • But then it got ugly. A number of staff saw the list in the context of ongoing institutional discrimination – not just or even not mainly at Basecamp, to be clear, but societally. Management (Jason and DHH) pushed back against what they seemed to see as an over-reaction.
  • Jason and DHH announced that political discussion was now to be off-limits. (They later amended this – albeit apparently without making clear that it was an amendment – so that it applied only on Basecamp’s own chat and comms systems.) They also said they would withdraw benefits, instead simply paying staff their cash value, so as not to be “paternalist”.
  • This caused uproar. An all-staff meeting saw one senior, long-serving executive play the “if you call this racism, you’re the racist” and “there’s no such thing as white supremacy” cards; he resigned shortly afterwards. As many as a third of the staff have since also taken redundancy.

This might seem like a tempest in a teacup. Small tech firm has row; news at 11.

But it’s not. Tech is still overwhelmingly white and overwhelmingly male, particularly at its senior levels. (It may not escape your notice that the Bar isn’t much better.) Which means its leadership often misses the key point: when you’re not rich and comfortable, when your life has incorporated a lot of moments where you don’t get to expect everything will go smoothly, when you don’t have much of a safety net, when large numbers of people at all levels of power get to mess you about just because they can, without you having much recourse, just about everything is political.

Healthcare is political, if its availability and quality vary depending on where you live and what you look like. (Don’t doubt this: I’ve seen healthcare professionals, who I’m certain would be genuinely horrified by conscious prejudice, treat Black women with breathtaking disdain compared with how they talk to people like me.) Pay is political. Work is political, because expectations and yardsticks vary unless we pay honest attention to how they’re generated and applied.

Put simply: cutting political and social issues out of the workplace is a luxury only comfortable people can afford. A luxury which exacerbates, rather than diminishes, the power imbalance built into workplaces by the sheer fact of people’s dependence on a paycheque. (This, by the way, is why in the UK and Europe we say people can’t freely consent to the use of their data in the workplace. If the alternative to consent is “find another job”, that isn’t free consent for anyone without a private income.)

For Jason and DHH to take this approach is to forget that the only people for whom politics doesn’t relate to business are those who get to dictate the terms of what goes and what doesn’t. The blindness appears to dismal effect in a post by DHH on “Basecamp’s new etiquette at work”:

Just don’t bring it into the internal communication platforms we use for work, unless it directly relates to our business. I’m applying that same standard to myself, and Jason is too.

Well, that’s nice. Reminds me of that line about the majestic equality of the law, which forbids rich and poor alike to sleep under bridges. I wonder why.


Someone is right on the internet: On a somewhat related topic, issues of ethics in AI are big news, at least among geeks. Which is as it should be: the more AI or quasi-AI comes to control, dictate or direct our lives, the more concern we should have about whether the black boxes in question are exacerbating structural or other unfairness or inequality. It’s not good enough to excuse – for instance – algorithms that can’t recognise Black people with a shrug of “computer says no”. People make decisions, and they must be accountable.

(This, of course, is why Article 22 of the GDPR gives people the right not to be subject to decisions based on “solely automated processing, including profiling” which significantly affect them – although it’s by no means impossible to get round this by inserting a human into the final stage of the process, or by making statutory arrangements to allow for it.)

Big Tech isn’t that comfortable about this, it seems – as shown by Google’s removal (whether either departure was officially a sacking isn’t wholly clear, but both were effectively ejections) of two senior women working on AI ethics.

So MIT Technology Review’s caustic A-Z of how to talk about AI ethics is horrifically on the nose. A few examples will suffice, I hope, to encourage you to go and read it:

ethics principles – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.

human in the loop – Any person that is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.

privacy trade-off – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.

And the best one comes first:

accountability – The act of holding someone else responsible for the consequences when your AI system fails.

Ouch. But yes.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

2021iii15, Monday: Enlightened laziness

A message of hope for procrastinators everywhere. And a depressing but unsurprising long read about Facebook’s damaging take on AI ethics.

Photo of the Cam below King’s College, by Giacomo Ferroni on Unsplash

Short thought: Most of us, on occasion at least, struggle with procrastination. I know I do. Sometimes it’s simply having too many plates spinning. Sometimes it’s what I’ve come to call the “paint colour” problem: that when you’ve got a wicked problem and a more straightforward one, the temptation is to solve the latter and procrastinate the former. (“Paint colour” because if an hour-long meeting has two items on the agenda, one of which is existential and the other of which is what colour to paint the office, you can bet that 55 minutes will be spent on the paint.) And sometimes it’s simply that you’re tired, and focus and flow are elusive.

I don’t know of any magic bullets for this. But mindset can help. Whether it’s a form of self-induced CBT or not I don’t know; but I’ve found that a re-framing of my motives has been of assistance. I call it “enlightened laziness”.

This dates back to my college days. I was studying Japanese, which meant a first year spent sweating grammar and vocab for hours a day while friends were drinking around their weekly essay crises. But as it came to exams, I realised that I was spending less time revising than they were. Not because I’d consciously been doing more work; just because the pattern of my work meant I was constantly and consistently reinforcing things.

The upshot? While they sweated their way through a beautiful May in stuffy rooms and libraries, I lay on the banks of the Cam with a bottle of something.

That’s the laziness bit. I like to work. I like to learn. But I also like to live. And so I told myself: if I can translate this into a practice, I can keep lazing on the riverbank when the time is right to do so.

And thus was born enlightened laziness. If I manage my time, and get to things early enough, I get to laze around when I want to. That’s the motivation: not good behaviour, or efficiency. But making space for laziness.

It’s worked for me. Usually. I admit that life at the Bar has given it a knock; especially right now, with multiple clashing client deadlines. But as a general principle, one that says “spread the work out, don’t leave it till the last minute, so that you can laze about instead”, it remains a touchstone.

Now, of course, I’m trying to convince my 14-year-old daughter of the same principle. Don’t leave schoolwork till the last minute: get to it earlier, and there’s less stress and more lazing. I’m not conspicuously succeeding. Perhaps I never will.

But it might be worth a try, for those like me who’ve struggled with procrastination. I’ve heard worse incentives than to protect your lazing time. Give it a go.


Someone is right on the internet: There’s an excellent long read from the MIT Technology Review on work within Facebook on AI ethics. It’s well-reported, fascinating, and entirely depressing.

I loathe Facebook; I have an account solely for doing stuff I can’t do without it (like managing our relationship with my daughter’s Tae Kwon Do club), and always open it in a private tab to mitigate the risk of it infecting everything else I do. I regard it as parasitic, unpleasant, and sociopathic to a degree. And I fear deeply the fact that a single, exceptionally odd (and although from me that’s usually a compliment, this time it isn’t) and unfeasibly rich young white man can essentially dictate the terms of communication for large chunks of the world.

This piece does nothing to change those feelings. If anything, it accentuates them:

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

There’s something important in this notion of how a focus not on risk but on compliance – obeying law or regulation, and in the process minimising its effects on one’s business model – can be sheer poison for how a business manages its effect on the world, and the externalities it creates. I’ll try to get to it soon – apologies that it’s still brewing.

Till then, this is well worth your time.


(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)