2021iii15, Monday: Enlightened laziness

A message of hope for procrastinators everywhere. And a depressing but unsurprising long read about Facebook’s damaging take on AI ethics.

Photo of the Cam below King’s College, by Giacomo Ferroni on Unsplash

Short thought: Most of us, on occasion at least, struggle with procrastination. I know I do. Sometimes it’s simply having too many plates spinning. Sometimes it’s what I’ve come to call the “paint colour” problem: that when you’ve got a wicked problem and a more straightforward one, the temptation is to solve the latter and procrastinate the former. (“Paint colour” because if an hour-long meeting has two items on the agenda, one of which is existential and the other of which is what colour to paint the office, you can bet that 55 minutes will be spent on the paint.) And sometimes it’s simply that you’re tired, and focus and flow are elusive.

I don’t know of any magic bullets for this. But mindset can help. Whether it’s a form of self-induced CBT or not I don’t know; but I’ve found that a re-framing of my motives has been of assistance. I call it “enlightened laziness”.

This dates back to my college days. I was studying Japanese, which meant a first year spent sweating grammar and vocab for hours a day while friends were drinking around their weekly essay crises. But as it came to exams, I realised that I was spending less time revising than they were. Not because I’d consciously been doing more work; just because the pattern of my work meant I was constantly and consistently reinforcing things.

The upshot? While they sweated their way through a beautiful May in stuffy rooms and libraries, I lay on the banks of the Cam with a bottle of something.

That’s the laziness bit. I like to work. I like to learn. But I also like to live. And so I told myself: if I can translate this into a practice, I can keep lazing on the riverbank when the time is right to do so.

And thus was born enlightened laziness. If I manage my time, and get to things early enough, I get to laze around when I want to. That’s the motivation: not good behaviour, or efficiency. But making space for laziness.

It’s worked for me. Usually. I admit that life at the Bar has given it a knock; especially right now, with multiple clashing client deadlines. But as a general principle, one that says “spread the work out, don’t leave it till the last minute, so that you can laze about instead”, it remains a touchstone.

Now, of course, I’m trying to convince my 14-year-old daughter of the same principle. Don’t leave schoolwork till the last minute: get to it earlier, and you get less stress and more lazing. I’m not conspicuously succeeding. Perhaps I never will.

But it might be worth a try, for those like me who’ve struggled with procrastination. I’ve heard worse incentives than to protect your lazing time. Give it a go.

Someone is right on the internet: There’s an excellent long read from the MIT Technology Review on work within Facebook on AI ethics. It’s well-reported, fascinating, and entirely depressing.

I loathe Facebook; I have an account solely for doing stuff I can’t do without it (like managing our relationship with my daughter’s Tae Kwon Do club), and always open it in a private tab to mitigate the risk of it infecting everything else I do. I regard it as parasitic, unpleasant, and sociopathic to a degree. And I fear deeply the fact that a single, exceptionally odd (and although from me that’s usually a compliment, this time it isn’t) and unfeasibly rich young white man can essentially dictate the terms of communication for large chunks of the world.

This piece does nothing to change those feelings. If anything, it accentuates them:

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

There’s something important in this notion of how a focus not on risk but on compliance – obeying law or regulation, and in the process minimising its effects on one’s business model – can be sheer poison for how a business manages its effect on the world, and the externalities it creates. I’ll try to get to it soon – apologies that it’s still brewing.

Till then, this is well worth your time.

(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)

Privacy: one step forward, one step back.

A quick hit here to memorialise two privacy-related bits of news: a German court bans Facebook from tracking you elsewhere, but US Republicans try – again – to ban encryption that actually works.

Like many people, I barely use Facebook. And when I do, I only do so when using Incognito (Chrome) or Private Browsing (Safari). It’s annoying logging in each time (albeit less so with 1Password). But it stops Facebook from doing something I viscerally loathe: tracking everything else I do, everywhere else, thanks to tracking code and cookies.

I get that this may make me a paranoid tin-hat type. I’m OK with that. Just like I’m OK with blocking ads which rely on adtech, preventing videos from auto-playing, and generally trying to stop a simple text website from downloading an extra double-digit-megabyte load of data just so it can show me ads so intrusive that I never want to go back to the site in question. (I’m fine with ads. I like free stuff, paid for by advertising. But adtech-delivered ads are essentially a conman’s dream. And from a data protection/privacy perspective, I have grave doubts about whether adtech is lawful. So I’m very happy to screw with it.)

Which makes a German court’s decision to reinstate a ruling banning Facebook from combining its own data with that from other sites into so-called “super-profiles” very interesting. The ban was at the behest of Germany’s Federal Cartel Office, and the judge’s ruling (press release in German here) said there was no serious doubt that Facebook (a) was dominant and (b) had abused that position – particularly by gathering information from non-Facebook sources.

The ruling only applies to Germany, of course. But this does seem to be the first time that cross-site tracking and data collection has been seriously set back – which may, legally speaking, make things slightly hotter for adtech’s widespread consent-less collection of personal data. The dominance question doesn’t necessarily arise for adtech more broadly, of course; but the ruling explicitly addresses, in resolutely negative terms, what Techcrunch calls “track-and-target” and what writers like Shoshana Zuboff and many others call surveillance capitalism. It does so by noting that a significant number of Facebook users would prefer not to be tracked and targeted, and that a properly-functioning market would allow them that option. It’s hard to see how the same can’t be said for adtech in general.

Less encouraging, and far more predictable, is US Senate Republicans’ move to introduce legislation (the LAED Act – seriously, these acronyms…) to “end the use of warrant-proof encrypted technology by terrorists and other bad actors”. As almost any even slightly encryption-savvy person will know, this translates to “making encryption stop working securely”. Simply put, if – as this legislation would appear to require – a service provider keeps a key to your comms so it can hand it to law enforcement, then end-to-end encryption is done and your comms aren’t secure any more. As Ars Technica puts it, “Encryption doesn’t work that way.” Anyone claiming it does is either ignorant or acting in bad faith. No real middle ground there.
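The point is easy to see with a toy sketch (emphatically not real cryptography – just a stand-in symmetric cipher to show the structure of the problem): the moment a copy of the key exists anywhere other than the two endpoints, whoever holds that copy can read everything, court order or no court order.

```python
# Toy illustration of why provider-held keys break end-to-end encryption.
# The XOR "cipher" below is NOT secure; it just stands in for a symmetric
# cipher so the key-escrow problem is visible.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key. With XOR, the same operation both
    encrypts and decrypts (toy cipher, for demonstration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# End-to-end: only the two users hold the key.
user_key = secrets.token_bytes(16)
ciphertext = xor_cipher(b"meet at noon", user_key)

# If the provider is required to keep ("escrow") a copy of that key...
provider_escrowed_key = user_key

# ...then the provider - or anyone who compromises the provider, or
# serves it with an order - can decrypt the message without either user:
plaintext = xor_cipher(ciphertext, provider_escrowed_key)
print(plaintext)  # b'meet at noon'
```

The “balanced” framing hides exactly this: the security property doesn’t degrade gracefully. Either no third party holds the key (end-to-end), or one does (not end-to-end); there is no setting in between.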

John Gruber points out that its proponents’ description of the bill as “a balanced solution” – because the key would only be handed over under a court order – is hogwash. If a key exists, it becomes a target. “That’s how the law works today,” he writes. “What these fools are proposing is to make it illegal to build systems where even the company providing the service doesn’t hold the keys.”

Fools seems like a generous description. It presupposes good faith. I’m not sure I’d go that far.