A message of hope for procrastinators everywhere. And a depressing but unsurprising long read about Facebook’s damaging take on AI ethics.
Short thought: Most of us, on occasion at least, struggle with procrastination. I know I do. Sometimes it’s simply having too many plates spinning. Sometimes it’s what I’ve come to call the “paint colour” problem: when you’ve got a wicked problem and a more straightforward one, the temptation is to solve the latter and procrastinate on the former. (“Paint colour” because if an hour-long meeting has two items on the agenda, one of which is existential and the other of which is what colour to paint the office, you can bet that 55 minutes will be spent on the paint.) And sometimes it’s simply that you’re tired, and focus and flow are elusive.
I don’t know of any magic bullets for this. But mindset can help. I don’t know whether it amounts to a form of self-induced CBT; but I’ve found that a re-framing of my motives has been of assistance. I call it “enlightened laziness”.
This dates back to my college days. I was studying Japanese, which meant a first year spent sweating grammar and vocab for hours a day while friends drank their way through their weekly essay crises. But as exams approached, I realised that I was spending less time revising than they were. Not because I’d consciously been doing more work; just because the pattern of my work meant I was constantly and consistently reinforcing things.
The upshot? While they sweated their way through a beautiful May in stuffy rooms and libraries, I lay on the banks of the Cam with a bottle of something.
That’s the laziness bit. I like to work. I like to learn. But I also like to live. And so I told myself: if I can translate this into a practice, I can keep lazing on the riverbank when the time is right to do so.
And thus was born enlightened laziness. If I manage my time, and get to things early enough, I get to laze around when I want to. That’s the motivation: not good behaviour, or efficiency. But making space for laziness.
It’s worked for me. Usually. I admit that life at the Bar has given it a knock; especially right now, with multiple clashing client deadlines. But as a general principle, one that says “spread the work out, don’t leave it till the last minute, so that you can laze about instead”, it remains a touchstone.
Now, of course, I’m trying to convince my 14-year-old daughter of the same principle. Don’t leave schoolwork till the last minute: get to it earlier, and you get less stress and more lazing. I’m not conspicuously succeeding. Perhaps I never will.
But it might be worth a try, for those like me who’ve struggled with procrastination. I’ve heard worse incentives than to protect your lazing time. Give it a go.
Someone is right on the internet: There’s an excellent long read from the MIT Technology Review on the work within Facebook on AI ethics. It’s well-reported, fascinating, and entirely depressing.
I loathe Facebook; I have an account solely for doing stuff I can’t do without it (like managing our relationship with my daughter’s Tae Kwon Do club), and always open it in a private tab to mitigate the risk of it infecting everything else I do. I regard it as parasitic, unpleasant, and sociopathic to a degree. And I fear deeply the fact that a single, exceptionally odd (and although from me that’s usually a compliment, this time it isn’t) and unfeasibly rich young white man can essentially dictate the terms of communication for large chunks of the world.
This piece does nothing to change those feelings. If anything, it accentuates them:
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
There’s something important in this notion of how a focus not on risk but on compliance – obeying law or regulation, and in the process minimising its effects on one’s business model – can be sheer poison for how a business manages its effect on the world, and the externalities it creates. I’ll try to get to it soon – apologies that it’s still brewing.
Till then, this is well worth your time.
(If you’d like to read more like this, and would prefer it simply landing in your inbox three or so times a week, please go ahead and subscribe at https://remoteaccessbar.substack.com/.)