The news agenda is deepfake-obsessed. Stories are everywhere, including stories that are pretty confused about what a deepfake is. Here’s a refresher: a deepfake is a synthetic image or video made using a particular subset of AI (deep learning, hence the name). Simply put: it’s AI using real images of humans to learn to make fake images of humans, more realistically than any CGI you’ve ever seen. It can make Barack Obama say things he never said, create artificial faces you could swear are real, or spread fake, but truly damaging, revenge porn. What deepfakes are not: all other kinds of disinformation. That Nancy Pelosi video? Fake, but not a deepfake.
The Pelosi video is a textbook example of why the moral panic over deepfakes is overblown. It didn’t take sophisticated machine learning to spread a false video around the world. Slow the footage to 75% of its original speed and raise the audio pitch back to normal; you could do this on a VCR. All misinformation needs to take hold is entry-level Photoshop skills and a hefty dose of confirmation bias. In fact, as Max Read pointed out in New York magazine, so far the most convincing deepfakes have been created to warn us about… the dangers of deepfakes. Truly bad actors seem satisfied with more basic technologies.
Deepfakes could well become more common and harder to debunk over time, but they don’t need to be for the damage to be done. All the problems deepfakes could cause already exist. The true goal of misinformation is not to make you believe a lie; it’s to make you doubt the truth. It is to make the ground so shaky underfoot, to make us all so cynical about the nature of reality, that we no longer accept legitimate information or have conversations based on shared facts. Everything is up for grabs. “Who knows, really?” Collective shrug of the shoulders. The most urgent technology we may need is not one to debunk deepfakes, but one to certify reality.
Judging by the alarmist headline of this NYT op-ed (“Deepfakes are here. We can no longer believe what we see”), I’d say we’re there. And it didn’t take sophisticated tech.
Exhibit A: the Ebola epidemic in the Democratic Republic of Congo. The latest outbreak has already taken more than 1,500 lives. Early treatment significantly improves the chances of survival, and a promising experimental vaccine is available. But conflict, historical mistrust of the central government and conspiracy theories circulating on WhatsApp have made it impossible for accurate public health information to take hold. So healthcare professionals are attacked and the virus spreads. The tech: text messages.
Exhibit B: the last US midterms. By all accounts, there was no significant Russian meddling in those elections. But that didn’t stop Russian actors from creating a campaign claiming they controlled the vote. The point was not to sway the elections; it was to bait the media into spreading the narrative that Americans couldn’t trust their institutions. Thankfully, the media didn’t bite. (This example comes from Emerson Brooking’s excellent recent presentation at Hacks/Hackers London. Emerson kindly answered my question on the very topic of this newsletter; watch on for his take on deepfakes: “incredibly overstated and a way for consultants to make money.”) The tech: a crude website, lots of spelling errors and a troll JPG signature (I kid you not).
“As humans, we have to trust someone or something, or we couldn’t leave the house,” Oxford academic and trust expert Rachel Botsman reminded me in a recent interview (find it in Delta Sky magazine in August). Those who spread misinformation want you to stay indoors. The most radical act of resistance is to exercise those critical judgment muscles and to choose, skeptically but never cynically, to trust.
What I’ve been up to
- I really enjoyed interviewing Etsy CEO Josh Silverman. We talked about the importance of doing few things well and how to bring your team along with you. “There's a very long list of good ideas,” he said. “And if you try to do all of them you'll get absolutely nowhere.”
- I also interviewed WPP UK country manager Karen Blackett. She really stands out in her industry and lights up when talking about diversity and making room for the less privileged. So that’s what we talked about.
What I’m paying attention to
- I love it when we lift the hood and show how LinkedIn works. Three articles from colleagues are well worth reading: why the feed shows you what it does, from product director Pete Davies; how that actually happens on the engineering side (you really ought to understand what an algorithm does); and how LinkedIn’s “more communication, less hierarchy” mantra works, by our head of product Ryan Roslansky, which is as true a picture of what happens inside the company as I’ve ever read.
- This Guardian piece by Julia Carrie Wong about how even the wealthy are starting to dislike San Francisco is spot on. This quote – “Everyone I met was only interested in their jobs, and their jobs weren’t very interesting. I get it, you’re a developer for Uber, I’ve met a million of you.” – is pretty much why I avoid visiting.
What have you been reading? Share with me.
Get more from Borderline
Never miss new articles, essays and podcast episodes. Sign up for the free newsletter. Opt for a paid subscription to unlock comments, get the podcast early and support independent journalism.