That Time We Tried To Make Sense of a Statistic in a New York Times Story on Deepfakes
January 27, 2023 | Tags: New York Times, REASON
About a third of the way into a New York Times piece about the prospects for regulating "deepfake" technology, a striking statement appears:
Some experts predict that as much as 90 percent of online content could be synthetically generated within a few years.
It's striking because it's frustrating: The sentence grabs your attention with a huge number (90 percent!), but it doesn't convey what the number means. For one thing, there's that weasel-phrase "as much as"—just how much wiggle room is that masking? And what counts as "online content"? The passages before and after this line are explicitly about deepfakes, but strictly speaking even bot spam is "synthetically generated." What precisely are we talking about here?
Fortunately, the sentence had a hyperlink in it. When I clicked, I found myself at a report from the European law enforcement agency Europol. And on page 5, it says this:
Experts estimate that as much as 90% of online content may be synthetically generated by 2026. Synthetic media refers to media generated or manipulated using artificial intelligence (AI). In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life, but the increase in synthetic media and improved technology has given rise to disinformation possibilities, including deepfakes.
That helps a bit: We're not talking about spam, but we're not just talking about deepfakes either. But it still doesn't really say how we're defining the denominator here (90 percent of what?), and we still have that hazy "as much as."
Fortunately, Europol also cites a source: Nina Schick's 2020 book Deepfakes: The Coming Infocalypse. And using Amazon's "look inside" feature to search for the number 90, I was able to find the original source of the figure:
To understand more, I spoke to Victor Riparbelli, the CEO and Founder of Synthesia, a London-based start-up that is generating synthetic media for commercial use…. At Synthesia, he and his team are already generating synthetic media for their clients, who he says are mostly Fortune 500 companies. Victor explains that they are turning to AI-generated videos for corporate communications, because it gives them a lot more flexibility. For example, the technology enables Synthesia to produce a video in multiple languages at once. As the technology improves, Victor tells me, it will become ubiquitous. He believes that synthetic video may account for up to 90 per cent of all video content in as little as three to five years.
And that clears up quite a bit:
1. We're not talking about all online content. We're talking about all video content.
2. It wasn't "some experts" who predicted this. It was one expert. And he's a guy who runs a company that generates synthetic media, so he has every reason to hype it.
3. He offered his forecast in a book published in August 2020, so we might—depending on when exactly Schick's interview with him took place—already be in that three-to-five-year zone. Technically, I suppose his prediction has already come true: The amount of online video content that is synthetically generated is indeed somewhere between 0 and 90 percent.
Does any of this tell us anything about the coming impact of deepfakes? I suppose it supports the general idea that these technologies are getting better and becoming more common, but I already knew that before I read any of this. Even if that sentence in the Times had been more accurate, it wouldn't have illuminated much; it's basically there as a prop.
But it is telling that this story, which discusses the dangers deepfakes purportedly pose to the information environment, itself includes a little misinformation. And that misinformation spread not via some spooky new tech, but through the tried-and-true method of saying something misleading with the imprimatur of authority. For the average reader, that authority was The New York Times; for the Times reporter, it was Europol.
If you talk with people who track the spread of rumors online, you'll often hear that it doesn't take an avant-garde technology to convince people of something that isn't true—just mislabel a photo or misquote a speech, and your deception might take off among the people who are already inclined to believe what you're telling them. Deepfakes themselves don't worry me tremendously. Society has adjusted to new forms of visual fakery before, and it can surely do so again. Better to worry about something as old as humanity: our capacity to find reasons to believe whatever we want.