On Thursday, Casey Newton reported that Facebook is piloting a circuit breaker to stop the viral spread of posts in some circumstances. Such a tool, which one of us (Goodman) has proposed and the Center for American Progress also advanced this week, might have helped stop QAnon’s toxic spread had it been adopted earlier, and it might still stanch the flow of dangerous incitement and misinformation by introducing friction into the algorithmic amplification of conspiracies.
The news about Facebook comes in the same week that major social media platforms, having been warned that it was coming, acted quickly to stop the Plandemic sequel from going viral. Things went very differently when the first Plandemic video appeared in May, spreading lies that masks are dangerous and social distancing is unnecessary. The video spread with such velocity that it was viewed more than 8 million times in a week before YouTube, Facebook, and Twitter all removed it for violating their policies. The video was a digital pathogen with a high reproduction number, what epidemiologists call the R-naught: the average number of new infections each case goes on to cause.
The R-naught is rising; the pace of viral misinformation is speeding up. In July, a video called America’s Frontline Doctors, which pushed hydroxychloroquine as a COVID-19 miracle cure, caught fire. Funded with dark money, promoted by influential accounts, advanced by algorithmic recommendations, and shared in large private Facebook groups, it rocketed to 20 million views in just 12 hours on Facebook alone before the platforms removed it. COVID-19 “truthers,” in other words, had more than doubled their Plandemic reach in a fraction of the time by exploiting a system of amplification that is ultimately unsafe.
The rampant spread of harmful COVID-19 misinformation serves as a warning that our online information ecosystem can be weaponized well before platform administrators step in, especially when they don’t see it coming. The Federal Communications Commission prohibits broadcast hoaxes to protect public safety. The Federal Trade Commission stops health care scams. Online, scams and hoaxes can infect millions of people before the content moderation hammer falls. Next time, bad actors might share manipulated footage of Anthony Fauci urging Americans to avoid a vaccine or even to contract the virus intentionally. Or it could be a message on Election Day that polls are closed. Such provocations can encourage people to risk serious harm in the offline world. The R-naught of online scams rises as hoaxers strategically place content along algorithmic pathways that exploit anger and credulity. Unless platforms take more aggressive actions, the risks associated with online disinformation will continue to accelerate.
Lessons from Wall Street
But there might be a way to change that. To counter the deluge of viral disinformation, platforms should implement anti-viral algorithms. We’ve seen similar mechanisms in other fields. Most notably, the New York Stock Exchange uses a circuit breaker to prevent panics associated with market volatility. The circuit breaker trips when stock prices fall at least 7 percent below the prior day’s S&P 500 closing price, and trading halts for 15 minutes. The purpose of the pause is to let investors assimilate new information and “give traders the space to make informed choices during periods of high market volatility.” Another source of friction in financial markets is the speed bump that many exchanges are now introducing to slow down high-frequency trading.
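For the curious, here is a minimal sketch of that Level 1 market-wide rule, a 7 percent drop from the prior day’s S&P 500 close triggering a 15-minute halt; the function and the sample prices are our own illustration, not exchange code.

```python
# A minimal sketch of the Level 1 market-wide circuit breaker described above.
# The bookkeeping is illustrative only, not how any exchange implements it.
from datetime import datetime, timedelta

LEVEL_1_DROP = 0.07                      # a 7 percent decline trips the breaker
HALT_DURATION = timedelta(minutes=15)    # trading pauses for 15 minutes

def check_breaker(prior_close: float, current_price: float, now: datetime):
    """Return the time trading may resume if the breaker trips, else None."""
    if current_price <= prior_close * (1 - LEVEL_1_DROP):
        return now + HALT_DURATION
    return None

# Example: 3,150 is more than 7 percent below a prior close of 3,400, so trading halts.
print(check_breaker(prior_close=3400.0, current_price=3150.0, now=datetime.now()))
```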
Viral content online is like high-frequency trading. The system gets overheated, people share without understanding, and there is a form of irrational exuberance around posts that trigger strong emotion. We need more friction on digital platforms. We need mechanisms that will stop viral spread at least until there has been careful consideration, even without the kind of advance notice that Facebook had about the Plandemic sequel. These systems do not fall prey to irrational exuberance by accident. They are designed for it. Interactions beget more interactions, often gaining a velocity that overwhelms cognitive or operational checks. A circuit breaker mechanism would limit the exponential amplification of content, at least until human reviewers can determine whether the content violates platform policies or poses a risk to the public interest. If it does, it can be removed; if not, sharing can be unthrottled.
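To make the proposal concrete, here is a minimal sketch of how such a breaker might work, assuming a platform can count interactions (shares, likes, comments) on a post in near real time. The class, the review queue, and the trigger value are illustrative assumptions, not any platform’s actual system; the trigger here is the one we discuss below.

```python
# A minimal sketch of a viral-content circuit breaker. All names and values are
# illustrative assumptions; no platform exposes an API like this.
from collections import deque
from time import time
from typing import Optional

TRIGGER_INTERACTIONS = 100_000     # hypothetical trigger, discussed below
WINDOW_SECONDS = 12 * 60 * 60      # rolling 12-hour measurement window

class Post:
    def __init__(self, post_id: str):
        self.post_id = post_id
        self.interaction_times = deque()   # timestamps of shares, likes, comments
        self.throttled = False

review_queue = deque()   # posts waiting for a human reviewer

def record_interaction(post: Post, now: Optional[float] = None) -> None:
    """Log one interaction and trip the breaker if velocity crosses the trigger."""
    now = time() if now is None else now
    post.interaction_times.append(now)
    # Drop interactions that have aged out of the rolling window.
    while post.interaction_times and post.interaction_times[0] < now - WINDOW_SECONDS:
        post.interaction_times.popleft()
    if not post.throttled and len(post.interaction_times) >= TRIGGER_INTERACTIONS:
        post.throttled = True              # pause amplification and sharing
        review_queue.append(post.post_id)  # hand off to human review

def resolve_review(post: Post, violates_policy: bool) -> None:
    """After human review: violating posts stay throttled; everything else is released."""
    post.throttled = violates_policy
```

The important design choice is that the breaker only pauses amplification; nothing is removed until a human reviewer has decided whether the post violates platform policies.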
If the stock market circuit breaker is about free fall, the platform trigger is about exponential growth. Virality is expressed graphically as a hockey stick. Whether one looks at audience impressions or shares or other interactions, the numbers rise more or less exponentially. The place to add friction and slow things down is where interactions are multiplying very fast. The platforms have real-time data about the velocity of interactions over any given period. To get one measure of velocity, we looked at data from Newswhip, a social media intelligence firm, on the 300 top-performing new Facebook posts from public pages (the limits of Newswhip’s data) within a 12-hour period. Using interactions as the best available metric, we analyzed the performance and distribution of the top posts on different days. We placed the trigger at the “inflection point” in the distribution of these posts, the point where superspreading posts begin to break away from the pack.
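One rough way to find that breakaway point, sketched below, is to rank posts by interactions and look for the largest relative gap between neighboring posts; the heuristic and the sample counts are illustrative, not the analysis we ran on the Newswhip data.

```python
# A crude heuristic for locating where superspreading posts break away from the
# pack: rank posts by interactions and find the largest jump between neighbors.
# The method and the sample counts are illustrative only.
def breakaway_rank(interaction_counts):
    """Return the 1-based rank after which the ranked counts drop most sharply."""
    ranked = sorted(interaction_counts, reverse=True)
    ratios = [ranked[i] / max(ranked[i + 1], 1) for i in range(len(ranked) - 1)]
    return ratios.index(max(ratios)) + 1

# Hypothetical 12-hour interaction counts: a few runaway posts, then a long tail.
sample = [450_000, 260_000, 140_000, 90_000, 30_000, 12_000, 9_000, 7_500, 6_000]
print(breakaway_rank(sample))   # -> 4: the top four posts pull away from the rest
```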
Based on these data, a trigger of 100,000 Facebook interactions in 12 hours seemed about right as a national target, given the interest in not overthrottling. That threshold would capture roughly the top 0.01 percent of Facebook posts from public pages within the period. The Twitter data for verified accounts looked similar. If those platforms essentially “halted trading” at 100,000 interactions, they could then assess the posts for policy violations before the misinformation did too much harm. Using this trigger, the hydroxychloroquine video would have been stopped in the first few hours.
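A back-of-the-envelope calculation supports that timing. If the video’s interactions grew roughly exponentially from about 1,000 in its first hour to 20 million at hour 12, it would have crossed a 100,000 trigger around hour six; the starting value, the smooth growth curve, and the use of views as a stand-in for interactions are all our own assumptions.

```python
# Back-of-the-envelope: when would the hydroxychloroquine video have tripped a
# 100,000-interaction trigger? We assume roughly exponential growth from 1,000
# interactions in hour one to 20 million at hour 12 (views stand in for
# interactions); both assumptions are ours, not measured data.
import math

start, end, hours = 1_000, 20_000_000, 12
hourly_growth = (end / start) ** (1 / (hours - 1))   # roughly 2.5x per hour

trigger = 100_000
hours_to_trigger = 1 + math.log(trigger / start, hourly_growth)
print(round(hours_to_trigger, 1))   # about 6.1 hours, well before the 12-hour mark
```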
Failed attempts to share could be met with a warning message that the content is under review. People could still view the posts and, of course, share by screenshot, but the speed bumps would provide useful friction. More refined, geotagged triggers would be appropriate for highly localized misinformation, such as around voting. The YouTube trigger might be different because its metrics differ significantly. Normalizing a trigger across platforms, if needed, could be done with percentile thresholds rather than absolute counts, but transparent, platform-specific triggers make more sense given the volatility of traffic.
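For illustration, a percentile-based trigger could be derived per platform as sketched below: take a platform’s distribution of 12-hour interaction counts and set the breaker at the 99.99th percentile, the top 0.01 percent mentioned above. The sampling approach and the simulated counts are our own assumptions.

```python
# A sketch of deriving platform-specific triggers from a percentile rather than a
# fixed count. The heavy-tailed simulated data are stand-ins for real traffic.
import random

def percentile_trigger(interaction_counts, top_fraction=0.0001):
    """Return the interaction count marking the top `top_fraction` of posts."""
    ranked = sorted(interaction_counts, reverse=True)
    cutoff_index = max(int(len(ranked) * top_fraction) - 1, 0)
    return ranked[cutoff_index]

# Two platforms with very different traffic scales produce very different triggers.
random.seed(1)
platform_a = [int(random.paretovariate(1.2) * 50) for _ in range(100_000)]
platform_b = [int(random.paretovariate(1.2) * 5) for _ in range(100_000)]
print(percentile_trigger(platform_a), percentile_trigger(platform_b))
```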
Adding friction to sharing viral content
Virality in itself is not the problem. The issue is that content can go viral with such speed that platforms are unable to enforce their content guidelines before it is too late. We are not suggesting that 100,000 interactions is the perfect trigger. It would in many cases be too high, especially for misinformation posted by low-trust sites, and it could work only in tandem with other interventions. But we do think that platforms should use the better data they have to implement circuit breakers and be transparent about how they operate. In a recent interview with The Daily, Twitter CEO Jack Dorsey mused elegiacally about choices the company might have made differently had it known the Twitter algorithm would elevate “the most salacious or controversial tweets,” and he specifically mentioned adding friction to shares. WhatsApp has added friction by limiting forwarding to contain virality. If Twitter were starting now, it might adopt a content-neutral approach that simply throttles sharing at a certain threshold of distribution. Short of that, a human in the loop to determine whether there are rule violations would at least ensure the platforms are aware of dangerous viral spread.
America’s free speech tradition rests on the notion that truth and falsity can battle it out—that good speech can confront and overwhelm bad speech. But this can’t happen when speech moves at the speed of light and bad actors can easily amplify malicious lies, schemes, and fraud. Platforms are held hostage by their own algorithmic creations. The sheer velocity of content spread prevents platforms from judiciously addressing whether viral content violates platform rules. If platforms design more friction into the system to slow down exponential spread, even if just temporarily, they can mitigate the dangers of uncontrolled velocity while also protecting speech interests.
This article was first published in the Future Tense channel on Slate on 21 August 2020.