In its ongoing effort to combat misinformation about breaking news, Twitter is rolling out a crisis misinformation policy to ensure that it doesn’t amplify falsehoods during times of widespread strife.
To determine whether a tweet is misleading, Twitter will require verification from credible, public sources, including conflict monitoring groups, humanitarian organizations, open source investigators, journalists and more. If the platform finds that a tweet is misleading, it’ll slap a warning notice on it, turn off likes, retweets and shares, and link to more details about the policy. These tweets will also stop surfacing on the home timeline, in search and in Explore.
Notably, Twitter will “preserve this content for accountability purposes,” so it will remain online. Users will just have to click through the warning to view the tweet. In the past, some warnings about election or COVID-19 misinformation have simply been notices that appear in line beneath the tweet, rather than covering it up entirely.
Twitter says it will prioritize adding warning notices to viral tweets or posts from high-profile accounts, which may include verified users, state-affiliated media and government accounts. This strategy makes a lot of sense, since a tweet from a prominent figure is more likely to go viral than a tweet from an ordinary person with 50 followers — but it’s a wonder that more platforms haven’t taken this approach already.
Some examples of tweets that might be flagged under this policy include false on-the-ground event reporting; misleading allegations of war crimes, atrocities or weapons use; and misinformation about international community response, sanctions, defensive operations and more. Personal anecdotes don’t fall under the policy, nor do people’s strong opinions, commentary or satire. Tweets that call attention to a false claim in order to refute it are allowed, too.
Twitter began working on a crisis misinformation framework last year alongside human rights organizations, it says. This policy may come into effect under circumstances like public health emergencies or natural disasters, but to start, the platform will use these tactics to mitigate misinformation about international armed conflict — particularly, the ongoing Russian attack on Ukraine.
Most social networks have struggled with content moderation amid the war in Ukraine, and Twitter is no exception. In one circumstance, Twitter made the decision to remove the Russian Embassy’s false claim that a pregnant bombing victim in Ukraine was a crisis actor. Twitter also suspended an account that spread a false conspiracy theory that the U.S. holds biological weapons in Ukraine.
It seems like there’s a fine line between which posts get labeled and de-amplified under this policy and which are removed outright. This policy might have applied to the Russian Embassy’s misleading tweet, for example, but at what point is an account so violative that it earns a ban?
“Content moderation is more than just leaving up or taking down content,” Twitter’s head of safety and integrity Yoel Roth wrote in a blog post. “We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events.”
Roth added in a thread that Twitter found that not amplifying this content can reduce its spread by 30% to 50%.
But depending on whether Elon Musk’s $44 billion bid to buy Twitter actually goes through, these policies may not be around for long. Musk believes that content moderation should mirror the laws of the state — in other words, Twitter’s community guidelines would basically just become the First Amendment with no added nuance. While that may be appealing to the kinds of people who are never on the receiving end of hateful messages, that approach could undo loads of progress on Twitter, including efforts like this one that halt the spread of harmful misinformation.
Even so, these policies are never 100% effective, and much content that violates guidelines escapes detection anyway. This week, we encountered multiple videos of the Buffalo shooter’s terrorist attack — which are banned under platform rules — on sites like Twitter and Facebook, where they were left online for days without removal. One video of the gruesome shooting, which we sent to Twitter directly, still remains online.
So while these policies might be well intentioned, they can only function as effectively as they’re enforced.