Twitter is rethinking how the labels it applies to misinformation look and work, its head of site integrity told Reuters in an interview, as the social media company tries to make these interventions more obvious and cut its reaction times.
Twitter’s Yoel Roth said the company is exploring changes to the small blue notices it attaches to certain false or misleading tweets, to make these signals more ‘overt’ and ‘direct’ in giving users information. But he did not say whether any new versions would be ready before the US election, now four weeks away, a period that experts say could be rife with false and misleading online content.
Mr Roth said the new efforts at Twitter include testing a more visible reddish-magenta colour, and working out whether to flag users who consistently post false information.
“We’ve definitely heard the feedback that it would be useful to see if an account is a repeat offender or has been repeatedly labelled, and we’re thinking about the options there,” said Mr Roth.
Twitter started labelling manipulated or fabricated media in early 2020, after a public feedback period. It expanded its labels to coronavirus misinformation and then to misleading tweets about elections and civic processes. Twitter says it has now labelled thousands of posts, though most attention has been on the labels applied to tweets by Donald Trump.
In September, Twitter announced it would label or remove posts claiming election victory before results were certified.
Mr Roth said research undermining the “backfire effect” – the idea that corrections can strengthen people’s belief in misinformation – had contributed to Twitter rethinking how its labels could be made more obvious. The risk is that a label “becomes a badge of honour” that users actively pursue for attention, said Mr Roth.
Though some misinformation experts have praised Twitter’s labels as a long-overdue intervention, researchers have criticised their execution as too slow.
“Mostly things take off so fast that if you wait 20 or 30 minutes … most of the spread for someone with a big audience has already happened,” said Kate Starbird, an associate professor at the University of Washington who has been analysing Twitter’s labelling responses.
It took Twitter about eight hours to add labels to Mr Trump’s tweets about mail-in voting the first time it labelled him in May, though Ms Starbird said Twitter was getting quicker. Two Trump tweets in September appeared to have been labelled within two hours.
Mr Roth said Twitter reduces the reach of all tweets labelled for misinformation, by limiting their visibility and not recommending them in places like search results. The company declined to share any data about the effectiveness of these steps.
In August, Election Integrity Partnership researchers said Twitter’s disabling of retweets on a Trump tweet that violated its rules had a clear effect on its spread but was “too little, too late”.
Mr Roth said Twitter takes into account the number of retweets, engagement and views to prioritise viral content for review to give “the most bang for our buck”. But he said Twitter was exploring how to predict which tweets would go viral and conducting exercises on likely new 2020 election claims to get faster.
Multiple researchers told Reuters it was difficult to assess the effectiveness of Twitter’s interventions without knowing which actions it was taking and when.
The company does not keep public lists of when it has applied labels and has not shared data to allow outsiders to assess how its labels affect a tweet’s spread or how users interact with them.
“The platforms need to explain what hypothesis they’re testing, how they’re testing it, what the results are and be transparent,” said Tommy Shane, head of policy and impact at anti-misinformation non-profit First Draft. “Because these are public experiments.”
Twitter has labelled or put grey warning overlays over 10 @realDonaldTrump tweets for reasons related to civic integrity rules since it first labelled him in May.
Mr Roth said Twitter consults with partners, including election officials, on its labelling. But it has chosen to link to a page of tweets from multiple sources rather than follow Facebook’s lead in paying third-party fact-checkers – including Reuters – to assess content, as individual fact-checkers could be “easy to dismiss if you disagree with them”.
Facebook Inc, which exempts politicians from its fact-checking programme and faced backlash for not acting on misleading Trump posts, has started adding labels with voting information to all related posts. Researchers have criticised this strategy for not quickly and clearly distinguishing true claims from false ones.
Mr Trump’s spokeswoman Samantha Zager said in a statement, without offering specific evidence, that “across social media platforms, it’s clear the Silicon Valley Mafia creates arbitrary rules that do not apply equally to every account and instead are used to silence any views in opposition to those held by the liberal Big Tech coastal elites”.
Asked how Twitter is monitoring high-profile users like Mr Trump or his Democratic presidential rival Joe Biden, Mr Roth said Twitter does not “specifically focus in on individual accounts or individual account holders”.