Twitter says its system for verifying accounts is broken. Now the company is reconsidering how it hands out its blue-and-white check mark icons.
The company announced Thursday that it is pausing all “general verifications” because of “confusion” over the policy.
The decision comes days after Twitter authenticated an account belonging to the man who organized this summer’s white nationalist rally in Charlottesville, Virginia. Critics attacked the company for a move they said gave credibility and significance to white nationalism.
“Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance,” Twitter said. “We recognize that we have created this confusion and need to resolve it.”
In another tweet, CEO Jack Dorsey said the company “realized some time ago the system is broken and needs to be reconsidered.”
“And we failed by not doing anything about it,” he added.
Twitter has long verified accounts belonging to celebrities, journalists, government officials, companies and other noteworthy people. Such accounts receive a small, coveted blue badge with a white check mark.
But that system came under fire this week after Jason Kessler, who put together the August “Unite the Right” protest in Charlottesville, said he had been verified by the company.
“Looks like I FINALLY got verified by Twitter,” Kessler tweeted. “I must be the only working class white advocate with that distinction.”
Other users immediately took Twitter to task.
“This is disgusting,” tweeted the comedian Michael Ian Black. “Verifying white supremacists reinforces the increasing belief that your site is a platform for hate speech. I don’t want to give up Twitter, but I may have to. Who do you value more, users like me or him?”
Kessler did not immediately respond to a request for comment about Twitter’s latest decision.
Twitter has long grappled with questions and controversy about speech on its platform.
Last month, Dorsey promised that there would be more aggressive rules to address “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies violence [sic].”
His company said it would blur hateful imagery and symbols as it does adult content and graphic violence, meaning users would need to manually opt in to see it. But Twitter didn’t outline in its policy memo what it considered to be a hate symbol.