Twitter changes verification policy...
by Casey Newton
Twitter’s announcement yesterday that it would begin removing verification badges from some accounts had an immediate impact, as the company stripped the blue checkmark from a handful of accounts associated with the far right. But the announcement, which arrived via five tweets and an update to a Twitter support page, left much unanswered. The most common question was why Twitter would remove a user’s badge instead of simply suspending or banning the account. And the answer, the company says, has to do with offline behavior.
The gist is this: if a user breaks Twitter’s rules on Twitter — that is to say, by tweeting — that user will still be disciplined in all the usual ways, a spokesperson said. What’s new is that Twitter now plans to do at least some monitoring of verified users’ offline behavior as well, to determine whether it is consistent with its rules. If it isn’t, users can lose their badges. And so a hypothetical verified user who tweeted nothing but pictures of kittens but organized Nazi rallies for a living could now retain his tweeting privileges, but lose his verification badge.
The key phrase in Twitter’s policy update is this one: “Reasons for removal may reflect behaviors on and off Twitter.” Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held accountable for their behavior in the real world as well. And while it’s unclear what Twitter’s final policy will look like, the introduction of offline behavior to the Twitter rules adds an unpredictable new dimension to its anti-harassment efforts.
To understand why Twitter would begin taking offline behavior into account, consider the case of Jason Kessler. Kessler, a white supremacist who organized the Unite the Right rally in Charlottesville in August, kicked off the latest controversy over Twitter’s rules when his account was verified last week. Kessler’s account was new — he deleted his previous account after making offensive comments — and his tweets, while offensive to many, seemingly did not break Twitter’s rules.
On the other hand, by organizing the march, Kessler had promoted hate speech, which is explicitly against Twitter’s rules. This put the platform in a bind. “Our agents have been following our verification policy correctly,” CEO Jack Dorsey tweeted on Nov. 9th, “but we realized some time ago the system is broken and needs to be reconsidered.” Twitter stopped verifying new accounts and said it would develop “a new program.”
While it worked on a new program, the company introduced a half-measure: it added accountability for offline behavior to its rules and stripped badges from a handful of accounts associated with far-right activism. The company said it would also review all verified accounts — about 287,000 in total.
Many questions remain unanswered. What will the company’s “review” consist of? How will it examine users’ offline behavior? Will it simply respond to reports, or will it actively look for violations? Will it handle the work with its existing team, or will it expand its trust and safety team? The company declined to comment.
For most of its life, the verification program existed only to authenticate the identities of high-profile Twitter users. That changed in January 2016, when the company stripped far-right provocateur Milo Yiannopoulos of his badge. For the first time, the badge seemed to carry a hint of endorsement from Twitter itself.
With this week’s changes, Twitter has now made that endorsement explicit. A badge is now more than a marker of identity — it’s a badge of approval, as well. This seems likely to increase the number of public battles Twitter faces over who deserves to be verified — and who deserves to lose their badge. The final policy is still in development. But the direction it’s heading in looks clear.