Twitter Releases New Policy on 'Dehumanizing Speech'
Louise Matsakis
Twitter on Tuesday announced a new policy addressing “dehumanizing speech,” which will take effect later this year, and for the first time the public will be able to formally provide the company with feedback on the proposed rule.
The policy will prohibit “content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.” It expands upon Twitter’s existing hateful conduct policy, which prohibits users from threatening violence or directly attacking a specific individual on the basis of characteristics such as race, sexual orientation, or gender. Twitter’s users, especially women and members of minority groups, have long complained that the company’s rules are ineffective and inconsistent in addressing harassment and abuse.
“We obviously get reports from people about content that they believe violates our rules that does not. The dehumanizing content and the dehumanizing behavior is one of the areas that really makes up a significant chunk of those reports,” says Del Harvey, Twitter’s vice president of trust and safety. She adds that many Twitter users, as well as researchers who study dehumanizing speech’s real-world effects, told the company that allowing such content to stay was “deeply problematic.”
Susan Benesch, whose research Twitter cites in its announcement, defines dehumanizing speech as “describing other people in ways that deny or diminish their humanity,” like comparing them to insects, demons, or bacteria. The Dangerous Speech Project she founded and directs argues that it's one hallmark of a wider category called “dangerous speech,” which covers any form of expression that can increase the risk that an audience will participate in or accept violence against another person or group.
“Dehumanization is important since it leads to real harm; it's just challenging to define precisely, and it's critical to protect freedom of speech as well,” says Benesch. “This initiative shows that Twitter staff are thinking hard about the variety of offline harms to which online content can lead, and trying to reduce them. It's easiest for platforms to respond to more obvious forms of harm, such as a credible threat of violence directed at a named, specific person.”
Previously, a comment like “all women are scum and should die” would have broken Twitter’s rules only if it targeted a specific individual. The new policy removes the requirement that a member of the protected group be referenced or mentioned in the tweet itself.
News that Twitter was considering a policy on dehumanizing speech first broke in August, as major tech companies like YouTube and Facebook moved to ban conspiracy theorist Alex Jones from their platforms. Twitter initially declined to follow suit, and CEO Jack Dorsey defended his company’s decision by arguing that Jones had not broken the rules. (Media outlets like CNN went on to point out multiple instances where Jones did appear to violate Twitter policies.) After the decision caused an uproar inside Twitter, Harvey emailed staff to say she was “shifting our timeline forward for reviewing the dehumanization policy.”
Twitter is giving users two weeks to comment on the new rule via a survey form; questions include whether the policy is clear and how it could be improved. The survey will be available in English, Spanish, Arabic, and Japanese. “Historically we have been less transparent than, quite frankly, I think is ideal about our policies and how we develop them,” says Harvey, who has worked at the company for more than a decade. “Our hope is that actually having this feedback period will serve to bring people sort of along with us on the process.”
Twitter also has a Trust and Safety Council, made up of third-party nonprofits and other organizations that are consulted on new policies. Once the comment period closes, the company will review the feedback and then follow its normal internal procedure for adopting new rules.
Once the dehumanizing speech rule becomes a permanent part of Twitter’s policies, the hardest work of actually enforcing it will begin. Unlike, say, a cryptocurrency scam, dehumanizing speech can be difficult to spot, in part because it is context-dependent and hard to define precisely.
“Not all dangerous speech has dehumanizing language, and not all comparisons of human beings with animals are dehumanizing,” says Benesch. “Twitter and other platforms should be careful not to define dehumanization too broadly. For example, it’s tempting to say that any demeaning remark about a group of people, such as ‘the X people are all thieves’ or ‘all corrupt’ is dehumanizing. That one is not dehumanizing, since corruption is a specialty of humans.”
Recent real-world incidents have proved challenging for other social media companies to police effectively. Facebook, for instance, has been accused of helping to facilitate the crisis facing the Muslim Rohingya in Myanmar, which UN investigators have said should be prosecuted as genocide. Buddhist leaders in the country used the platform to spread misinformation and hate speech, including comparing the Rohingya to dogs and pests. (Facebook already prohibits users from publishing “violent or dehumanizing speech.”)
Twitter’s new policy is part of a broader soul-searching initiative the company announced in March, after it was widely criticized for allowing misinformation and automated bots to flourish on its platform in the lead-up to the 2016 US presidential election. Since then, Twitter has limited the influence of suspicious accounts, deleted more than 140,000 third-party apps that violated its policies, and begun hiding tweets from potentially harmful users, among other efforts.
The fight against dehumanizing speech is only the latest part of that effort. Hate groups that use the platform to spread their messages would likely be affected, for example, even if they’re not harassing a specific individual. But a rule against dehumanization won’t help in circumstances where harassment is facilitated through lying or misinformation. Jones’ claims that school shootings never happened, for instance, wouldn’t necessarily be addressed by the policy, yet they remain extremely harmful.
What's more, Twitter may need to decide what to do about high-profile users like President Donald Trump, who once tweeted that Democrats want illegal immigrants “to pour into and infest our Country.” The company has historically made allowances for world leaders whose statements may violate its policies but are also newsworthy. “It’s not something where it’s set in stone,” says Harvey. “It’s something we’re going to be continuing to explore.”