
‘The Basic Grossness of Humans’

Content moderators review the dark side of the internet. They don’t escape unscathed.

Alexis C. Madrigal

Lurking inside every website or app that relies on “user-generated content”—so, Facebook, YouTube, Twitter, Instagram, Pinterest, among others—there is a hidden kind of labor, without which these sites would not be viable businesses. Content moderation was once generally a volunteer activity, something people took on because they were embedded in communities that they wanted to maintain.

But as social media grew up, so did moderation. It became what the University of California, Los Angeles, scholar Sarah T. Roberts calls “commercial content moderation,” a form of paid labor that requires people to review posts—pictures, videos, text—very quickly and at scale.

Roberts has been studying the labor of content moderation for most of a decade, ever since she saw a newspaper clipping about a small company in the Midwest that took on outsourced moderation work.

“In 2010, this wasn’t a topic on anybody’s radar at all,” Roberts said. “I started asking all my friends and professors. Have you ever heard of people who do this for pay as a profession? The first thing everyone said was, ‘I never thought about it.’ And the second thing everyone said was, ‘Don’t computers do that?’ Of course, if the answer in 2017 is still no, then the answer in 2010 was no.”

And yet there is no sign of these people on a platform like Facebook or Twitter. One can register complaints, but the facelessness of the bureaucracy is total. That individual people are involved in this work has only recently become better known, thanks to scholars like Roberts, journalists like Adrian Chen, and workers in the industry like Rochelle LaPlante.

In recent months, the role that humans play in organizing and filtering the information that flows through the internet has come under increasing scrutiny. Companies are trying to keep child pornography, “extremist” content, disinformation, hoaxes, and a variety of unsavory posts off of their platforms while continuing to keep other kinds of content flowing.

They must keep the content flowing because that is the business model: Content captures attention and generates data. They sell that attention, enriched by that data. But what, then, to do with the pollution that accompanies the human generation of content? How do you deal with the objectionable, disgusting, pornographic, illegal, or otherwise verboten content?

The one thing we know for sure is that you can’t do it all with computing. According to Roberts, “In 2017, the response by firms to incidents and critiques of these platforms is not primarily ‘We’re going to put more computational power on it,’ but ‘We’re going to put more human eyeballs on it.’”

To examine these issues, Roberts pulled together a first-of-its-kind conference on commercial content moderation last week at UCLA, in the midst of the wildfires.

For Roberts, the issues of content moderation don’t merely touch on the cost structure of these internet platforms. Rather, they go to the very heart of how these services work. “What does this say about the nature of the internet?” she said. “What are the costs of vast human engagement in this thing we call the internet?”

One panel directly explored those costs. It paired two people who had been content moderators: Rasalyn Bowden, who became a content-review trainer and supervisor at Myspace, and Rochelle LaPlante, who works on Amazon Mechanical Turk and cofounded MTurkCrowd.com, an organizing site for workers on that platform. They were interviewed by Roberts and a fellow academic, the University of Southern California’s Safiya Noble.

Bowden described the early days of Myspace’s popularity, when the company was suddenly overwhelmed with inappropriate images, or at least images it thought might be inappropriate. It was hard to say what should be on the platform because there were no actual rules. Bowden helped create those rules, and she held up to the crowd the notebook where those guidelines were stored.

“I went flipping through it yesterday and there was a question of whether dental-floss-sized bikini straps really make you not nude. Is it okay if it is dental-floss-size or spaghetti strap? What exactly made you not nude? And what if it’s clear? We were coming up with these things on the fly in the middle of the night,” Bowden said. “[We were arguing] ‘Well, her butt is really bigger, so she shouldn’t be wearing that. So should we delete her but not the girl with the little butt?’ These were the decisions. It did feel like we were making it up as we were going along.”

Bowden said that her team consisted of an odd conglomeration of people who were drawn to overnight work looking at weird and disturbing stuff. “I had a witch, a vampire, a white supremacist, and some regular day-to-day people. I had all these different categories,” Bowden, who is black, said. “We were saying, ‘Based on your experience in white-supremacist land, is this white-supremacist material?’”

That was in the mid-’00s. But as social media, relying on user-generated content, continued to explode, a variety of companies began to need professional content moderators. Roberts has traced the history of the development of moderation as a corporate practice. In particular, she’s looked at the way labor gets parceled out. There are very few full-time employees working out of corporate headquarters in Silicon Valley doing this kind of stuff. Instead, there are contractors, who may work at the company, but usually work at some sort of off-site facility. In general, most content moderation occurs several steps removed from the core business apparatus. That could be in Iowa or in India (though these days, mostly in the Philippines).

“The workers may be structurally removed from those firms, as well, via outsourcing companies who take on CCM contracts and then hire the workers under their auspices, in call-center (often called BPO, or business-process outsourcing) environments,” Roberts has written. “Such outsourcing firms may also recruit CCM workers using digital piecework sites such as Amazon Mechanical Turk or Upwork, in which the relationships between the social-media firms, the outsourcing company, and the CCM worker can be as ephemeral as one review.”

LaPlante, for example, works on Mechanical Turk, which serves as a very flexible and cheap labor pool for various social-media companies. When she receives an assignment, she will have a list of rules that she must follow, but she may or may not know the company or how the data she is creating will be used.

Most pressingly, though, LaPlante drew attention to the economic conditions under which these workers labor. They are paid by the review, and the rates can go as low as $0.02 per image, though there are jobs that pay better, at around $0.15 per piece of content. Furthermore, companies can reject the judgments that Turkers make, which means the workers are not paid for that time and their overall rating on the platform declines.
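
Some rough arithmetic, assuming those rates and the U.S. federal minimum wage of $7.25 an hour (a benchmark of my own, not one cited on the panel), makes the math concrete: at $0.02 per image, a worker has to clear about 360 items an hour, roughly one every 10 seconds, just to reach minimum wage; even at $0.15 per item, that still works out to around 48 judgments an hour, and any rejected work earns nothing at all.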

This work is a brutal and necessary part of the current internet economy. The moderators are also producing valuable data that companies use to train machine-learning systems. And yet the people doing this work are lucky to make minimum wage, have no worker protections, and must work at breakneck speed to try to earn a living.

As you might expect, reviewing violent, sexual, and disturbing content for a living takes a serious psychological toll on the people who do it.

“When I left Myspace, I didn’t shake hands for like three years because I figured out that people were disgusting. And I just could not touch people,” Bowden said. “Most normal people in the world are just fucking weirdos. I was disgusted by humanity when I left there. So many of my peers, same thing. We all left with horrible views of humanity.”

When I asked her if she’d recovered any sense of faith in humanity, a decade on, Bowden said no. “But I’m able to pretend that I have faith in humanity. That will have to do,” she told me. “It’s okay. Once you accept the basic grossness of humans, it’s easier to remember to avoid touching anything.”

LaPlante emphasized, too, that it’s not like the people doing these content-moderation jobs can seek counseling for the disturbing things they’ve seen. They’re stuck dealing with the fallout themselves, or with some sort of support from their peers.

“If you’re being paid two cents an image, you don’t have $100 an hour to pay to a psychiatrist,” LaPlante said.

In a hopeful sign, some tech companies are beginning to pay more attention to these issues. Facebook, for example, sent a team to the content-moderation conference. Others, like Twitter and Snap, did not.

Facebook, too, has committed to hiring 10,000 more people dedicated to these issues, and its executives are clearly thinking about them. This week, Facebook Chief Security Officer Alex Stamos tweeted that “there are no magic solutions” to several “fundamental issues” in online speech. “Do you believe that gatekeepers should police the bounds of acceptable online discourse?” he asked. “If so, what bounds?”

This is true. But content moderators already know all of that. They’ve been in the room trying to decide what’s decent and what’s dirty. These thousands of people have been acting as the police for the boundaries of “acceptable online discourse.” And as a rule, they have been unsupported, underpaid, and left to deal with the emotional trauma the work causes, while the companies they work for have become some of the most valuable in the world.

“The questions I have every time I read these statements from big tech companies about hiring people are: Who? And where? And under what conditions?” Roberts told me.
