Photo caption: A contractor at the Manila office of TaskUs, a firm that provides content moderation services to U.S. tech companies.
I found this interesting, and it relates to our last class discussion: who moderates the content on social media (in this case, Facebook)?
The campuses of the tech industry are famous for their lavish cafeterias, cushy shuttles, and on-site laundry services. But on a muggy February afternoon, some of these companies’ most important work is being done 7,000 miles away, on the second floor of a former elementary school at the end of a row of auto mechanics’ stalls in Bacoor, a gritty Filipino town 13 miles southwest of Manila. When I climb the building’s narrow stairwell, I need to press against the wall to slide by workers heading down for a smoke break. Up one flight, a drowsy security guard staffs what passes for a front desk: a wooden table in a dark hallway overflowing with file folders.
Past the guard, in a large room packed with workers manning PCs on long tables, I meet Michael Baybayan, an enthusiastic 21-year-old with a jaunty pouf of reddish-brown hair. If the space does not resemble a typical startup’s office, the image on Baybayan’s screen does not resemble typical startup work: It appears to show a super-close-up photo of a two-pronged dildo wedged in a vagina. I say appears because I can barely begin to make sense of the image, a baseball-card-sized abstraction of flesh and translucent pink plastic, before he disappears it with a casual flick of his mouse.
Baybayan is part of a massive labor force that handles “content moderation”—the removal of offensive material—for US social-networking sites. As social media connects more people more intimately than ever before, companies have been confronted with the Grandma Problem: Now that grandparents routinely use services like Facebook to connect with their kids and grandkids, they are potentially exposed to the Internet’s panoply of jerks, racists, creeps, criminals, and bullies. They won’t continue to log on if they find their family photos sandwiched between a gruesome Russian highway accident and a hardcore porn video. Social media’s growth into a multibillion-dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user-generated content—to ensure that Grandma never has to see images like the one Baybayan just nuked.
So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor. Hemanshu Nigam, the former chief security officer of MySpace who now runs online safety consultancy SSP Blue, estimates that the number of content moderators scrubbing the world’s social media sites, mobile apps, and cloud storage services runs to “well over 100,000”—that is, about twice the total head count of Google and nearly 14 times that of Facebook.
This work is increasingly done in the Philippines. A former US colony, the Philippines has maintained close cultural ties to the United States, which content moderation companies say helps Filipinos determine what Americans find offensive. And moderators in the Philippines can be hired for a fraction of American wages. Ryan Cardeno, a former contractor for Microsoft in the Philippines, told me that he made $500 per month by the end of his three-and-a-half-year tenure with outsourcing firm Sykes. Last year, Cardeno was offered $312 per month by another firm to moderate content for Facebook, paltry even by industry standards.
Here in the former elementary school, Baybayan and his coworkers are screening content for Whisper, an LA-based mobile startup—recently valued at $200 million by its VCs—that lets users post photos and share secrets anonymously. They work for a US-based outsourcing firm called TaskUs. It’s something of a surprise that Whisper would let a reporter in to see this process. When I asked Microsoft, Google, and Facebook for information about how they moderate their services, they offered vague statements about protecting users but declined to discuss specifics. Many tech companies make their moderators sign strict nondisclosure agreements, barring them from talking even to other employees of the same outsourcing firm about their work.
Watching Baybayan’s work makes terrifyingly clear the amount of labor that goes into keeping Whisper’s toothpaste in the tube. (After my visit, Baybayan left his job and the Bacoor office of TaskUs was raided by the Philippine version of the FBI for allegedly using pirated software on its computers. The company has since moved its content moderation operations to a new facility in Manila.) He begins with a grid of posts, each of which is a rectangular photo, many with bold text overlays—the same rough format as old-school Internet memes. In its freewheeling anonymity, Whisper functions for its users as a sort of externalized id, an outlet for confessions, rants, and secret desires that might be too sensitive (or too boring) for Facebook or Twitter. Moderators here view a raw feed of Whisper posts in real time. Shorn from context, the posts read like the collected tics of a Tourette’s sufferer. Any bisexual women in NYC wanna chat? Or: I hate Irish accents! Or: I fucked my stepdad then blackmailed him into buying me a car.
Link to the original article (everything above was copied from it): http://www.wired.com/2014/10/content-moderation/
Here is a link to an article on the comment cleanup on YouTube: http://www.wired.com/2013/09/youtube-comments/