A New York University report published today is calling for social media companies to stop outsourcing content moderation.
The report says big social media companies like Facebook, Twitter and YouTube need to rely more on their own employees – instead of the outside contractors on which they currently largely depend – to make calls about which posts and photos should be removed. Misinformation is a growing problem on tech platforms amid the protests against racial injustice and the novel coronavirus pandemic, both unfolding during an election year in which the industry is already braced for action by bad actors.
Currently, many of those charged with sifting through the reams of content posted to social media platforms are contractors, without the same salaries, health benefits and other perks as full-time employees at Silicon Valley companies.
Paul M. Barrett, deputy director of the NYU Stern Center for Business and Human Rights and author of the report, says it’s time for tech companies to reevaluate that system — which he argues results in the moderators being a marginalized class of workers.
Barrett says outsourcing has continued because it saves the industry money, but also because there’s a psychological factor at play.
Content moderators are tasked with sifting through what Barrett calls the “worst that the Internet has to offer.” Their work often centers on rooting out violence, hate speech, child exploitation and other harmful content. Facebook has developed a separate program for fact-checking, where it partners with news organizations to debunk hoaxes and other widely shared posts that could confuse people about sensitive topics like elections or the pandemic.
“Content moderation isn’t engineering, or marketing, or inventing cool new products. It’s nitty-gritty, arduous work, which the leaders of social media companies would prefer to hold at arm’s length,” he told me. “Outsourcing provides plausible deniability.”
Content moderators work at a Facebook office in Austin, Texas. (Photo by Ilana Panich-Linsman for The Washington Post)
Content moderation is the latest battleground for the social media giants in Washington.
The high-profile debate over how social media companies handle President Trump’s inflammatory content is one of the most politically perilous issues for tech companies. Twitter’s recent decision to label a few of the president’s comments has escalated an intense debate over how much responsibility the tech companies have to police their platforms — and whether they could go too far in censoring speech online.
“The recent controversy over how Facebook and Twitter handled President Trump’s posts underscores how central content moderation is to the functioning of the social media platforms that billions of people use,” Barrett said.
The tech companies have taken divergent approaches to addressing these issues, with Facebook leaving the president’s incendiary posts alone. Facebook chief executive Mark Zuckerberg’s decision not to take any action against a Trump post has enraged employees. Zuckerberg last week met with black executives at the company to discuss their objections to the Trump post, Elizabeth Dwoskin and Nitasha Tiku report. Employees questioned whether Facebook was in an “abusive relationship” with the president, according to a trove of documents including more than 200 posts from an internal Facebook message board.
Now the company’s content moderators are revolting too.
A group of current and former Facebook content moderators today released a letter criticizing Facebook’s decision and expressing solidarity with full-time Facebook employees who recently staged a virtual walkout.
“We know how important Facebook’s policies are because it’s our job to enforce them,” the moderators wrote, in a letter published on Medium. “Our everyday reality as moderators is to serve as the public square’s first responders.”
They write that their status as contractors makes it more difficult for them to participate in the employee-driven activism against the company’s decisions. They also said they lack financial security, which makes it harder to speak out, especially as the pandemic creates broad economic uncertainty.
“We would walk out with you — if Facebook would allow it,” they wrote. “As outsourced contractors, nondisclosure agreements deter us from speaking openly about what we do and witness for most of our waking hours.”
Strong content moderation isn’t just necessary in the high-profile showdowns.
Not every decision about content on Facebook is as high-profile; Zuckerberg and top executives weigh in only on the most prominent cases. Barrett warns that strong teams need to be in place to deal with the millions of posts and tweets that regularly violate the companies’ policies.
“Given the importance of both levels of moderation, it seems odd and misguided that the platforms marginalize content moderation by outsourcing the bulk of it to third-party vendors,” he said. “Instead, the companies should be pulling this vital function in-house and investing more in its expansion.”
Barrett also laid out the following recommendations for social media companies to improve their content moderation efforts:
- Increase the number of human content moderators: As a starting point, Barrett argues the companies should double their moderator staffs to keep up with the deluge of problematic content on their services. He says this would also allow moderators to rotate more frequently, so they would not be repeatedly exposed to the same, sometimes traumatic, material.
- Appoint a senior official to oversee content moderation: Barrett says responsibility for content moderation is currently stretched across disparate teams. He argues there should be a central, senior official who is responsible for both fact-checking and content moderation at each company.
- Invest more in moderation in “at-risk nations”: The companies need moderators with an understanding of local languages and cultures in the countries where they operate, Barrett says. This is especially essential in times of instability. Barrett says the tech companies should have offices on the ground in every country where they do business.
- Improve medical care for content moderators: The companies should expand mental-health support and access to psychiatric professionals to assist workers with the psychological effects brought on by repeatedly viewing alarming content, Barrett says.
- Sponsor research into the health risks of these jobs: A third-party content moderation vendor, Accenture, has said that PTSD is a potential risk of content moderation work. But little is known about how often it occurs, and whether there should be time limits on how long content moderators do this work. Barrett says the companies could play a role in funding research into these issues.
- Consider “narrowly tailored” regulation: Trump in recent days has renewed the debate over tech regulation by threatening to revoke Section 230, a key legal shield that protects tech companies from lawsuits over the posts, videos and photos people share on their platforms. The report expresses wariness of politically charged proposals to revoke that shield, but suggests considering a proposal from Facebook to create a “third-party body” to set standards governing the distribution of harmful content.
- Debunk more misinformation: Barrett suggests the companies should more frequently fact-check posts on their services — a job they’ve long resisted. Though Facebook’s decision not to fact-check the president has drawn intense pushback in recent days, Barrett notes the company currently has the most robust partnerships with journalism organizations in place to do this work.
Twitter, Facebook and Instagram removed a Trump campaign video following a copyright complaint.
President Trump. (Patrick Semansky/AP)
The four-minute video, narrated by Trump, featured footage of protest marches following the killing of George Floyd in police custody. It’s unclear what the infringing material was, but a California law firm submitted copyright complaints to the companies on behalf of an unnamed artist it represents, Cristiano Lima at Politico reported.
Trump used the takedown to slam Twitter for alleged bias against conservatives and to promote his executive order that challenges protections for social media companies against liability for content on their platforms.
Twitter chief executive Jack Dorsey responded that Trump’s claim was “not true” and that the removal was “not illegal.”
The tribute video remains up on YouTube. The version of the video uploaded to the platform did not contain the infringing content, spokeswoman Ivy Choi told Politico.
Google and Apple are struggling to keep contact-tracing apps that may be siphoning people’s sensitive information off their app stores.
A waiter wears a mask and gloves as he takes customers’ orders. (Drew Angerer/Getty Images)
Some of the contact-tracing apps aren’t clear about user privacy, and some don’t have privacy policies at all – putting them in violation of platform rules, Khadeeja Safdar and Kevin Poulsen at the Wall Street Journal report. Researchers at the International Digital Accountability Council also found apps that failed to safeguard location and other sensitive data, potentially exposing it to hackers.
Lawmakers introduced bipartisan legislation to regulate how coronavirus-tracing apps collect and use data, including limiting commercial use of the data.
Until that bill becomes law, it is up to Apple and Google to decide which apps to allow in their stores. But continuously changing guidelines are creating confusion for some developers.
Google removed an app called “Contact Tracing,” which carried ads, for allegedly violating its rules and profiting off the tragedy. The search giant also prohibited the use of its ad services on the Apple version of the same app after the Journal inquired. But the app’s developer says he provided both Google and Apple with documents proving he was working with local governments, in line with the stores’ requirements.
“The rules for this keep changing depending on the day,” app creator Alexander Desuasido told the Journal. “The key is to be persistent and keep following up.”
Amazon has reserved its most prominent search advertisement real estate for its own products, upsetting third-party sellers and igniting antitrust concerns.
An Amazon fulfillment center in Sacramento. (Justin Sullivan/Getty Images)
Consultants and legal experts allege that the recent change was designed to take advantage of increased sales during the pandemic, Renee Dudley at ProPublica reports.
Amazon acknowledged that it recently introduced the new placement for its own merchandise but said the changes had been planned months in advance and were not related to the pandemic. A representative also said there is no specific spot reserved for Amazon brands and that they may be placed anywhere. (Amazon chief executive Jeff Bezos owns The Washington Post.)
Still, experts say the listings mislead customers into thinking items are more popular than they are. For instance, an Amazon Essentials Oxford shirt listed on the front page of search results for men’s shirts sells far too little to have earned that spot, according to one consulting service that analyzes Amazon sales-rank data.
The placement also gives in-house brands an advantage that could fuel antitrust concerns, especially as U.S. regulators and members of Congress closely scrutinize the company.
“They don’t have to fight like everybody else to get positioning,” said Tim Hughes, a consultant who used to work in product management at Amazon. “They just put ‘our brands’ there, and boom, instant sales. The difference between being in slot one versus slot 10, even on the first page, is going to be an order of magnitude different in terms of sales. It is an exponentially decreasing curve. It is a huge drop off.”
Amazon chief executive and Post owner Jeff Bezos said he’s “happy to lose” Amazon customers enraged by the company’s Black Lives Matter support. Yesterday on Instagram, he shared some of the responses he received from customers upset with the company’s public support of the movement.
Jay Carney, Amazon’s vice president for global corporate affairs and former Obama White House press secretary, attended a Black Lives Matter protest in Washington on Saturday.
Twitter users responded with reminders of Amazon’s treatment of black workers and its ties to the policing industry, among them Vice’s Edward Ongweso Jr. and Sleeping Giants, an activist Twitter account that challenges tech companies’ power.
Democrats are pressing the Department of Homeland Security, Immigration and Customs Enforcement, and Customs and Border Protection on whether surveillance technologies were abused against protesters.
A rally at the edge of Lafayette Park across the street from the White House during protests over the death of George Floyd on June 7. (Tasos Katopodis/Getty Images)
Sen. Kamala D. Harris (D-Calif.) and Reps. Mary Gay Scanlon (D-Pa.) and Juan Vargas (D-Calif.) led 97 colleagues in a letter to Customs and Border Protection and Immigration and Customs Enforcement demanding answers about what surveillance tools the agencies have used, how they have shared surveillance footage and whether their staffs have been trained to comply with privacy laws.
In a separate letter, Democrats on the House Oversight Committee including Rep. Alexandria Ocasio-Cortez (D-N.Y.) demanded a full account of DHS’s role in the surveillance of protesters in Minneapolis, where George Floyd was killed in police custody and where the protest movement began.
The letter slammed the agency’s use of a military drone for surveillance as a “gross abuse of authority.”
House Homeland Security Committee Chairman Bennie Thompson (D-Miss.) has also demanded answers about the agencies’ surveillance. So far DHS has not scheduled a briefing or answered Thompson’s letter, according to a committee representative.
Lawmakers have also questioned the Justice Department’s dispatch of Drug Enforcement Administration agents to surveil protests. Rep. Ted Lieu (D-Calif.) announced on Twitter that he’s working on a bill that would ban the use of powerful stingray technology, which spoofs cellphone towers to collect messages and data on protesters.
Tinder will no longer ban users for using the app to fundraise for Black Lives Matter.
Demonstrators hold a Black Lives Matter banner. (Eduardo Munoz/Reuters)
The change follows an inquiry from BuzzFeed News, which found dozens of users who had been suspended or banned for using their accounts to solicit donations. Users slammed the dating app as hypocritical for banning the practice while publicly promoting its support for Black Lives Matter.
“From time to time, our members use Tinder to engage with topics they care about,” a Tinder representative told BuzzFeed. “And while our community guidelines state that we may remove accounts used for promotional purposes, we are dedicated to enforcing our guidelines in line with our values.”
More news from the protests:
- The Senate Judiciary Committee has scheduled a hearing, titled “COVID-19 Fraud: Law Enforcement’s Response to Those Exploiting the Pandemic,” for June 9 at 10 a.m.
- George Washington University’s Institute for Data, Democracy and Politics will host a virtual forum on the coronavirus and social media disinformation on June 16 at 10 a.m.