Faced with escalating accusations of bias from the right and an onslaught of calls to break up Facebook on the left, the social media juggernaut released a data dump Thursday that its leaders hope will help the public better understand how it moderates content—and remind them that the bigger Facebook is, the more it can invest in fending off these threats.
In its third Community Standards Enforcement Report, Facebook laid out the sheer amount of prohibited content that appears on its platform, including fake accounts, spam, terrorist propaganda, child sexual exploitation, hate speech, bullying, nudity, violence, and the sale of regulated goods. For most, though not all, categories, the company reported how prevalent views of that content were, how many pieces of content it took action on, how much it found before users reported it, how many enforcement decisions users appealed, and how much content was restored after those appeals.
Some of the numbers Facebook reported leap off the screen. Over the last six months, for instance, Facebook says it saw a “steep increase” in abusive fake accounts, which represented 5 percent of its monthly active users during the last two quarters. Between October 2018 and March 2019 alone, the company removed 3 billion fake accounts, almost all of them within minutes of creation. The report also includes never-before-shared insights into certain categories of banned content, including child sexual exploitation and terrorist propaganda, which Facebook says accounted for 0.03 percent of all views in Q1 of 2019. According to the report, Facebook catches more than 99 percent of those posts before a single user reports them.
Just hours before these numbers came out, a separate report was published by the Data Transparency Advisory Group, an independent group Facebook established last year to audit its content moderation efforts. This report offers an analysis of Facebook’s internal review processes and the metrics tracked with regard to prohibited content, while offering recommendations for how Facebook might improve.
Taken together, the reports serve as Facebook’s answer to several ongoing debates about the company’s size and tremendous power. Whether such an influx of information will ultimately satisfy Facebook’s critics or be used against it remains to be seen.
In a press call introducing the reports, CEO and founder Mark Zuckerberg argued that Facebook’s size is precisely what makes the company a responsible watchdog for the internet. “The amount of our budget that goes toward our safety systems, I believe, is greater than Twitter’s whole revenue this year. So we’re able to do things that I think are just not possible for other folks,” Zuckerberg said. Breaking Facebook up, he claimed, would make enforcement tougher. “We really need to decide what issues we think are the most important to address and to focus on, because in some ways, some of the remedies cut against each other in terms of making progress.”
Facebook only began releasing these content moderation reports a year ago, in response to calls for more information about how it sets and enforces its policies. The numbers included in the reports certainly underscore the scope of the problem Facebook is up against. In just the first three months of this year, for instance, the company took action against 5.4 million pieces of content that included child sexual exploitation, 4 million pieces of hate speech, 6.4 million pieces of terrorist propaganda, and 1.76 billion pieces of spam, among other banned content.
But the reports also show just how imperfect Facebook’s judgment calls can be, particularly when it comes to hate speech, a category of prohibited content that has fueled much of the criticism from conservatives, including President Trump. Over the last quarter, for instance, Facebook users appealed 1.1 million hate speech decisions. In the same period, 130,000 posts that had been removed were restored after a secondary review by Facebook’s moderators.
“More Data on Content Moderation Won’t Silence Facebook’s Critics” (858 words) was published on www.wired.com on May 23, 2019.