Facebook in 2018: 866 Million Pieces of Content + 583 Million Fake Accounts Removed in Q1

May 22, 2018 Kathy Bryan

This is the sixth article in the “Facebook in 2018” series, which covers the platform’s top issues, fundamental changes and expectations for the year. Click here to see a list of the other posts within this series.

Earlier this month, Facebook published their Q1 2018 Community Standards Enforcement Preliminary Report. Promised quarterly, this first-ever report is a step forward for Facebook in terms of transparency into the policing of accounts and content. Across all categories reported, Facebook removed 866 million pieces of content and disabled 583 million fake accounts.

Facebook Disabled 583 Million Fake Accounts in Q1

According to the community standards enforcement report, most fake Facebook accounts are created in large volumes using scripts or bots with the intent of spamming or scamming Facebook users. Because fake accounts typically lead to content that violates standards, Facebook is “vigilant about blocking and removing fake accounts.” Facebook estimates that 3-4% of their monthly active users (MAUs) are actually fake accounts. In Q1 2018, they disabled 583 million fake accounts, down 16% from the prior quarter.
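
To put those percentages in context, here is a minimal back-of-the-envelope sketch in Python. The implied prior-quarter figure is derived from the reported 16% decline, and the ~2.2 billion MAU figure is our assumption, extrapolated from the 2-billion milestone mentioned later in this article, not a number from Facebook's report:

# Implied Q4 2017 volume, given that 583M disabled accounts was "down 16%"
disabled_q1 = 583_000_000
implied_q4 = disabled_q1 / (1 - 0.16)  # roughly 694 million (derived, not reported)
print(f"Implied Q4 2017 disabled accounts: {implied_q4 / 1e6:.0f}M")

# Fake-account share of MAUs: 3-4% per Facebook's estimate.
# ASSUMPTION: ~2.2B MAUs, extrapolated from the >2B milestone cited below.
mau_estimate = 2_200_000_000
low, high = mau_estimate * 0.03, mau_estimate * 0.04
print(f"Estimated fake MAUs: {low / 1e6:.0f}M to {high / 1e6:.0f}M")

Note the two measures are not directly comparable: many fake accounts are blocked at or shortly after registration and never become active users, which is why the disabled-account count far exceeds the estimated fake share of MAUs.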

Facebook Removed 3.4 Million Pieces of Graphic Violence Content in Q1

In Q1 2018, Facebook took action to remove 3.4 million pieces of content that “glorified violence” or “celebrated the suffering or humiliation of others.” The quarter prior, only 1.2 million pieces of graphic violence content were addressed. Facebook stated the 183% quarter-over-quarter increase in addressed violent content was due primarily to enhancements in detection technology, including a photo-matching system that identified images previously marked as disturbing. The majority (86%) of graphic violence content was flagged and removed by Facebook before it was reported by users.
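
As a quick sanity check on the quarter-over-quarter math, here is a minimal sketch in Python; the 1.2 million and 3.4 million figures come from the paragraph above, while the qoq_change helper is our own illustration, not anything from Facebook's report:

# Quarter-over-quarter change as a percentage: (current - prior) / prior * 100
def qoq_change(prior_millions, current_millions):
    return (current_millions - prior_millions) / prior_millions * 100

print(f"{qoq_change(1.2, 3.4):.0f}%")  # ~183%, matching the reported increase

Applied in reverse, the same formula implies prior-quarter volumes the report summary doesn't state directly, e.g., roughly 1.1 million pieces of terrorist propaganda in Q4 2017 given the 73% increase reported below.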

Facebook Removed 21 Million Pieces of Adult Nudity & Sexual Activity Content in Q1

In Q1 2018, Facebook addressed 21 million pieces of content flagged for adult nudity or sexual activity, a volume comparable to the prior quarter. Almost all of this content was found and flagged by Facebook before users reported it. Importantly, Facebook notes they “recognize nudity can be shared for a variety of reasons.” When the intent is clearly positive, like for health education or the promotion of breastfeeding, Facebook makes appropriate allowances.

Facebook Removed 1.9 Million Pieces of Terrorist Propaganda in Q1

According to their report, Facebook does not “tolerate any content that praises, endorses or represents terrorist organizations or terrorists,” but their community standards enforcement report published only statistics related to ISIS, al-Qaeda and their affiliate groups. In Q1 2018, Facebook took action on 1.9 million pieces of terrorist content (related to the previously mentioned groups), all but 0.5% of which was addressed before being reported by users. The volume of terrorist content removed was up 73% from the prior quarter. Facebook attributed this increase in part to enhancements to its detection technology, review processes and reporting tools.

Facebook Removed 2.5 Million Pieces of Hate Speech Content in Q1

Facebook classifies hate speech as “a direct attack,” inclusive of dehumanizing speech, statements of inferiority or calls for exclusion or segregation, against people based on race, ethnicity, national origin, religious affiliation, sexual orientation, serious disability or disease, sex, gender, gender identity or immigration status. Allowing for content clearly intended to educate or raise awareness of hate speech committed by others, Facebook took action on approximately 2.5 million pieces of hate speech content in Q1 2018. Because of detection technology enhancements, addressed hate speech content was up 56% from the prior quarter. However, only 38% of hate speech content was identified first by the social media giant; Facebook users found and reported the majority of hate speech posts.

Facebook Removed 837 Million Pieces of Spam Content in Q1

Facebook defines spam as inauthentic activity that is automated or coordinated. Commercial spam, false advertising, fraud, malicious links and the promotion of counterfeit goods are all classified as spam. In Q1 2018, Facebook addressed 15% more spam than in the prior quarter. Nearly 100% of the 837 million pieces of spam content were identified by Facebook before being reported (and often before being seen) by users.

During his time on Capitol Hill, Mark Zuckerberg, CEO of Facebook, promised his social media platform would do more to prevent its powerful tools from being used for harm. The Facebook community standards enforcement report appears to be partial delivery on that promise. However, it also makes clear the scale at which fake accounts and violating content have infiltrated the platform.

For Facebook users, it can be challenging to identify what content is genuine, factual and safe to consume. For Facebook advertisers, it is difficult to fully quantify the effectiveness of media spend. While the transparency provided by the Facebook community standards enforcement report is a step in the right direction, Facebook still has a long path ahead to restore the confidence of all of its constituents.

Nevertheless, as we’ve previously reported, there are no signs of Facebook use slowing. The volume of Facebook MAUs around the world surpassed 2 billion in the second quarter of last year, and the growth rate has been consistent since. Likewise, Facebook’s worldwide advertising revenue is projected to grow 20% this year.

Despite the removal of third-party targeting data, Facebook remains a powerhouse in terms of reaching niche audiences at scale. Unless this changes, Facebook will continue to be an important component of most successful media plans.

Are you maximizing your Facebook opportunities?

Contact Team DMS to schedule a call. Our social advertising experts can help you navigate the 2018 Facebook landscape and ensure your campaigns continue to perform.



About the Author

Kathy Bryan

Kathy Bryan is the Senior Vice President of Corporate Marketing and Communications at Digital Media Solutions (DMS), an industry leader in providing end-to-end customer acquisition solutions that help clients grow their businesses and realize their marketing goals. In this role, Kathy is responsible for all aspects of marketing and communications for DMS and its subsidiary brands. Since its inception, DMS has evolved into a full-service performance marketing company that services firms within highly complex and competitive industries including mortgage, education, insurance, consumer brands, automotive, jobs and careers. DMS has achieved incredible year-over-year growth, which has earned recognition on the Inc. 5000 list in 2014, 2015, 2016 and 2017.
