
Facebook sees rise in violent content and harassment after policy changes


Meta has published the first of its quarterly integrity reports since Mark Zuckerberg walked back the company's hate speech policies and changed its approach to content moderation earlier this year. According to the reports, Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.

The reports are the first time Meta has shared data about how Zuckerberg's decision to upend Meta's policies has played out on the platform used by billions of people. Notably, the company is spinning the changes as a victory, saying that it reduced its mistakes by half while the overall prevalence of content breaking its rules "largely remained unchanged for most problem areas."

There are two notable exceptions, however. Violent and graphic content increased from 0.06-0.07% at the end of 2024 to 0.09% in the first quarter of 2025. Meta attributed the uptick to "an increase in sharing of violating content" as well as its own attempts to "reduce enforcement mistakes." Meta also saw a notable increase in the prevalence of bullying and harassment on Facebook, which rose from 0.06-0.07% at the end of 2024 to 0.07-0.08% at the start of 2025. Meta says this was due to an unspecified "spike" in violations in March. (Notably, this is a separate category from the company's hate speech policies, which were rewritten to allow posts targeting immigrants and LGBTQ people.)

Those may sound like relatively tiny percentages, but even small increases can be noticeable for a platform like Facebook that sees billions of posts every day. (Meta describes its prevalence metric as an estimate of how often rule-breaking content appears on its platform.)

The report also underscores just how much less content Meta is taking down overall since it moved away from proactive enforcement of all but its most serious policies, like child exploitation and terrorist content. Meta's report shows a significant decrease in the amount of Facebook posts removed for hateful content, for example, with just 3.4 million pieces of content "actioned" under the policy, the company's lowest figure since 2018. Spam removals also dropped precipitously from 730 million at the end of 2024 to just 366 million at the start of 2025. The number of fake accounts removed on Facebook also declined notably, from 1.4 billion to 1 billion. (Meta doesn't provide stats on fake account removals on Instagram.)

At the same time, Meta claims it's making far fewer content moderation mistakes, which was one of Zuckerberg's main justifications for his decision to end proactive moderation. "We saw a roughly 50% reduction in enforcement mistakes on our platforms in the United States from Q4 2024 to Q1 2025," the company wrote in an update to its January post announcing its policy changes. Meta didn't explain how it calculated that figure, but said future reports would "include metrics on our mistakes so that people can track our progress."

Meta is acknowledging, however, that there is at least one group where some proactive moderation is still necessary: teens. "At the same time, we remain committed to ensuring teens on our platforms are having the safest experience possible," the company wrote. "That’s why, for teens, we’ll also continue to proactively hide other types of harmful content, like bullying." Meta has been rolling out "teen accounts" for the last several months, which should make it easier to filter content specifically for younger users.

The company also offered an update on how it's using large language models to aid in its content moderation efforts. "Upon further testing, we are beginning to see LLMs operating beyond that of human performance for select policy areas," Meta writes. "We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies."

The other major component to Zuckerberg's policy changes was an end of Meta's fact-checking partnerships in the United States. The company began rolling out its own version of Community Notes to Facebook, Instagram and Threads earlier this year, and has since expanded the effort to Reels and Threads replies. Meta didn't offer any insight into how effective its new crowd-sourced approach to fact-checking might be or how often notes are appearing on its platform, though it promised updates in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-sees-rise-in-violent-content-and-harassment-after-policy-changes-182651544.html?src=rss



motang
1 day ago
Who would have thought that?

CoComelon is headed to Disney Plus in 2027


Disney Plus will become the new home of CoComelon outside of YouTube starting in 2027, according to Bloomberg. All eight seasons will move over from Netflix, which has hosted the absurdly popular kids show since 2020.

CoComelon, essentially a series of mind-numbingly plotless, CG-animated vignettes set to karaoke-quality nursery rhymes, is a giant in the world of programming for children, having accounted for 601 million Netflix views in 2023. According to Bloomberg, it was the second most-streamed show on the platform last year.

Despite its popularity, Bloomberg reports that CoComelon views fell by “almost 60% over the last couple of years,” and that compared to all of streaming, it went from the fifth most-watched show in 2023 to not even breaking the top 10 last year. Still, it’s probably going to be a good deal for Disney, which will reportedly pay “tens of millions” a year for it. After all, 2027 is also the year that the first CoComelon movie hits theaters.


Did WhatsApp really need Meta?


In its antitrust case against Meta, the US Federal Trade Commission is asking a judge to consider an alternate reality. In that world, the company never bought Instagram and WhatsApp. The two apps remained competitive with Facebook, developing features that competed for users' attention. And that competition created a thriving ecosystem of social media apps where people can connect with their friends and family.

Meta has spent the past several days building a counternarrative as it lodges its case-in-chief in a Washington, DC, courthouse. In its telling of this alternate present, Instagram and WhatsApp are shadows of what they are in our world. They lacked the resources, expertise, and vision to become robust and valuable online platforms, let alone formidable competitors. And consumers are the ones who ultimately suffered.

One of Meta's key witnesses for this defense is WhatsApp cofounder Brian Acton, who was called on Tuesday to help make its case that WhatsApp users, just like Instagram ones, benefited from Meta's acquisition. Acton was the second app founder to testify in the case, after Instagram cofounder Kevin Systrom delivered mostly blistering testimony …

Read the full story at The Verge.


15 Actors Who Bombed Auditions For Iconic Roles

Hollywood audition stories are always fun to hear. So, here are 15 examples of auditions for iconic roles that went completely sideways.




Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents' credit cards or find a fraudulent number online to access adult content on the web. But today's kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users' search behavior. It's impossible to tell whether the shift represents young users attempting to circumvent the child-focused laws or adult users who aren't the laws' actual targets. But overall, the study found that enforcement pushes many users away from popular adult sites that comply with the laws: 48 percent instead search for a noncompliant rival, and 34 percent search for virtual private network (VPN) services, which can mask a location and circumvent age checks on preferred sites.

"Individuals adapt primarily by moving to content providers that do not require age verification," the study concluded.


When the world connected on Skype

Skype, the online video-calling service, is shutting down in May after more than two decades of service. For those of a certain generation, Skype changed everything.  Before it launched in...
