Meta announces 'Facebook Jail' reforms that focus more on better explanations of policy, less on 'timeouts'

"Facebook jail," the name the social network's users have bestowed on the company's system for determining policy violations, is getting an overhaul. Meta announced today it will be reforming its penalty system based on the recommendations from the Oversight Board, the independent body of experts, academics, civic leaders and lawyers who now weigh in on appeals decisions made by Meta. The Board had long raised concerns about Facebook's penalty system, which it called "disproportionate and opaque." It also advised Facebook to be more transparent with users over its decisions and pushed to allow users to explain the context of their violating post when appealing a decision made by Meta.

Today, Meta says it will reform its system to focus less on penalizing end users by restricting their ability to post and more on explaining the reasoning behind its content removals, which it believes will be a fairer and more effective means of moderating content on its platform.

The new system won't enforce strict penalties, like the 30-day timeouts from posting, until the seventh violation in most cases.

"We’re making this change in part because we know we don’t always get it right. So rather than potentially over-penalizing people with a lower number of strikes from low-severity violations and limiting their ability to express themselves, this new approach will lead to faster and more impactful actions for those that continuously violate our policies," wrote Monika Bickert, Meta vice president of Content Policy, in a newsroom announcement about the changes.


Meta explains that nothing is actually changing about its decision-making process for content removals themselves, but it will increase transparency around its decisions by explaining its policies to users when violations occur. Historically, Meta said, people found themselves in "Facebook jail" without even understanding what they had done wrong. Some didn't know they had been penalized at all until they suddenly found themselves unable to post. Yet Meta doled out strict and lengthy penalties for these lower-level violations, which often weren't ill-intended.

In its announcement, the company offered some examples of the kinds of posts that may have triggered a Facebook jail sentence in the past.

For example, you may have joked to a friend, "I'm on my way to kidnap you," when really you were planning to take them out to dinner after a rough day. Or you may have posted someone's name and address -- a violation of policies around sharing personally identifiable information -- when really you were just inviting a friend to a party. In both scenarios, Facebook's prior response would have been disproportionate.

There were real harms to this system, Meta acknowledges. People banned from posting not only lose their ability to express themselves and connect with their local communities, the company says, but they also may not be able to run their business on Facebook while in "Facebook jail." And these "timeouts" could be lengthy. The prior system would have immediately blocked users from posting for 30 days -- a period that's even more frustrating when the mistake was minor or the context wasn't considered.

In addition, Meta notes, the old system didn't actually address the larger issue with the real bad actors: it let them stay on the platform longer, because it prevented Meta from seeing larger violation trends.

To address this, Meta is increasing transparency around violations while still penalizing repeat offenses, an approach that both it and the Oversight Board believe will be a more effective means of managing bad actors.

To come to its conclusion, Meta conducted an analysis of its penalty system and found that nearly 80% of users with a low number of strikes did not go on to violate its policies again in the next 60 days, which demonstrated that most users respond well to a warning and explanation. Meanwhile, it found that applying its more severe penalties at the seventh strike gave "well-intentioned people the guidance they need while still removing bad actors."

There's more nuance to the changes beyond simply waiting longer to penalize violations, however. Serious violations will not get a pass here. Meta says that posts that include "terrorism, child exploitation, human trafficking, suicide promotion, sexual exploitation, the sale of non-medical drugs or the promotion of dangerous individuals and organizations," will continue to see immediate consequences, including account removals at times.

Meta may also, in some cases, restrict people from posting in Facebook Groups at thresholds lower than the seventh violation, but it did not elaborate on this point.
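Taken together, the mechanics described above amount to a simple escalation policy: warnings with policy explanations for early strikes, a 30-day posting restriction at the seventh strike in most cases, and immediate action for severe categories regardless of strike count. As a rough illustration only (Meta hasn't published its implementation, and the function names and category labels below are hypothetical), the logic might be sketched like this:

    # Hypothetical sketch of the strike-escalation policy described in this
    # article -- NOT Meta's actual implementation. Names and thresholds are
    # illustrative, based only on the behavior the announcement describes.

    # Categories the article says draw immediate consequences at any strike count.
    SEVERE_CATEGORIES = {
        "terrorism", "child_exploitation", "human_trafficking",
        "suicide_promotion", "sexual_exploitation",
        "non_medical_drug_sales", "dangerous_individuals_and_orgs",
    }

    RESTRICTION_THRESHOLD = 7   # strict penalties begin at the seventh strike
    RESTRICTION_DAYS = 30       # the "timeout" length mentioned in the article

    def penalty_for(violation_category: str, strike_count: int) -> str:
        """Return the action for a new violation, per the described policy."""
        if violation_category in SEVERE_CATEGORIES:
            # Severe violations skip the strike ladder entirely and can
            # result in account removal.
            return "immediate_action_possible_account_removal"
        if strike_count >= RESTRICTION_THRESHOLD:
            # Repeat offenders get the posting restriction.
            return f"posting_restricted_{RESTRICTION_DAYS}_days"
        # Early strikes: explain the policy instead of restricting posting.
        return "warning_with_policy_explanation"

    # A sixth low-severity strike draws only a warning...
    print(penalty_for("spam", 6))   # warning_with_policy_explanation
    # ...while the seventh triggers the 30-day restriction.
    print(penalty_for("spam", 7))   # posting_restricted_30_days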

Meta's decision to apply these changes now, after years of user outcry and outside pressure, is worth noting. In the U.S., Republican lawmakers have long insisted that Facebook and other Big Tech companies, like Google and Twitter, are censoring conservative viewpoints -- more recently pressing companies like Facebook and Twitter to explain individual moderation decisions, as with the Hunter Biden laptop story. These claims of censorship, along with other factors, have contributed to some lawmakers' growing interest in regulating tech platforms or even breaking them up over antitrust concerns. The underlying belief is that the companies have too much power to regulate discourse, which has led to lengthy debates over the extent to which these companies are suppressing "free speech" versus simply making rules about how they want their own businesses to operate.

By reducing the severity of penalties, Meta is trying to balance its ability to remove violating content against the actual harm that enforcement does to individual users. But the new system still gives the company leeway to make immediate decisions -- like whether or not a sitting U.S. president can have their account banned, perhaps.

The Oversight Board applauded Meta's decision today in a blog post, but suggested there was still room for improvement, particularly with the transparency around "severe strikes." It also says that users' explanations and context should be taken into account by content reviewers during the appeals process. However, the post was largely positive, saying the new system is, at least, "fairer to users who have been disproportionately impacted in the past."

"This is a welcome step in the right direction for Meta, and the Board will continue to push for further improvements to Meta’s content moderation policies and practices," it said.