Facebook details its takedown of a mass-harassment network

Brigading and mass reporting are among the coordinated inauthentic behavior threats the company now acts against.


Meta is today updating the world on how its efforts to remove fake and adversarial networks from Facebook are going. The social network has released a new report saying that it has successfully closed down a number of networks for Coordinated Inauthentic Behavior (CIB). But in addition to networks of fake profiles working in tandem, the company has also shed some light on how it deals with two further threats. These are Brigading, the use of negative comments and counter-posting to drown out an individual's posts, and Mass Reporting, in which Facebook's own anti-harassment tools are turned into a weapon. This builds on the pledge the company made back in September to combat broader social harms taking place on its platform.

With Brigading, the company took down what it describes as a “network of accounts that originated in Italy and France” which targeted medical professionals, journalists and public officials. Facebook says that it tracked the activity back to a European anti-vaccine conspiracy movement called “V_V,” adding that its members used a large volume of fake accounts to “mass comment on posts” from individuals and news agencies “to intimidate them and suppress their views.” In addition, those accounts posted doctored images, superimposing swastikas onto the faces of prominent doctors and accusing them of supporting Nazism.

In Vietnam, Facebook took down a network that was being used to target activists and users critical of the local government. The network would submit “hundreds — in some cases thousands — of complaints against their targets through our abuse reporting flows.” Attackers also created duplicate accounts of the users they intended to silence, then reported the real account as an impersonator from the fake one. Facebook added that some of these fake accounts were detected and disabled by the company’s automated moderation tools.


As for the more old-fashioned methods of Coordinated Inauthentic Behavior, the company took down networks in Palestine, Poland, Belarus and China. The first was reportedly tied to Hamas, while the Polish and Belarusian networks were crafted to exacerbate tensions during the humanitarian crisis on the Belarus-Poland border. In a call with reporters, Facebook said that the Polish network had very good operational security and that, so far, it has not been able to tie the network to a real-world organization. The Belarusian network, on the other hand, had much poorer operational security, and the company has tied its activity to the Belarusian KGB.

The final network, out of China, has prompted Facebook to publish a deep dive into the activity, given the depth of what took place. In its report, the company says that a group created a fake profile of a Swiss biologist called Wilson Edwards, who posted material critical of the US and the WHO. Within 48 hours, his comments were picked up by Chinese state media and engaged with by high-level officials. But there was no evidence that Wilson Edwards existed, which prompted the platform to close the account.

Researchers found that the Edwards persona was “the work of a multi-pronged, largely unsuccessful influence operation,” involving “employees of Chinese state infrastructure companies across four continents.” Facebook wanted to make it clear that Edwards’ comments were not engaged with organically; it was only when the posts were picked up by state media that they suddenly rose in prominence.

Facebook also identified guides that were used to train potential network members. The V_V network, for instance, published videos through its Telegram channels suggesting that users replace letters in key words so that their posts wouldn’t be picked up by automated filtering. The people behind the Chinese network, too, would sometimes inadvertently post notes from their leaders, written in Indonesian and Chinese, offering tips on how best to amplify this content.
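To illustrate why that letter-swapping tactic can defeat keyword-based moderation, here is a minimal Python sketch. It is not Facebook's actual system; the blocked-term list, substitution table and function names are hypothetical, invented for this example.

```python
# Illustrative sketch only: a toy keyword filter and the character-substitution
# evasion described above. All names here are hypothetical, not any real system.

# Terms a naive moderation filter might match literally.
BLOCKED_TERMS = {"vaccine", "vaccination"}

# Common letter swaps of the kind the V_V guides reportedly suggested.
SUBSTITUTIONS = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "@": "a", "$": "s"})

def naive_filter(text: str) -> bool:
    """Return True if any blocked term appears verbatim in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def robust_filter(text: str) -> bool:
    """Map common substitutions back to plain letters before matching."""
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in BLOCKED_TERMS)

comment = "Stop the v4cc1ne mandates!"
print(naive_filter(comment))   # False: literal matching misses "v4cc1ne"
print(robust_filter(comment))  # True: normalization restores "vaccine"
```

A real system would need to handle far more variants (spacing, homoglyphs, misspellings), which is why evasion guides like these keep working against simple filters.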

In addition, Facebook has announced that it has launched a tool, through CrowdTangle, to enable OSINT (Open Source Intelligence) researchers to study disinformation networks. The tool stores content taken down by the company and gives a small list of approved third parties the chance to analyze it. Access has, so far, been limited to teams from the Digital Forensic Research Lab at the Atlantic Council, the Stanford Internet Observatory, the Australian Strategic Policy Institute, Graphika and Cardiff University.

Facebook believes that offering greater detail and transparency around how it finds these networks will enable researchers in the OSINT community to better track them in the future.