CfD/Global Witness Study Finds Facebook Accepted Threatening Ads

In a test of how well social media platforms enforce their policies against harmful content in advertising, Facebook clearly fell short during the US midterm elections. A study conducted by Global Witness and NYU’s Cybersecurity for Democracy (C4D) found that Facebook either failed to detect, or simply ignored, death threats against election workers contained in a series of ads submitted to the company. Facebook’s performance in the study stood in sharp contrast to that of two other social media platforms, YouTube and TikTok.

According to a report jointly released by C4D and Global Witness on December 1, the researchers submitted advertisements that included threats against election workers. Though the ads were fakes, the types of death threats they cited had actually been reported in the media. A total of 20 such ads were prepared, 10 in English and 10 in Spanish, and submitted to Facebook, YouTube, and TikTok. In all cases, the submissions violated the platforms’ stated ad policies, and “the death threats were chillingly clear in their language.”

The report explains that the advertising format was employed to enable researchers “to schedule them in the future and, importantly, to remove them before they go live, while still being reviewed by the platforms and undergoing their content moderation processes.” The ads were submitted to the three social media sites on the day of or the day before the 2022 US midterm elections.

The results could not have been more different. “TikTok and YouTube suspended our accounts for violating their policies,” the report observes, adding that Facebook “approved 9 of the 10 English-language death threats for publication and 6 of the 10 Spanish-language death threats. Our account was not closed down despite a handful of ads having been identified as violating their policies.” When asked about these findings, a spokesperson for Meta, Facebook’s parent company, responded: “This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms. We remain committed to continuing to improve our systems.”

The report concludes with a list of specific actions the research team believes social media platforms in general, and Facebook in particular, should take to stop the proliferation of threatening ads.

The publication of “We’re going to kill you all”: Facebook fails to detect death threats against election workers in the U.S., while Youtube and TikTok succeed caps off a busy election cycle for Cybersecurity for Democracy. In the run-up to the midterm elections, co-directors Laura Edelson and Damon McCoy were both sought after as commentators by a number of media outlets, including The Hill and Forbes.