Why Can't Facebook Take Down All Terrorist Content?

This article first appeared on Just Security.

The world's giant social media companies, including Facebook, will appear before the Senate Commerce Committee today for hearings titled, "Terrorism and Social Media: #IsBigTechDoingEnough?"

Last year, the British House of Commons' Home Affairs Committee held hearings with the same companies. In mid-2017, the parliamentary body issued a scathing report concluding that the social media companies were "shamefully far from taking sufficient action."

Facebook has long stated that it recognizes its "responsibility" to eliminate terrorist content and has promised governments that it will do better. New evidence from researcher Eric Feinberg, however, points to continuing holes in the monitoring of content that violates Facebook's own Community Standards.

Feinberg shared his findings with Just Security; they are reported here for the first time.

The British parliamentary report repeatedly criticized the social media giants for efficiently identifying and removing content that negatively affects their bottom line, such as advertising and copyright violations, while not showing the same commitment to detecting and removing extremist content. The report stated:

The major social media companies are big enough, rich enough and clever enough to sort this problem out—as they have proved they can do in relation to advertising or copyright. It is shameful that they have failed to use the same ingenuity to protect public safety and abide by the law as they have to protect their own income.

Feinberg is a founding partner of GIPEC (Global Intellectual Property Enforcement Center) and developed a patented tool for detecting fraudulent and malicious activity on social media platforms. His research has previously led to significant changes by advertisers who were alerted to instances in which their content was associated with videos promoting terrorist, neo-Nazi, and other hate groups.

His research has also been featured in news reports on social media companies' failure to control other malicious activity on their platforms including illegal opioid sellers, pirated NFL games, malware that drains bank accounts, and more.

Does Facebook have an ISIS problem?

The first lesson of Feinberg's research on pro-ISIS accounts on Facebook is that his tool appears to detect content that Facebook's own algorithms and monitoring systems have failed to identify.

In a one-month period spanning December 2017 to January 2018, Feinberg anonymously reported dozens of pro-ISIS pages to Facebook. In 56 percent of those cases, Facebook removed the offending page. In other words, this material had apparently escaped Facebook's own methods for identifying and removing malicious content.

The second lesson, however, is arguably more damning—it shows that Facebook decided not to remove pages that clearly violated its standards, even after being directly informed of the malicious content.

The most significant findings accordingly lie in the other category of Feinberg's cases: the 44 percent of reports in which the company told him that the content "doesn't go against the Facebook Community Standards." The problem is that it is difficult to discern any meaningful difference between the content that Facebook removed and the content it retained.

Judge for yourself. Content that Facebook declared did not violate its Community Standards included a photo of hooded gunmen aiming their weapons in an urban neighborhood with the caption, "We Will Attack you in Your Home."

[Screenshot of the reported post. Source: Facebook]

In another case, Feinberg reported a page that presented itself as an online publication inspired by ISIS's notorious propaganda magazine, Dabiq, though in this instance it was called Dabiq Bangla News Agency. The page appeared to be styled to promote ISIS among the Bangladeshi community.

When Feinberg reported the page to Facebook on December 19, the company responded two days later that the page did not violate its Community Standards. On Monday of this week, two days before the Senate hearing, Facebook appears to have removed the page. But it does not require much work to find other Facebook publications like it.

Another page that the company said "doesn't go against the Facebook Community Standards" praised the Orlando nightclub attack in the name of God and said "Be like Omar and do like him." The gunman, Omar Mateen, had reportedly pledged allegiance to ISIS before carrying out the massacre.

[Screenshot of the reported post praising the Orlando attack. Source: Facebook]

This content appears to clearly violate Facebook's explicit policies. In its Community Standards, Facebook states that the company will "remove content that expresses support for groups that are involved in the violent or criminal behavior," including "terrorist activity," as well as content supporting, praising, or condoning such activities.

This public statement is also consistent with internal guidelines and training manuals that Facebook provides its moderators, according to documents leaked to The Guardian last year.

In a Q&A post on the company's website, Monika Bickert, Head of Global Policy Management for Facebook (who will appear before the Senate on Wednesday), said that the company has focused its most cutting-edge techniques on ISIS- and Al Qaeda-related content.

She also wrote, "We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. … [W]e don't want Facebook to be used for any terrorist activity whatsoever."

She added that "algorithms are not yet as good as people when it comes to understanding this kind of context." But how good are the people?

Back in May of last year, the British parliamentary committee concluded with respect to the social media companies in general, "The interpretation and implementation of the community standards in practice is too often slow and haphazard. We have seen examples where moderators have refused to remove material which violates any normal reading of the community standards." The social media companies also pledged back then to do better.

A third finding in Feinberg's research points to the possibility of a specific loophole involving fake or inauthentic accounts. Facebook states that it has "gotten much faster at detecting new fake accounts created by repeat offenders." Feinberg, however, discovered several cases in which an existing but semi-dormant account appears to have been hacked and then used to promote ISIS content.

[Photo: Facebook CEO Mark Zuckerberg at Facebook's F8 Developer Conference on April 18, 2017, at McEnery Convention Center in San Jose, California. Justin Sullivan/Getty]

What's more, Feinberg told Just Security that the same indicators that suggest an account may have been hacked could presumably be fed into an internal algorithm to detect such cases. These variables include a profile name that no longer matches the name in the page's web address (URL) and abrupt changes in language. Feinberg shared examples with Just Security in which earlier posts on a single page were in Spanish and more recent posts were in Arabic.
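To make the idea concrete, the following is a minimal sketch of the kind of heuristic Feinberg describes, not Facebook's actual system. The field names (profile_name, page_url, posts), the example values, and the use of the open-source langdetect library are assumptions for illustration only.

    import re
    from langdetect import detect  # open-source language detector: pip install langdetect

    def name_to_slug(profile_name):
        """Approximate the vanity slug a display name would produce in a page URL."""
        return re.sub(r"[^a-z0-9]", "", profile_name.lower())

    def looks_hijacked(profile_name, page_url, posts):
        """Flag an account when (a) its display name no longer matches the name
        embedded in the page URL, or (b) the language of its posts shifts abruptly."""
        url_slug = page_url.rstrip("/").rsplit("/", 1)[-1].lower().replace(".", "")
        name_mismatch = name_to_slug(profile_name) not in url_slug

        language_shift = False
        if len(posts) >= 2:
            # Compare the earliest available post against the most recent one.
            language_shift = detect(posts[0]) != detect(posts[-1])

        return name_mismatch or language_shift

    # Hypothetical example mirroring the pattern Feinberg describes: a page whose
    # URL still reflects an old Spanish-language identity, but whose display name
    # has changed and whose recent posts are in Arabic.
    print(looks_hijacked(
        profile_name="Abu Example",                        # new display name (hypothetical)
        page_url="https://facebook.com/maria.garcia.75",   # original vanity URL (hypothetical)
        posts=["Hola a todos, feliz cumpleaños a mi prima",  # earlier post (Spanish)
               "السلام عليكم ورحمة الله وبركاته"],            # recent post (Arabic)
    ))

In any real system these two signals would presumably be combined with many others and tuned against false positives, but they illustrate why the pattern Feinberg describes is, in principle, machine-detectable.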

Lack of transparency, and where the fault lies

Facebook possesses enormous power to influence the lives of a huge percentage of the world's population. While some may criticize the company's failures to stop terrorism-related content, others may criticize how it regulates what it deems terrorism-related content.

But how can the public and the people's elected representatives even begin to grapple with these questions if Facebook is not fully transparent?

Transparency in standards: One of the key questions, for example, is how the company even defines terrorism and related violence. [Stay tuned for more on that topic at Just Security.]

Transparency on the scale of terrorist content: When the Guardian reported internal Facebook documents showing that the company's moderators had identified over 1,300 posts as "credible terrorist threats" in a single month, "Facebook contested the figures but did not elaborate. It also declined to give figures for other months."

Transparency in resources committed to stopping terrorist content: In 2016 and 2017, Google, Facebook and Twitter refused to tell British Parliamentary committees how many staff they employ to monitor and remove inappropriate content.

What is remarkable is that we are still talking about terrorist content on Facebook, and specifically ISIS content, in 2018. This has been a high-profile issue since 2014.

That year, the head of GCHQ, Britain's signals intelligence agency, wrote that the technology companies seemed to be "in denial" about how ISIS was misusing their platforms. Yet Facebook has arguably been more aggressive than other social media platforms in policing terrorist content, and since late 2016 it has partnered with other large tech companies to pursue shared solutions.

But something is obviously still missing.

It is notoriously difficult to identify extremist content algorithmically, though Feinberg's tool appears to catch material that Facebook's own systems miss. Facebook's problem may also lie in its systems of human moderation, including poor policy guidance or overburdened and under-resourced staff.

Investing more resources in policing terrorist content has obvious financial implications for tech companies, and thus there is built-in resistance. The public has every reason to get to the bottom of why the social media giants' actions are not working.

Now Congress is increasing its own knowledge and awareness of the problem, which the companies have been unable to resolve on their own.

Ryan Goodman is Co-Editor-in-Chief of Just Security, the Anne and Joel Ehrenkranz Professor of Law at New York University School of Law and a former Special Counsel to the General Counsel of the Department of Defense (2015-2016).
