A View from the CT Foxhole: An Interview with Brian Fishman, Counterterrorism Policy Manager, Facebook

Author: Paul Cruickshank

CTC: There's long been concern that extremist content posted and shared on social media is helping to fuel terrorism. As the social media company with the largest user base in the world, what is Facebook doing to counter terrorism?

Fishman: The bottom line is that there is no place for terrorism on Facebook--for terrorist actors themselves, terrorist groups, or supporters. This is a long-standing Facebook policy. (a) Our work countering terrorism now is more vital than ever because of the success ISIS [the Islamic State] has had in distributing their message via social media. But our basic policy framework is very clear: There should be no praise, support, or representation of terrorism. We use a pretty standard academic definition of terrorism that is predicated on behavior. It is not bound by ideology or the specific political intent of a group.

The sheer size and diversity of our user base--we have 2 billion users a month speaking more than 100 languages--does create significant challenges, but it also creates opportunities. We're striving to put our community in a position where they can easily report things on Facebook that they think shouldn't be there.

We currently have more than 4,500 people working in community operations teams around the world reviewing all types of content flagged by users for potential terrorism signals, and we announced several months ago that we are expanding these teams by 3,000.

Every one of those reports gets assessed, regardless of what it was reported for, to see whether there is anything that looks like it might have a nexus with terrorism. If the initial review suggests there might be a connection, the report is sent to a team of specialists who dig deeper to confirm whether that nexus exists. And if we find support of some kind, someone representing themselves as a terrorist, or another indication of a terrorist connection, then we will remove the content or account from the platform.

CTC: Earlier this year, a U.K. parliamentary report on online hate and extremism asserted "the biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe." (1) For her part, British Prime Minister Theresa May stated after the June London Bridge terrorist attack, "We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet and the big companies that provide internet-based services provide." (2) Is the industry as a whole doing too little to combat terrorism content?

Fishman: There was a time when, I think, companies were still trying to wrap their heads around what was happening on their platforms. And so there was a learning period. Facebook's policy on this is really clear: terrorists are not allowed to be on Facebook. So I don't think the suggestion that technology companies must be compelled to care is helpful at this stage. From my vantage point, it's clear technology companies across the industry are treating the problem of terrorist content online seriously. Now we need to work constructively across industry and with external partners to figure out how to do that job better.

CTC: You're an alumnus of the Combating Terrorism Center who has long studied and written about terrorism. What's the transition been like to your current role at Facebook?

Fishman: It's tremendously gratifying to take my experience at a center of academic expertise and the engagement that I had with cadets and folks in government and translate it to a Facebook environment. I work within a wider product policy team whose job it is to set policy for Facebook broadly, including community standards. We've broken out a dedicated team on counterterrorism that I lead and are growing that team with some really talented people.

I think the biggest point of learning for me has been figuring out how to scale an operation to enforce guidelines consistently and effectively. In my experience, until you've had to manage the scale Facebook operates at, even when somebody gives you some of the numbers, you still have to wrap your head around what they mean in terms of language coverage, cultural knowledge, and having the right people in place to do the right things. That's something I think you can't fully prepare yourself for. You need to get in the trenches and do it.

CTC: Given the sheer volume of material constantly being posted on social media by extremist actors, what are some of the strategies you are using to remove such material?

Fishman: I mentioned reports from the community earlier, but we are increasingly using automated techniques to find this stuff. We're trying to enable computers to do what they're good at: looking at lots of material very quickly and giving us a high-level overview. We've also recently started to use artificial intelligence [AI]. But we still think human beings are critical because computers are not yet very good at understanding nuanced context when it comes to terrorism. For example, there are instances in which people are putting up a piece of ISIS propaganda, but they're condemning ISIS. You've seen this in CVE [countering violent extremism] ...
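The division of labor Fishman describes--computers screening at scale, humans judging nuanced context--can be sketched in code. The following Python example is a purely hypothetical illustration and not Facebook's actual system: the hash set, the keyword-based classifier, and the thresholds are all stand-ins invented for this sketch. Production systems would use perceptual hashing and trained models rather than exact digests and keyword matching.

```python
# Hypothetical triage pipeline (illustrative only, not Facebook's system):
# known re-uploads are removed automatically, a model score handles the
# clear cases, and everything ambiguous is routed to a human reviewer.
import hashlib
from dataclasses import dataclass

# Assumption: digests of previously removed propaganda, supplied elsewhere.
KNOWN_BAD_HASHES = {"placeholder-digest"}

@dataclass
class Post:
    post_id: str
    text: str
    media_bytes: bytes = b""

def media_digest(post: Post) -> str:
    """Exact-match fingerprint of attached media (a crude stand-in for
    the perceptual hashing used against known terrorist imagery)."""
    return hashlib.sha256(post.media_bytes).hexdigest()

def model_score(post: Post) -> float:
    """Stand-in for a trained classifier returning P(terrorist content).
    Here it is a toy keyword heuristic, purely for illustration."""
    flagged_terms = ("join the caliphate", "martyrdom operation")
    return 0.9 if any(t in post.text.lower() for t in flagged_terms) else 0.1

def triage(post: Post) -> str:
    # Stage 1: exact matches to known removed content need no judgment.
    if media_digest(post) in KNOWN_BAD_HASHES:
        return "auto_remove"
    # Stage 2: the model decides only the unambiguous extremes...
    score = model_score(post)
    if score >= 0.95:
        return "auto_remove"
    if score <= 0.05:
        return "no_action"
    # ...while the middle band (e.g., propaganda quoted in order to
    # condemn it) goes to a human who understands the context.
    return "human_review"

if __name__ == "__main__":
    post = Post("p1", "A news report condemning a martyrdom operation")
    print(triage(post))  # -> human_review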
