Farid to Facebook (and Others): Root Out Extremist Content

Professor Hany Farid wants tech companies to do more to combat terrorism.

“Facebook, Twitter, and YouTube are each being used to recruit and radicalize extremists around the world,” says Professor Hany Farid. (Photo by Eli Burakian ’00)

Hany Farid, the Albert Bradley 1915 Third Century Professor of Computer Science and chair of the Department of Computer Science, says one of the most effective weapons against terrorism is not being fully deployed. He’s not talking about armaments. He’s talking about algorithms.

Having helped technology companies identify and block online child pornography, Farid has developed a similar method that he says can be used to detect terrorist content, including tweets and messages used to recruit extremists. But he says social media platforms, including Facebook, should move faster to adopt this technology. Known as “the father of digital forensics,” Farid recently spoke with Dartmouth News about this issue.     

How are radical extremists using sites like Facebook for recruitment?

Facebook, Twitter, and YouTube are each being used to recruit and radicalize extremists around the world. For example, the leader of the most recent London Bridge attacks was reportedly radicalized and driven to violence in part by online lectures from extremist clerics. This is not an isolated example. According to a recent report, 45 percent of terrorism cases since the attacks of Sept. 11, 2001, have involved online radicalization. In 2017, that number is 90 percent.

Is there software that can find that content? How does it work?

The same technology that can identify child pornography can be used to identify extremism-related content. This technology works by extracting a distinct digital signature from a digital image and then comparing this signature to all uploads. This technology can effectively stop the redistribution of known bad content. This core technology has been in operation since 2009 but has only been applicable to digital images. In collaboration with an NGO—the Counter Extremism Project—I have developed the next generation of this technology that can also analyze digital video and audio recordings. We don’t yet have technology that can reliably analyze text, but this is an area that should be aggressively investigated.
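
To make the matching step concrete, here is a minimal, illustrative sketch in Python. It uses a simple "average hash" as a stand-in for the robust signature extraction Farid describes; the deployed systems (Microsoft's PhotoDNA for images, and the video and audio extension Farid built with the Counter Extremism Project) are proprietary and far more resistant to re-encoding, resizing, and cropping. The function names and the match threshold below are illustrative assumptions, not the actual algorithm.

```python
# Illustrative sketch only: a simple perceptual "average hash" stands in for
# the robust, proprietary signatures used in production systems.
from PIL import Image  # pip install Pillow

HASH_SIZE = 8  # 8x8 grid -> a 64-bit signature


def average_hash(path: str) -> int:
    """Extract a compact signature: downscale, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two signatures."""
    return bin(a ^ b).count("1")


def is_known_bad(path: str, known_bad_hashes: set, threshold: int = 5) -> bool:
    """Flag an upload whose signature falls within `threshold` bits of any
    signature in the database of previously identified content."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold for bad in known_bad_hashes)
```

In deployment, every upload would be hashed and compared against the shared database before it goes live; because the comparison operates on compact signatures rather than raw media, it can scale to the upload volume of a platform like Facebook or YouTube.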

What are technology companies saying about the power of that software to root out and block terrorists on the internet?

For decades, technology companies have claimed that they cannot rein in online abuses. They said this first in the early 2000s on the issue of child pornography, and now on the issue of extremism. These companies hide behind the scale and complexity of the internet as an excuse, although I’ll note that the scale and complexity of the internet hasn’t stopped them from building complex systems to target advertising, collect personal data, and profit handsomely along the way.

With increased criticism and threats of regulatory legislation from the United Kingdom, the European Union, and the United States, technology companies have finally said that they are going to do more to rein in abuses. There is no question that this is a difficult problem, but there is also no question that reining in online abuses is not a priority for these companies, and so they have been frustratingly slow in responding to very real threats with very real consequences.

Can you comment on Facebook's recent announcement about its decision to use new algorithms to detect and remove words, images, and video used as terrorist propaganda?

The recent statements by Facebook and YouTube are a good start, but much work remains. It is critical that these companies invest significant resources—not just their public relations team—in this effort and that they are transparent in what they are doing and how effective their approaches are. It is also critical that they continue to innovate because, as with the spam/anti-spam and virus/anti-virus efforts, we know the extremist groups will not simply go away—they will adapt, and we have to continually adapt if we are to take the issue of online safety seriously.

Are civil liberties advocates voicing concerns about infringing on freedom of speech?

There are serious questions about what does and does not constitute extremist speech. The terms of service of all major technology companies, however, already specify that certain types of speech that may be protected by the First Amendment are not allowed on their platforms. For example, Facebook's terms of service read, in part, "You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence. We can remove any content or information you post on Facebook if we believe that it violates this statement or our policies." I don't advocate imposing specific rules or regulations. I do advocate that technology companies enforce their own stated terms of service. This alone would go a long way toward reining in online abuses.

You have said that technology companies should be motivated to act “not just for the social good, but for their own good.” What do you mean by that?

From child exploitation to online extremism, the illegal sex trade, cyber-bullying, revenge porn, cyber-crime, fake news, malware, and trolling, the internet, and social media in particular, is becoming poisonous. By not addressing these abuses head-on, these companies run the risk of alienating their users and, more importantly for them, their advertisers.

Charlotte Albright