This is the text of a keynote presentation given at the Terrorism and Social Media (TASM) conference at Swansea University on June 17, 2024. It has been lightly edited for clarity, contains fewer images than were shown during the live presentation, and includes several hyperlink references that were not available to the live audience.

Phone Call

My phone rang just after midnight. On the line was one of my bleary-eyed colleagues from Facebook. They had just been alerted that the U.S. government believed agents working for the Iranian government planned to release a video the following day as part of a campaign to intimidate American voters. 

We did not know exactly what the video said or showed, but it had something to do with the Proud Boys, a street-fighting group whose leadership was ultimately imprisoned for its actions related to January 6. 

It was October 2020, weeks before the election. I remember slumping down onto the hardwood floor in my pajamas so we could talk through our options. 

Sure enough, early the following day an erratically edited video with a soundtrack from Metallica was released online. The video claimed to have been made by the Proud Boys and asserted that the group had stolen voter rolls and would target Democrats.  

The video seemed designed to intimidate Democratic voters, but it was so sloppily crafted that some thought it might be designed to boost Democratic turnout and embarrass Republicans. 

I’m not going to speculate about the partisan motives - and I have that luxury because this video ultimately had very little impact. 

Alert to the possibility of such a video, Facebook removed copies extremely quickly, and the campaign withered.

In short, this was a win - for Americans generally, for democracy more broadly, and for Facebook. 

The question today is: why was Facebook able to act so quickly - and thereby so meaningfully - in disrupting this operation? 

Was it because the government had given us a heads up - and we took their word that this was a nation-state influence operation?  No. Facebook did not rely on government assertions - even by the United States - to make such decisions during my tenure, and I doubt they would today. 

Was it because we could verify that it was an Iranian influence operation? No. Ultimately, Facebook did corroborate that Iranian-linked accounts pushed this campaign, but that kind of analysis takes time. That’s not what allowed us to act so quickly and effectively.

This particular foreign disinformation campaign was removed not because it was a foreign disinformation campaign, or because it aimed to intimidate voters, or even because it violated Facebook’s policies requiring people to use their real names and identities. 

It was removed because, two years earlier, Facebook had designated the Proud Boys as a hate group. In the nomenclature of the policy at the time, that meant that praise, support, and – most importantly for this case – representation of the Proud Boys was not allowed on the platform. 

When the Iranian propagandists decided to impersonate the Proud Boys they inadvertently ensured that their propaganda violated one of Facebook’s broadest and most aggressively implemented policies. 

Facebook’s Dangerous Organizations policy was built to easily enable the removal of mass-distributed content attributed to a designated group – and that allowed it to play a decisive role in this case. 

It was decisive not just because the fact pattern fit the policy, but also because that determination could be made quickly, and once the determination was made it could easily be applied at the scale of the disinformation campaign. 

So, is this all just a good news story about the value of sweeping policies? Of course not.

The great dilemma, of course, is that a policy broad enough to serendipitously apply to a disinformation campaign it was not designed for will also be broad enough to inadvertently apply to other circumstances it was not designed for - and in situations where a speaker’s intent is not as clearly nefarious as it was here. 

And because that broad policy decision can be made quickly, individual applications of the policy may not be scrutinized deeply. 

And, of course, because a particular determination may then be scaled, an unfortunate decision, whether at the policy or operational level, can have a far broader impact. 

It is community standards analysis as classic (or cliched) Shakespearean truth: a policy’s greatest strength is often also its tragic flaw. 

Introduction

My name is Brian Fishman and it is a great honor for me to be here with all of you. I used to lead the work on Dangerous Organizations at Facebook. I actually had business cards that read ‘Head of Dangerous Organizations’, which is a title I’ll likely never better. 

Today, I am a cofounder of Cinder, a software company that builds operational tools for platform governance, which our clients use to manage their Trust & Safety challenges and both train and monitor AI systems. 

Long ago, I led research at the U.S. Military Academy’s Combating Terrorism Center. 

We have come incredibly far as a community since 2005, when my colleagues at the CTC and I were stumbling around the internet in search of information generated by al-Qaeda and its supporters. 

At the time, our work gathering data on jihadi web forums was considered cutting edge, both by academics and various intelligence agencies (which, with a few exceptions, tended to discount open-source information gathering). 

By contemporary standards, however, our methods were rudimentary. They were not systematic, rarely quantitative, and - with a couple of exceptions - seldom took advantage of contemporary investigative techniques to track users across various platforms.

At CTC, we weren’t the first, nor were we the only ones struggling to crawl out of the primordial muck to understand violent extremism online. The Anti-Defamation League (ADL) semi-systematically tracked American white supremacists on the nascent internet in the early 1980s. 

And the Simon Wiesenthal Center campaigned to drive white supremacists off Beverly Hills Internet, a precursor to Geocities, the early website-building platform. (Note [on the slide] that Stormfront immediately sought hosting on servers in locales it deemed more friendly - in Russia and… Florida.)

Screenshot from Stormfront, November 1996

Despite these efforts, Geocities was used by jihadi groups as well, including Jund al-Islam, a Kurdish-jihadi group that became part of the justification for the invasion of Iraq.  

After 9/11, much of the energy to research terrorism online focused on al-Qaeda, its cousins, and its offshoots – including the folks that eventually created the so-called Islamic State. Unsurprisingly, that was our focus at West Point in the mid-2000s. 

The geopolitical environment then shaped how the media and civil society approached terrorism online as well. If anyone’s looking for a dissertation topic, here’s a hypothesis you might test:

  • Most media coverage of extremism online in the mid-2000s was written by national security reporters. It was a function of the post-9/11 wars, and stories were framed in those terms. 
  • That dynamic persisted through the early years of social media. National security professionals tried – vainly for a long time – to get social media companies to care about al-Qaeda and others increasingly operating not just online but on social media platforms. 
  • While working at New America – a think tank in DC – in 2010, I helped organize the meeting where the National Counter Terrorism Center (NCTC) first presented their “Community Awareness Brief” about Anwar al-Awlaki (an American member of al-Qaeda) to social media companies. At the time, the platforms were not particularly motivated to do much about the issue. 
  • But when those national security professionals turned to the media to raise the issue, who did they turn to? Not technology reporters in Silicon Valley, rather national security reporters in Washington and New York. And those reporters tended to frame these issues in the context of the ongoing wars in Iraq and Afghanistan. 
  • It was not until years later, when technology reporters were confronted directly by so-called Islamic State propaganda on Twitter, that they began to cover these issues aggressively. 
  • And their coverage was a bit different than that of the national security reporters. Instead of framing the issue in terms of the battlefield dynamics in Iraq and Afghanistan, they were motivated by the business of Silicon Valley and tended to see technology itself – more than geopolitics – as a core driver of events.

I suspect this shift had some benefits. For example, technology desks were more open to a broader conception of what sort of extremists deserved coverage. They were not bound to traditional “national security” issues and saw the parallels between nazis and other white supremacists – the original digital violent extremists – and the violent jihadist groups. 

But there was a cost as well. 

The perpetual journalistic need to describe newsworthy phenomena as new phenomena was coupled with the bias of technology desks to see technology as the core driver of events. 

This often meant complex violent movements were detached from their social and political context. The long history of digital extremism was largely ignored. And the manifestation of these groups online was presented as a function of technology itself, which by then meant social media.  

Clearly social media is very useful to terrorist groups. I know better than most that recommendation algorithms can help violent extremists build (and, after networks are disrupted, rebuild) networks online. 

Nonetheless, the tech media’s tendency to frame the phenomenon as “new” detached it from the social, ideological, and organizational mechanisms that have spawned terrorism for years and obscured the reality that violent extremists have used digital platforms, not just social media, almost from their inception. 

In some cases, this focus was deleterious to building a sense of responsibility among platforms. For example, Substack’s founder justified their lack of a crackdown on neo-nazis by indicating they did not use recommendation algorithms and ads, features they called the internet’s “original sin.” 

In my view, the internet’s true “original sin” is the naive belief among some in Silicon Valley that they have no responsibility to mitigate the risks associated with technological progress. 

Humility and Audacity

All that said, we are better off today than in the past. The work done by a whole host of people to understand terrorism online is so much more sophisticated and rigorous than our work back in the mid-2000s. We’ve come a long way in 20 years.  

Nonetheless, one of the things we got right at West Point 20 years ago was to embrace the dynamism of what we were studying. We were deeply opposed to the idea that anyone could be an “expert” on terrorist use of the internet. 

This was a multifaceted concept: the groups we studied, the geopolitical context in which they operated, and the technologies they used were changing so quickly that “expertise” on the phenomenon was impossible. 

Even recognizing the knowledge collected in this room, I still like this idea. 

It fosters humility and audacity, both qualities necessary to counter the adversarial and adaptive people, organizations, and ideologies we are here to discuss. 

On the one hand, the impossibility of expertise suggests that any knowledge you generate is partial and will probably soon be outdated, so you must be open to challenges to your conclusions and processes. Humility. 

On the other hand, it suggests that established institutions and leaders in the field are also chasing a dynamic reality - so why can’t you be the one to innovate the next big idea? Audacity.

That’s why I look at this august group and say with genuine admiration (and at the risk of insulting everyone) that we are in a room full of incredibly knowledgeable people but without any experts…

And the reason is that you’ve all had the audacity, humility, sense of civic responsibility, and barefoot ragamuffin irreverence for what has come before to study and build systems to counter dynamic problems that preclude both expertise and clear solutions. 

In this dynamic environment we are all students - and we can use that ethos to celebrate our wins, examine and learn from our failings, and bolster the audacity we will need to be better tomorrow than we were yesterday.  

Three Shifts

When it comes to companies dealing with Trust & Safety challenges generally, and violent extremism specifically, I see three great shifts happening today.  

First, many platforms are embracing Trust & Safety earlier in their lifecycle than in the past. This trend is not universal; many of those early Trust & Safety efforts are minimal and error-prone. Moreover, starting early does not necessarily translate into systematic programs that grow with a platform and the risk it accrues over time. 

Nonetheless, there is greater awareness among founders that they must attend to these issues. Indeed, the mere fact that a company like Cinder has been funded by VC firms is evidence that the broader start-up ecosystem is increasingly aware of Trust & Safety and its importance. 

That said, it’s important to keep in mind that young companies face intense resource constraints. Struggling just to keep the lights on, those that do commit to Trust & Safety often pay vendors for API-based classification services and then rely on those third-party labels to drive their decisions. Effectively, this means they are outsourcing their policies because those classifiers were trained on third-party policy definitions, not platform-specific ones.
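
To make that concrete, here is a minimal sketch of the pattern I mean - a hypothetical vendor endpoint and label set, not any specific product - where the platform’s enforcement decision is driven entirely by the vendor’s label taxonomy, so the vendor’s definitions effectively become the platform’s policy.

```python
import requests

# Hypothetical third-party moderation API; the endpoint, label names, and
# response shape are illustrative, not any real vendor's interface.
VENDOR_URL = "https://api.example-moderation-vendor.com/v1/classify"

# The platform's "policy" reduces to a mapping from the vendor's labels to
# actions -- the definitions behind those labels live with the vendor.
ACTION_BY_VENDOR_LABEL = {
    "terrorism": "remove",
    "hate": "remove",
    "harassment": "review",
}

def moderate(post_text: str, api_key: str) -> str:
    """Return an enforcement action based solely on the vendor's top label."""
    response = requests.post(
        VENDOR_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": post_text},
        timeout=10,
    )
    response.raise_for_status()
    label = response.json().get("top_label", "none")
    # Whatever the vendor means by "hate" or "terrorism" is what gets enforced.
    return ACTION_BY_VENDOR_LABEL.get(label, "allow")
```

Nothing in that flow reflects the platform’s own written standards; changing the policy means waiting for the vendor to change its labels.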

The second trend is the AI explosion. 

Advances in LLMs and classifier development mean companies can more easily instruct classifiers to apply platform-specific policies, which will alleviate the third-party classifier problem I just mentioned. In general, the ability to train AI systems more quickly and more cheaply will empower platforms to keep up with emergent and evolving threats. 
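
As a rough illustration of that shift - a sketch assuming the openai Python SDK (v1+), with a placeholder model name and policy text - a platform can hand a general-purpose model its own written standard and ask for a decision against that standard, rather than accepting a third-party label:

```python
from openai import OpenAI  # assumes the openai Python SDK, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder policy text -- in practice this would be the platform's own
# written standard, not a vendor's label definition.
PLATFORM_POLICY = """\
Content that praises, supports, or represents an organization the platform
has designated as dangerous is not allowed. Neutral news reporting and
clear condemnation are allowed."""

def classify_against_policy(post_text: str) -> str:
    """Ask a general-purpose model to apply the platform's own policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content reviewer. Apply this policy:\n"
                    f"{PLATFORM_POLICY}\n"
                    "Answer with exactly one word: VIOLATING or NON_VIOLATING."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content.strip()
```

The important change is that the policy text in the prompt belongs to the platform, so updating the policy updates the classifier.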

I envision a future where human reviewers increasingly monitor and benchmark first-tier decisions made by automated systems and review a smaller number of complex or borderline cases. 
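
One simple way to operationalize that monitoring role - sketched here with made-up field names and a made-up sampling rate - is to route a slice of the automated decisions to human reviewers and track the agreement rate so that classifier drift surfaces quickly:

```python
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Decision:
    content_id: str
    automated_action: str               # e.g. "remove" or "allow"
    human_action: Optional[str] = None  # filled in after human review

def sample_for_audit(decisions: List[Decision], rate: float = 0.02,
                     seed: int = 7) -> List[Decision]:
    """Randomly route a small fraction of automated decisions to humans."""
    rng = random.Random(seed)
    return [d for d in decisions if rng.random() < rate]

def agreement_rate(audited: List[Decision]) -> float:
    """Share of audited cases where the human agreed with the automated action."""
    reviewed = [d for d in audited if d.human_action is not None]
    if not reviewed:
        return float("nan")
    matches = sum(d.automated_action == d.human_action for d in reviewed)
    return matches / len(reviewed)
```

A falling agreement rate is a prompt to retrain the classifier or revisit the policy guidance, not proof of which side is wrong.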

This trend is not exactly new - five years ago at Facebook, AI systems regularly scored more accurately than human reviewers, though they were (and are) prone to strange, inexplicable mistakes that a human being would likely avoid.

But we should be careful not to fetishize human review. At scale, human review teams have predictable error rates. Human beings all bring their own biases, and those biases can matter a great deal when reviewing for potential terrorist content. 

Early in my tenure at Facebook we discovered that a critical human review team had simply ignored the company’s list of formally-designated terrorist groups and instead was using their own list. It was a deliberate effort to circumvent policy. 

AI will not attempt such deception.

But the future of fully-automated Trust & Safety systems will not arrive overnight. Classifiers still struggle to adequately understand highly contextualized language and images. And systems that can reliably understand multi-modal content (meaning media and text) are still expensive to use at scale. 

This transition will happen - but at the speed of financial and operational viability, not simply at the bleeding edge of what’s technically possible. 

There’s been a lot of chatter about the abuse of AI systems by extremist groups. Of course this is happening and will continue to happen. Violent extremists adopt new technology along with everyone else. 

It’s not clear to me that this will substantially change the impact of their messaging, with one key exception: AI-driven propaganda is more likely to be covered by mainstream media outlets simply because it is AI-driven. And that coverage is likely to celebrate the technological sophistication of those groups and thereby inadvertently bolster them. 

This sort of dynamic has occurred before, with those technology journalists I mentioned previously. The so-called Islamic State’s early use of Twitter was powerful in large measure because it activated unwitting technology journalists to write stories highlighting the technical acumen of the group, not simply because of the native reach of propaganda on Twitter itself. 

That is the danger of assessing and writing about technical innovation without adequate context. I worry about a similar dynamic today with AI. 

What about the AI companies themselves? 

Many of the larger firms are moving aggressively to identify and prevent abuse at the model level. Some are even building Threat Intelligence teams to conduct coordinated takedowns of nefarious actors. These efforts draw on established “Trust & Safety” processes, but the terminology is somewhat different. Many in the AI community refer to this work as “Responsible AI,” “AI Safety,” or “AI Governance.” The names are different, as is the focus on model outputs, but many of the processes are the same. 

It is important to remember that the AI space is bloated and highly competitive. Many of the companies in it will die. At the same time, public discourse about AI recognizes – and sometimes overstates – the risks of this technology. 

As my cofounder Glen Wise first observed to me, that can produce perverse incentives for AI companies. Abuse of their tools may spur a public discussion about the power of a particular platform, which may distinguish them in a very crowded field. Moreover, the reputational downsides of an incident are mitigated because, unlike the public narratives about specific social media firms, AI mishaps tend to be attributed to the technology as a whole. 

I’d argue that too much of the criticism of social media focused on the decisions and features of specific platforms while ignoring the fundamental risks associated with technology that connects people instantaneously at global scale. 

With AI, we should watch for the inverse mistake - blaming the technology broadly and ignoring the peculiarities of specific corporate decisions and features.  

Perhaps, you might think, the third theme I’ll mention – regulation – can redress these incentives. And perhaps not.

There is no doubt that companies increasingly aim to comply with the Digital Services Act and the UK’s Online Safety Act, among others. But the regulations are no panacea. Overall, I think they will eventually raise the floor for Trust & Safety investments by companies, while at the same time lowering the ceiling. 

It will take time to raise the floor though. Take the DSA. The Very Large Online Platforms (VLOPs) have extensive obligations – and some of those companies are likely to receive intense scrutiny. 

So far, though, those enforcement efforts have been highly predictable and align with the political vulnerability of the platforms being examined. I’m sure the legal teams at X and Meta were not surprised to find themselves investigated.

The thing is: the legal teams at a whole host of smaller platforms were also not surprised to find X and Meta investigated. They can read the room. They understand the politics of regulatory enforcement - and the capacity of the regulators. 

And, as a result, they have a calculation to make: do we invest heavily in DSA compliance now, when we know that our company is unlikely to be subject to enforcement, or do we spend some of those resources investing in the next killer feature? Only 24 companies are registered with the EU’s Statement of Reasons database. 

As anyone who has written platform policies at a social media firm can attest, policy is near meaningless without a plan to operationalize it. 

During my tenure at Facebook, I got into a debate with my friend JM Berger at a workshop. He was arguing that Facebook’s policy on white supremacist hate groups was weak compared to its policy regarding al-Qaeda and the so-called Islamic State. I was arguing that Facebook’s policies toward these groups were essentially the same, but its operational posture toward the two categories differed. 

We were both right - and there is a lesson in that for regulators today. 

When I first got to Facebook, the company did have broad, largely similar, policies addressing both hate groups and terrorist organizations. But Facebook also had limited operational resources and needed to develop novel mechanisms to proactively target material from these groups. 

That meant Facebook had to choose where to focus those resources. At first, we prioritized what we called “global” terrorist groups - effectively al-Qaeda, the so-called Islamic State, and the Taliban. Then we expanded to other terrorist groups, then hate groups, and then large-scale criminal organizations. 

JM had recognized that enforcement disparity and interpreted it as a policy difference. In a technical sense, that wasn’t correct, but in terms of effective impact, he was making a critical and accurate point. Was Facebook’s policy really equal if it wasn’t enforced with the same effort and efficacy? 

Given the resource constraints and the risks of deploying new detection mechanisms, I’d still defend the prioritization decisions we made – and as the new enforcement processes matured the overall balance improved dramatically. 

But it’s indisputable that prioritization decisions carry costs – including lagging enforcement against a range of really terrible entities. 

The EU’s enforcement prioritization decisions matter as well, and specifically its focus on VLOPs: keen general counsels at smaller tech firms calculate that the risk of enforcement against them is low because of the operational limitations of the EU enforcement mechanism and its concentration on the largest platforms. 

And they are investing, or not investing, accordingly. 

Regulation is an exercise in prioritization. And prioritization has consequences. For example, the EU’s ePrivacy Directive prohibited automated scanning of private messages for Trust & Safety violations, including terrorist content. 

This is significant because – and I cannot emphasize this enough – the really, really bad stuff happens in private messages. The really, really bad stuff happens in private messages. 

As a general rule, the planning, recruitment, and operationalization of plots to attack churches, mosques and synagogues, and disrupt democratic processes happens in messaging applications. Not on open Facebook pages. Not in TikTok videos. Private messages.

Platforms now have fewer tools to identify terrorist messaging in those messaging applications – even the unencrypted ones – and regulation is a key reason. (The European Council is considering a measure that would require scanning messages, so this is still an active issue.)

There is a tradeoff here between security principles and privacy - and everyone must make their own decision about what they want to prioritize. But there are tradeoffs, and even worthwhile tradeoffs have consequences. For now, one is that savvy terrorists have more safe haven on messaging surfaces and applications. 

But whereas the responsibility for such tradeoffs used to belong solely to platforms, it is now shared by regulators. And that means that when those tradeoffs lead to negative real-world outcomes – and they will sometimes – we must hold the regulators and the legislators that set their direction accountable too, and not simply reflexively point at the platforms responding to the incentives placed in front of them. 

Charlotte Willner at the Trust & Safety Professional Association jokes that the T&S in Trust & Safety stands for “tradeoffs and sadness.” That’s the life of internet regulators now as well. Welcome to the club. 

Close

I am honored to be here with all of you. A sustained conference series like TASM would have been unthinkable 20 years ago when I started doing this work. And today we have specialists from companies, regulators, academia, and civil society coming together to discuss and debate these questions. 

It’s invigorating. The questions we face are so dynamic that they defy “expertise,” but those same questions rejoin ancient and perennial debates about privacy and security, unfettered speech and the obligations of community, the universality of morality and the specificity of language, culture, and tradition. 

With those stakes, I hope we can all find the humility to be challenged over the next several days, and to learn from one another. May we also have the audacity to generate new ideas, to speak up when something demands clarification, and ultimately to make this community even stronger.

Thank you so much to Stuart, Swansea, and the entire TASM community for inviting me here today. I am proud to be part of this group and excited to spend the next few days learning from all of you.
