Digital communications platforms are part of the geopolitical battlespace – and that reality will test platforms in new ways. Companies today must balance competing interests: to honor victims, mitigate or prevent future violence, abide by legal requirements, limit harmful experiences for their users, and ensure their platforms can be used for the communication necessary to understand an often brutal and violent world. These imperatives do not always align. No company will balance them perfectly – that cannot and should not be our expectation – but the companies that prepared for these moments will manage better than those that did not.

We should expect platforms to take a hard line against terrorism and the horrors of modern urban warfare, but we also do not want platforms to whitewash the moment. Technology firms should endeavor to create safe experiences online where information can be trusted, but the physical world is not safe, and the fog of war long predates social media. In a moment of war, technology firms, like all of us, must balance the desire to respect victims and limit future violence with the responsibility to bear witness honorably. But that witnessing has to occur within platform guidelines, minimize the risk to human life, and be grounded in authentic material, not the disinformation and misinformation that are rife in conflict.

Hamas’ terrorist attack on October 7 illustrates the great challenges platforms face during a modern military conflict. The purpose of this post is to highlight practical tools for platforms trying to manage terrorist content and the digital ills associated with modern war.

The Complex History of Terrorism Online

Extremist use of digital tools is not new. American white supremacists built bulletin board systems on Commodore 64s in the early 1980s. Many terrorist organizations, including Hamas, established websites in the 1990s. The prominent white supremacist forum Stormfront was first hosted on Beverly Hills Internet, the earliest incarnation of the website-hosting platform Geocities. Ansar al-Islam, a Kurdish jihadi group whose presence helped justify the 2003 invasion of Iraq, used MS Word templates to create multilingual websites hosted on Yahoo!-owned Geocities in the early 2000s.

The rise of the Islamic State – and, in particular, its recruitment of Westerners via the Internet – forced technology platforms to address violent extremism more seriously. As a result, large companies are better situated today to manage Hamas’ atrocities and the violence to come than they were ten or even five years ago. Many platforms have internal processes for crisis management, have developed or purchased tooling and intelligence, and can turn to cross-industry bodies that facilitate information-sharing. But terrorists are adversarial actors, and the present fight is very likely to raise novel challenges.

Policy Considerations for Platforms Managing Political Violence 

Platform goals regarding political violence are multifaceted and complex: to avoid facilitating future violence, respect victims, create a safe environment online for users, enable communication that allows users an accurate understanding of the conflict, and avoid government sanction. In crisis, none of these are easily achieved in isolation, and steps taken to advance one goal sometimes put others at risk.

There are some baseline legal requirements that affect platforms. For U.S.-based platforms, sanctions under the International Emergency Economic Powers Act (IEEPA) prohibit knowingly allowing U.S.-designated terrorist organizations to operate on their services. That includes Hamas. Large U.S.-based platforms tend to ban terrorist groups, including Hamas, in their internal policies, both as a matter of compliance and of principle. But not every platform is U.S.-based. And some international bodies, most notably the United Nations, do not sanction Hamas. So, while it is true that there are important legal frameworks that shape platform approaches toward terrorism broadly, and Hamas specifically, it is also true that platforms choose how to interpret regulations and what compliance risks to accept. As a result, platform approaches may differ significantly.

It is worth thinking about platform rules in three broad ways: actors, behavior, and content.

The most important actor-level policies proscribe certain actors – most notably terrorist organizations, militias, and hate groups. Such policies are aggressive but not foolproof. Terrorist groups are adaptive, and many develop proxy networks that obscure their true ownership – a tactic that in some cases may implicate platform rules requiring clarity about authenticity, ownership, or control. Groups will very often distribute their activities across multiple platforms – advertising on one, fundraising or communicating on another – which can complicate policy and enforcement decisions because it limits the information readily available to reviewers.

Enforcement decisions on proxy organizations can cause real political challenges for platforms. Activist groups may challenge the removal of groups that appear neutral on the surface but whose sub rosa behavior – messaging, IP links, and the like – indicates deliberate affiliation with a banned organization. On the flip side, governments will sometimes point to groups they claim are managed by a terrorist organization but fail to provide significant justification. Given the pressure some governments put on platforms to suppress political opposition, it is important to be wary of such claims. At the same time, there may very well be vexing scenarios where governments, aware of links between front groups and terrorist organizations through means they do not share, cannot substantiate their conclusions to understandably wary digital platforms.

Content-level prohibitions on terrorist content are often quite complex. Platforms often aim to prohibit activity that actively supports a terrorist group or attack while allowing neutral reporting about terrorism and counterspeech against the groups themselves. That means a propaganda video may be allowed in one context and banned in another. Such complexity can lead platforms to lean on other policies to remove terrorist content – for example, prohibitions on gore or violence that apply in all circumstances rather than only in some.

Prohibitions on terrorist groups are not the only policies that matter in conflict. Most platforms limit calls to violence, slurs, and broad dehumanizing statements about groups of people. War, however, is violence. People choose war, and when they demand, endorse, or threaten it in digital space, those representations are often simply shadows of the horrific specter that stalks the real world. To ignore such calls for violence does not safeguard the real world; it whitewashes it. To manage these nearly impossible challenges, platforms build complex sets of rules that allow calls for violence in broad terms but not calls that target specific locations at specific times. Such parsing by platforms is understandable – and understandably misunderstood by users and platform observers.

Disinformation, deception, and the fog of war are fundamental to political violence and war. Internet platforms cannot eliminate them, but they do have a responsibility to limit the exploitation of their services. Platform policies on disinformation are critical to managing political violence, though such tools are very difficult to employ in real time – and speed matters when the purpose of deception is to create a tactical advantage or shape a political narrative. Platform policies should enable action against disinformation, but platforms should be very humble about their ability to manage it adequately in a crisis – and, recognizing their limitations, set the expectation with users that combatants are likely to spread false information surreptitiously.

Operations in Crisis 

Platforms use a range of operational efforts to manage political violence and drive decisions across the entire Trust & Safety spectrum. Large platforms will spin up teams of national security veterans to manage a crisis in real time; smaller platforms will often struggle to identify key individuals who can be spared from regular work to deal with the particulars of the crisis.

Scaled Review Teams

Scaled review teams are often critical, especially those that speak relevant languages and can track rapidly evolving linguistic shifts. But such teams face challenges in crisis moments. For starters, employees and contractors may be affected by the conflict themselves or need to care for loved ones. Large companies also know that political conflicts are often mirrored inside their own workforces and outsourced review teams. Visceral political debates can be disruptive and, in worst-case scenarios, lead to explicit efforts to support partisans. Cultural and linguistic expertise is critical during a crisis, but insider-risk threats are real as well.

Investigative Teams

Larger platforms often build investigative teams that can identify networks of accounts working in concert, including accounts that deliberately obscure their association with a problematic group or network. This is useful for uncovering terrorist-related accounts but also for surfacing disinformation networks. These teams typically require different training, tooling, and oversight than scaled review teams. Another great advantage of these teams is that they are very nimble: highly trained investigators can often spin up on new threats quickly, which matters in a dynamic conflict where adversaries deliberately seek to evade platform restrictions.
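
To make that concrete, here is a minimal sketch of the kind of signal-based clustering such investigations often start from. The account IDs and infrastructure signals (shared IPs, device fingerprints) are hypothetical, and real investigative tooling draws on far richer features and keeps humans in the loop before any enforcement decision.

```python
# Minimal sketch: group accounts that share infrastructure signals.
# All identifiers below are hypothetical.
from collections import defaultdict
from itertools import combinations


def cluster_accounts(signals: dict[str, set[str]]) -> list[set[str]]:
    """Group accounts into clusters connected by at least one shared signal."""
    # Invert the mapping: signal value -> accounts exhibiting it.
    by_signal = defaultdict(set)
    for account, values in signals.items():
        for value in values:
            by_signal[value].add(account)

    # Union-find over accounts linked by a shared signal.
    parent = {account: account for account in signals}

    def find(a: str) -> str:
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for accounts in by_signal.values():
        for a, b in combinations(accounts, 2):
            parent[find(a)] = find(b)

    clusters = defaultdict(set)
    for account in signals:
        clusters[find(account)].add(account)
    # Only multi-account clusters are interesting to an investigator.
    return [c for c in clusters.values() if len(c) > 1]


if __name__ == "__main__":
    demo = {
        "acct_1": {"ip:203.0.113.7", "device:abc"},
        "acct_2": {"ip:203.0.113.7"},
        "acct_3": {"device:xyz"},
    }
    print(cluster_accounts(demo))  # one cluster containing acct_1 and acct_2
```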

Automation

Automation is also critical. Simple techniques like hash-matching and keyword matching are important tools even though they have serious limitations: neither catches novel threats, and both require constant upkeep to stay useful. More sophisticated AI can improve on both counts, evolving more quickly and with less manual intervention. But AI has significant error rates, which can matter during a crisis. Moreover, adversaries tend to shift operations more quickly than AI adjusts, which requires either manual fine-tuning in real time or a shift toward more human-driven operations.
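
As a rough illustration of those two simple techniques, the sketch below checks media against a set of known-bad digests and text against maintained keyword patterns. The digest and phrase are placeholders, and the sketch demonstrates the limitation described above: exact matching only catches material that is already on the list.

```python
# Minimal sketch of exact hash-matching and keyword matching.
import hashlib
import re

# SHA-256 digests of previously reviewed, violating media (placeholder value).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

# Phrases maintained by policy and intelligence teams (placeholder pattern).
KEYWORD_PATTERNS = [
    re.compile(r"\bexample banned slogan\b", re.IGNORECASE),
]


def media_matches_known_bad(media_bytes: bytes) -> bool:
    """Exact match only: any re-encode or crop changes the digest."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD_HASHES


def text_matches_keywords(text: str) -> bool:
    """Literal phrase matching; novel slang or misspellings slip through."""
    return any(pattern.search(text) for pattern in KEYWORD_PATTERNS)
```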

Intelligence-driven Operations

Platforms also run intelligence-driven operations to identify threats as they emerge on other platforms. During my tenure at Meta (then Facebook), my team developed a program to collect Islamic State propaganda from Telegram, review it against platform policies, and prepare hashes of violating material – all before that content had even been posted to Facebook. This allowed the team to identify ISIS propaganda with a high degree of certainty as soon as it was posted. Not every intelligence-driven operation needs to run so efficiently, but such techniques are often very useful because nefarious actors often prepare their activities on one platform before executing them on another.
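
The sketch below shows the pre-positioning idea in deliberately simplified form; it is not Meta's actual system, and the exact-hash approach and function names are illustrative assumptions. Material collected and reviewed off-platform is hashed into a blocklist so the very first on-platform upload can be matched immediately.

```python
# Illustrative sketch of pre-positioning hashes from off-platform intelligence.
import hashlib


def build_blocklist(reviewed_items: list[tuple[bytes, bool]]) -> set[str]:
    """reviewed_items: (media_bytes, violates_policy) pairs produced by
    human review of material collected from other platforms."""
    return {
        hashlib.sha256(media).hexdigest()
        for media, violates in reviewed_items
        if violates
    }


def on_upload(media_bytes: bytes, blocklist: set[str]) -> str:
    """A match means the media was reviewed before it ever appeared here,
    so action can be taken as soon as it is posted."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return "remove" if digest in blocklist else "allow"
```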

Recidivism

Political violence usually involves dedicated actors in the real world. When such organizations and individuals lose their accounts, they will often adapt their procedures and return to a platform. Platforms often need to develop automated systems to remove accounts that violate policies repeatedly and to identify new accounts that likely belong to previously removed actors.
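
One way such a recidivism check might work is sketched below, assuming hypothetical signup signals (a hashed email, a device ID) retained from prior removals. Production systems weigh far more features and typically route borderline matches to human review rather than removing accounts automatically.

```python
# Minimal sketch of scoring a new signup against signals retained from
# accounts previously removed for policy violations. Values are hypothetical.
REMOVED_ACCOUNT_SIGNALS = [
    {"email_hash": "sha256:aa11", "device_id": "dev-123", "payment_hash": None},
    {"email_hash": "sha256:bb22", "device_id": "dev-456", "payment_hash": "pay-9"},
]


def recidivism_score(new_signup: dict) -> float:
    """Highest fraction of retained signals from any removed account
    that the new signup shares (1.0 = full signal match)."""
    best = 0.0
    for old in REMOVED_ACCOUNT_SIGNALS:
        known = {k: v for k, v in old.items() if v is not None}
        if not known:
            continue
        shared = sum(1 for k, v in known.items() if new_signup.get(k) == v)
        best = max(best, shared / len(known))
    return best


def needs_review(new_signup: dict, threshold: float = 0.5) -> bool:
    """Flag likely returns of previously removed actors for closer inspection."""
    return recidivism_score(new_signup) >= threshold
```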

Extraordinary Measures

A platform’s principles are more important than its standard operating procedures. In crisis, platforms may aim to speed up decision-making by simplifying their enforcement posture. After the Christchurch attack, the terrorist’s livestream was downloaded, spliced, and re-uploaded millions of times. Facebook made the decision to remove all media derived from the video, even posts using stills that did not depict violence and were shared to condemn the attack. We simply could not review every post fast enough. It was the right decision to suppress an active propaganda campaign by supporters of a violent white supremacist, but it came at the cost of removing a significant amount of reporting and counterspeech. In a recent blog post describing its approach to the October 7 Hamas attack, Meta suggests it is making similar compromises in the current conflict: it is not applying strikes to accounts that violate some platform rules, presumably because of an increased risk of false positives.

Cross-Industry Resources Can Help Platforms

There are cross-industry resources available. The Global Internet Forum to Counter Terrorism (GIFCT) provides training, enforcement resources, and collaboration mechanisms for digital platforms. GIFCT is best known for its hash-sharing database: a mechanism for sharing digital fingerprints of known terrorist propaganda, manifestos, and URLs where noxious material is shared. Platforms can safely tap into this database for hashes of terrorist content and use them to detect such material on their own services.
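
As a rough sketch of how a platform might consume such a list, assume the hashes have already been retrieved and stored locally (the GIFCT access mechanism itself is not shown) and that they are perceptual hashes of equal bit length, so near-duplicates such as re-encodes and minor crops can be matched by Hamming distance rather than requiring an exact digest match.

```python
# Minimal sketch of matching an upload's perceptual hash against a locally
# stored copy of a shared hash list. Hash values and the distance threshold
# are illustrative.
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Number of differing bits between two equal-length hex hash strings."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")


def matches_shared_list(
    upload_hash: str, shared_hashes: list[str], max_distance: int = 10
) -> bool:
    """Flag the upload if it is within max_distance bits of any known hash."""
    return any(
        hamming_distance(upload_hash, known) <= max_distance
        for known in shared_hashes
    )
```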

GIFCT also has protocols for communicating among member platforms during a crisis. These range from relatively simple situation updates and facilitated bilateral conversations between platforms to the Content Incident Protocol (CIP), which enables cross-industry hash-sharing and regular updates to cross-sector partners such as governments and NGOs.

NGOs and researchers can also be critical allies. Tech Against Terrorism is a British non-profit that specializes in advising technology companies looking to improve their preparedness for dealing with terrorism. Many companies also join the Christchurch Call, a political coalition convened by the governments of New Zealand and France in the wake of the Christchurch terrorist attack. 

There are also specialized conferences focused on how political violence manifests online. The Terrorism and Social Media (TASM) conference at Swansea University brings together academics who study terrorism and political violence with practitioners from social media platforms. Unlike at many Trust & Safety gatherings, the academics at TASM tend to focus first on the groups that conduct political violence and second on the medium itself.

The Inevitability of Trust & Safety

The current crisis illustrates, again, that Trust & Safety teams and tools are not nice-to-haves. Digital space is part of the battlespace, and managing real-world geopolitical crises responsibly is what both users and regulators expect of platforms. This crisis will test those Trust & Safety teams like very few moments before, though many companies are better prepared today than in past years. They will do much right and inevitably make some mistakes. Regardless, it is important to start learning lessons now, because as extraordinary as the present crisis seems, it will not be the last.
