A slew of academic research, regulation, failed regulation, and even video games point to a truth that Trust & Safety practitioners have long understood: this stuff is complicated.
At Cinder, we conceptualize some of that complexity in the Decision Spectrum, a framework that arranges Trust & Safety decision types along a spectrum of complexity. In turn, those decision types manifest across a range of Trust & Safety activities - simple object enforcement, complex object enforcement, unstructured investigations, etc. In large Trust & Safety enterprises, those activities and decision types are often managed by different teams and organizations. In small enterprises they may be consolidated into a single, very busy individual. In either case, companies must determine what to prioritize, how to synthesize similar functions, and how to structure accountability overall.
I described many of those Trust & Safety functions in a long paper for the Brookings Institution, which targeted the policy community in Washington, D.C. The Digital Trust and Safety Partnership (DTSP) covers much of the same ground, and expands on it, with its definition of “Trust & Safety.” While the Brookings paper describes what might be called “reactive” Trust & Safety activities, it sidesteps elements that the DTSP rightly includes in its definition, such as product advisory work and red-teaming.
The DTSP developed a Glossary of Trust & Safety to set baseline definitions of key terms, improve cross-industry conversation, and help define the field for outsiders. In other words, help it mature. As that maturation occurs, we expect to see not just standard nomenclature, but increasingly standardized organizational structures and better-defined norms across industry. Will the same structure be applicable to every company? Of course not. Conceptual frameworks, like plans, do not survive first contact with the peculiarities of particular companies and products, and unique resource constraints. Nonetheless, they are mechanisms for prioritization and risk assessment which may be useful as more and more companies evolve Trust & Safety enterprises of their own. This post ties the functions of Trust & Safety to bureaucratic structures and offers recommendations for leaders considering different organizational models.
The responsibilities of “Trust & Safety” teams are not standard. Depending on how a company is organized internally, Trust & Safety may include functions sometimes assigned to neighboring teams, like compliance, customer success, legal, core product, and information security.
The Digital Trust & Safety Partnership’s definition of Trust & Safety is both broad and knowingly incomplete, saying that elements linked to Trust & Safety “include”: “defining policies, content moderation, rules enforcement and appeals, incident investigations, law enforcement responses, community management, and product support.”
I’ve consolidated some of those elements and added a few others to capture a superset of Trust & Safety functions.
Many readers will note that these functions overlap and cannot be cleanly separated. Indeed, that is why they are sometimes collected into a core Trust & Safety organization. But in larger organizations, divergent sensitivities and responsibilities can mean they are housed in different bureaucratic units. It makes sense, for example, that a large-company General Counsel would want ultimate authority over Law Enforcement Responses and Compliance, whereas a product lead might want to control Product Support. Whether these functions are consolidated or federated, Trust & Safety is fundamentally cross-functional.
Even when authority is distributed, however, these functions overlap and often depend on one another to succeed. Most of these functions depend on decisions reflected on Cinder’s Trust & Safety Decision Spectrum. Sometimes that is straightforward, as with scaled Content Moderation, Law Enforcement Response, and Incident Management.
But the framework applies more broadly as well. For example, sophisticated Policy Definition often involves substantial implementation testing (as OpenAI’s recent blog discussing the use of LLMs explores), and Product Support sometimes means data labeling to train recommendation algorithms and other machine learning models.
Of course, many companies do not formally execute on all of these Trust & Safety functions. Small teams may not have a real investigative capacity, for example, and comprehensive transparency reports are still not the rule - though regulation will increasingly require them.
Trust & Safety’s nest of functions is incredibly complex, and executives must decide how to organize them. Ultimately, that decision centers on three interrelated questions:
As a practical matter, Trust & Safety’s initial organizational home is often a function of the core business concern driving the development of the Trust & Safety capability. Historically, there are three core concerns:
These origin stories are interesting because they often do not align directly with the Policy Definition and Content Moderation tasks often considered central to modern Trust & Safety. In small companies, that may not matter: Trust & Safety efforts are concentrated in a small team (or individual). In larger companies, Trust & Safety efforts are inevitably cross-functional, whether they are concentrated in a single organization or distributed across a range of different teams.
The reporting chain for these teams varies widely. Unlike information security, where Chief Security Officers (CSOs) and Chief Information Security Officers (CISOs) are commonplace, Chief Trust & Safety Officers (CTSOs) are rare. Trust & Safety functions sometimes report up to the General Counsel because they grow out of compliance efforts, and in other situations to Chief Operating Officers (COOs), Chief Technology Officers (CTOs), or Chief Product Officers (CPOs) because of their large operational or product requirements.
Sometimes, the executive responsible for platform user retention and support, a title itself not well defined but often represented as a Chief Customer Officer or Chief Community Officer (CCO), becomes the owner of the Trust & Safety function. Why? Simply because they already own the teams moderating other aspects of platform health on a daily basis.
Larger teams may not have a singular Trust & Safety leader at all - and effectively operate like a military Joint Staff, where land, sea, and air power are largely managed by different military services. Long ago at Facebook, the Product Policy, Community Operations, and Engineering teams spoke of a “three-sided coin” to reflect the cross-functional and cross-organizational responsibility required for Trust & Safety to be a success. Yet even that effort at inclusivity did not capture critical functions within the Legal team to handle law enforcement requests and manage sanctions compliance programs, let alone investigative groups in the security team analyzing sophisticated adversarial networks. Meta has since consolidated some of this work, but key functions still fundamentally operate across different organizations.
Other companies may appoint a single Trust & Safety leader who heads a cross-functional unit that includes Policy Definition, Content Moderation, and the engineering necessary to make their efforts work. The crude analogy for this framework is the Marine Corps, which famously integrates land, sea, and air power in a single military service. In a Trust & Safety setting, such organizations may do the operational Investigations and Law Enforcement Response work, even if final authority for disclosing data lives with the legal team. The “Marines-like” structure may improve coordination across these various functions, but potentially at the cost of reducing the effectiveness of specific functions, because they are not housed within broader structures built to scale those capabilities.
I have seen both models work. When I arrived at Facebook in 2016, the Dangerous Organizations cross-functional team operated more on the Marines model - with dedicated engineering, operations, and policy capabilities. We were a sub-unit within the larger Trust & Safety enterprise that operated quite differently from most everyone else. It was inefficient, but effective. We made huge strides against the Islamic State because we could coordinate tightly and pivot quickly in response to adversarial shifts. But there were costs to that approach. Our systems were not always plugged into cross-cutting metrics and transparency tools, they could only be maintained by a limited set of engineers, and the whole effort was prohibitively costly to all but the wealthiest companies. Eventually, many of those capabilities were folded into their functional bureaucracies and we started to look more like the Joint Staff model. This was more efficient and more sustainable, but the group was less nimble as a result.
There is no single Trust & Safety structure optimal for all companies. But there are principles that all companies should consider as they develop Trust & Safety functions.
There is no single right way to build a Trust & Safety enterprise, but the tensions and tradeoffs facing the leaders constructing one are remarkably familiar. Regardless of the specific choices, Trust & Safety will remain fundamentally cross-functional, requiring collaboration across disciplines and teams focused on different parts of the Decision Spectrum. Leaders should weigh the tradeoffs inherent in any organizational structure - and build cross-functional collaboration from the ground up.