A slew of academic research, regulation, failed regulation, and even video games points to a truth that Trust & Safety practitioners have long understood: this stuff is complicated.

At Cinder, we capture some of that complexity in the Decision Spectrum, a framework that arranges Trust & Safety decision types along a spectrum of complexity. In turn, those decision types manifest across a range of Trust & Safety activities - simple object enforcement, complex object enforcement, unstructured investigations, and so on. In large Trust & Safety enterprises, those activities and decision types are often managed by different teams and organizations. In small enterprises they may be consolidated into a single, very busy individual. In either case, companies must determine what to prioritize, how to synthesize similar functions, and how to structure accountability overall.

I described many of those Trust & Safety functions in a long paper for the Brookings Institution, which targeted the policy community in Washington, D.C. The Digital Trust and Safety Partnership (DTSP) covers much of the same ground, and expands on it, with its definition of “Trust & Safety.” The Brookings paper describes what might be called “reactive” Trust & Safety activities, but it sidesteps elements that the DTSP rightly includes in its definition, such as product advisory work and red-teaming.

The DTSP developed a Glossary of Trust & Safety to set baseline definitions of key terms, improve cross-industry conversation, and help define the field for outsiders - in other words, to help it mature. As that maturation occurs, we expect to see not just standard nomenclature but increasingly standardized organizational structures and better-defined norms across the industry. Will the same structure be applicable to every company? Of course not. Conceptual frameworks, like plans, do not survive first contact with the peculiarities of particular companies, products, and resource constraints. Nonetheless, they are mechanisms for prioritization and risk assessment that may be useful as more and more companies build Trust & Safety enterprises of their own. This post ties the functions of Trust & Safety to bureaucratic structures and offers recommendations for leaders considering different organizational models.

Trust & Safety Functions

The responsibilities of “Trust & Safety” teams are not standard. Depending on how a company is organized internally, Trust & Safety may include functions sometimes assigned to neighboring teams, like compliance, customer success, legal, core product, and information security.

The Digital Trust & Safety Partnership’s definition of Trust & Safety is both broad and knowingly incomplete, saying that elements linked to Trust & Safety “include”: “defining policies, content moderation, rules enforcement and appeals, incident investigations, law enforcement responses, community management, and product support.”

I’ve consolidated some of those elements and added a few others to capture a superset of Trust & Safety functions.

  • Defining Policy: Includes normative development, drafting, definition of enforcement actions, and testing.
  • Content Moderation + Rules Enforcement and Appeals: Includes detection, record-keeping, a range of decision types (from simple labeling to more complex investigations), and the application of different enforcement actions.
  • Law Enforcement Responses: Includes management of law enforcement requests for user information.
  • Community Management: Includes informing the community about rules, norm-setting, and, potentially, providing users tools to manage their own experience. 
  • Compliance: Includes everything from management of the Digital Millennium Copyright Act (DMCA) to complex sanctions enforcement and the Digital Services Act (DSA). These functions are sometimes executed by legal organizations separately from Trust & Safety organizations focused on implementing platform guidelines, but they rely on similar tools and processes.
  • Incident Investigations: Investigative teams sometimes grow out of Legal or Information Security organizations because they are focused on high-end, adversarial actors. But these teams often must collaborate with more traditional Policy Enforcement processes, which generate leads for deeper investigation. 
  • Product Support: Product advisory often includes everything from “Responsible Tech” organizations investigating issues like algorithmic fairness to research teams assessing social impact and red-team organizations dedicated to thinking through how adversaries will exploit well-intentioned products. This also often includes efforts to empower users to configure preferences that reflect personalized Trust & Safety predilections.
  • Fraud & Account Takeover: Attacks that exploit network and computing infrastructure are usually considered “Cyber” issues rather than Trust & Safety ones. But “softer” attacks, including phishing, other forms of fraud, and account takeover, fall into a middle ground.
  • Transparency: Includes both direct user engagement and broader transparency reports. 
  • Managing Quality: Marketplaces and creator platforms must manage the quality of the goods and services sold on their applications. This quality control requires activities very similar to Trust & Safety, but is often organized separately. 

Many readers will note that these functions overlap and cannot be fully distinguished from one another. Indeed, that is why they are sometimes collected into a core Trust & Safety organization. But in larger organizations, divergent sensitivities and responsibilities can mean they are housed in different bureaucratic units. It makes sense, for example, that a large-company General Counsel would want ultimate authority over Law Enforcement Responses and Compliance, whereas a product lead might want to control Product Support. Whether these functions are consolidated or federated, Trust & Safety is fundamentally cross-functional.

Even when authority is distributed, however, these functions overlap and often depend on one another to succeed. Most of them depend on decisions reflected on Cinder’s Trust & Safety Decision Spectrum. Sometimes the connection is straightforward, as with scaled Content Moderation, Law Enforcement Response, and Incident Management.

But the framework applies more broadly as well. For example, sophisticated Policy Definition often involves substantial implementation testing (as OpenAI’s recent blog discussing the use of LLMs explores), and Product Support sometimes means data labeling to train recommendation algorithms and other machine learning models. 
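To make the policy-testing idea concrete, here is a minimal sketch of what implementation testing might look like: run reviewer-labeled examples through a model applying a draft policy and measure agreement. The call_model helper, the sample policy text, and the label names below are hypothetical placeholders, not a description of OpenAI’s approach or any particular company’s pipeline.

```python
# Minimal sketch: testing a draft policy against reviewer-labeled examples.
# `call_model`, the policy text, and the labels are hypothetical placeholders;
# swap in whatever model client and taxonomy your team actually uses.
from dataclasses import dataclass


@dataclass
class Example:
    content: str
    expected_label: str  # e.g. "violating" or "non_violating", per human review


DRAFT_POLICY = (
    "Content that praises, supports, or represents a designated dangerous "
    "organization is not allowed."
)


def call_model(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API your team uses."""
    raise NotImplementedError("plug in your model client here")


def classify(policy: str, content: str) -> str:
    """Ask the model to apply the draft policy to a single piece of content."""
    prompt = (
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        "Answer with exactly one word: violating or non_violating."
    )
    return call_model(prompt).strip().lower()


def test_policy(policy: str, examples: list[Example]) -> float:
    """Return the share of labeled examples where the model agrees with reviewers."""
    agreements = sum(
        classify(policy, ex.content) == ex.expected_label for ex in examples
    )
    return agreements / len(examples)
```

Low agreement on a set like this is a useful signal that the policy language is ambiguous before it ever reaches production enforcement queues.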

Of course, many companies do not formally execute on all of these Trust & Safety functions. Small teams may not have a real investigative capacity, for example, and comprehensive transparency reports are still not the rule - though regulation will increasingly require them.

Trust & Safety Enterprise Structure

Trust & Safety’s nest of functions is incredibly complex, and executives must decide how to organize those functions. Ultimately, that decision centers on three interrelated questions:

  • Should Trust & Safety operate as an independent organization or a sub-unit within another group?
  • Which executive is responsible for Trust & Safety?
  • Should Trust & Safety be organized like the Joint Staff or the Marines? 

As a practical matter, Trust & Safety’s initial organizational home is often a function of the core business concern driving the development of the Trust & Safety capability. Historically, there are three core concerns: 

  • Legal obligations (example: copyright enforcement and compliance)
  • Product concerns (example: harassment on a dating site)
  • Customer support (example: complaints about account takeovers)

These origin stories are interesting because they often do not align directly with the Policy Definition and Content Moderation tasks considered central to modern Trust & Safety. In small companies, that may not matter: Trust & Safety efforts are concentrated in a small team (or a single individual). In larger companies, Trust & Safety efforts are inevitably cross-functional, whether they are concentrated in a single organization or distributed across a range of different teams.

The reporting chain for these teams varies widely. Unlike information security, where Chief Security Officers (CSOs) and Chief Information Security Officers (CISOs) are commonplace, Chief Trust & Safety Officers (CTSOs) are rare. Trust & Safety functions sometimes report to the General Counsel because they grew out of compliance efforts, and in other cases to Chief Operating Officers (COOs), Chief Technology Officers (CTOs), or Chief Product Officers (CPOs) because of large operational or product requirements.

Sometimes the executive responsible for platform user retention and support - a role that is itself not well defined but often carries the title of Chief Customer Officer or Chief Community Officer (CCO) - becomes the owner of the Trust & Safety function. Why? Simply because they already own the teams moderating other aspects of platform health on a daily basis.

Larger teams may not have a singular Trust & Safety leader at all - and effectively operate like a military Joint Staff, where land, sea, and air power are largely managed by different military services. Long ago at Facebook, the Product Policy, Community Operations, and Engineering teams spoke of a “three-sided coin” to reflect the cross-functional and cross-organizational responsibility required for Trust & Safety to succeed. Yet even that effort at inclusivity did not capture critical functions within the Legal team to manage law enforcement requests and sanctions compliance programs, let alone investigative groups in the security team analyzing sophisticated adversarial networks. Meta has since consolidated some of this work, but key functions still fundamentally operate across different organizations.

Other companies may create a singular Trust & Safety leader who oversees a cross-functional unit that includes Policy Definition, Content Moderation, and the engineering necessary to make their efforts work. The crude analogy for this framework is the Marine Corps, which famously integrates land, sea, and air power in a single military service. In a Trust & Safety setting, such organizations may do the operational Investigations and Law Enforcement Response work, even if the final authority for disclosing data lives with the legal team. The “Marines-like” structure may improve coordination across these various functions, but potentially at the cost of the individual functions’ effectiveness, because they are not housed within broader structures built to scale those capabilities.

I have seen both models work. When I arrived at Facebook in 2016, the Dangerous Organizations cross-functional team operated more on the Marines model - with dedicated engineering, operations, and policy capabilities. We were a sub-unit within the larger Trust & Safety enterprise that operated quite differently from almost everyone else. It was inefficient, but effective. We made huge strides against the Islamic State because we could coordinate tightly and pivot quickly in response to adversarial shifts. But there were costs to that approach. Our systems were not always plugged into cross-cutting metrics and transparency tools, they could only be maintained by a limited set of engineers, and the whole effort was prohibitively costly to all but the wealthiest companies. Eventually, many of those capabilities were folded into their functional bureaucracies and we started to look more like the Joint Staff model. This was more efficient and more sustainable, but the group was less nimble as a result.

Conclusions

There is no single Trust & Safety structure optimal for all companies. But there are principles that all companies should consider as they develop Trust & Safety functions.

  • Trust & Safety should be someone’s ONLY job. Trust & Safety should be someone’s full-time responsibility, and that person should either operate at the C-Suite level or report into a division (Legal, Operations, Security, etc.) that is represented at the C-Suite. Name a Head of Trust & Safety even if that role is not especially senior.
  • Trust & Safety is fundamentally cross-functional. To be successful, Trust & Safety leaders must be empowered to coordinate and use legal, operations, and engineering capabilities. So, wherever that Head of Trust & Safety sits, empower them to draw resources from multiple functions - or expect them to fail. That means:
  • Joint Staff or Marine Corps. If your Head of Trust & Safety does not look after a cross-functional organization of their own, you should explicitly create a cross-functional “Joint Staff” drawn from legal, engineering, operations, and other teams to support core initiatives and ensure collaboration.
  • Recognize that the Marine Corps model is never absolute. Many Trust & Safety organizations consolidate functions in a single organization, but this will never be complete. Your Legal team will need final say over law enforcement disclosure and regulatory risk; public policy will have a say on DSA compliance; security should care about account takeover risks.
  • Empower Collaboration. Insist on joint technical infrastructure, internal information-sharing, and robust tooling. If your company has users and operates at any kind of scale, your Trust & Safety challenges will grow complex. Make sure you have cross-cutting infrastructure because the challenges will transcend bureaucracies. 
  • Invest in tools. Trust & Safety teams rarely have the tools they need to be effective. Measure the investment in those tools - whether you build them with your own engineering resources or buy from the growing market of solutions - against regulatory, reputational, and moral risk.
  • Build your Trust & Safety organization thoughtfully. Some functions are closely aligned. A product-support function designed to imagine product risk can be spun out into a dedicated red-team function as the team grows. Policy Development, Operations, and Quality Management are closely related; they can roll up to one individual in a pinch. So can Quality Management and Regulatory Compliance. Investigative teams can be housed in various organizations; the important issue is that they must not be measured against the same metrics as traditional operational teams, which operate at far higher scale.
  • Incentivize Engineering. The hardest resourcing question for Trust & Safety teams always involves engineering. Engineering resources are hard to deploy because Trust & Safety is treated as a cost center - even when it supports core product goals. Such resources are even harder to sustain: engineers are often assigned to a specific Trust & Safety task and then reallocated, which complicates building cross-functional tools with long development cycles. Sustaining focus is the argument for placing Trust & Safety Engineering in its own unit directly within the Trust & Safety enterprise. The counter-argument is that engineers may want to operate within a larger engineering organization with broader opportunities. Whatever the case, Trust & Safety engineers should not be held to the same performance metrics as others, because those standards are almost always built around traditional product development rather than Trust & Safety.

There is no single right way to build a Trust & Safety enterprise, but the tensions and tradeoffs facing the leaders who construct these enterprises are remarkably familiar. Regardless of the specific choices, Trust & Safety will remain fundamentally cross-functional, which means collaboration across disciplines and across teams focused on different parts of the Decision Spectrum. Leaders should consider the tradeoffs inherent in any organizational structure - and the need to build cross-functional collaboration from the ground up.
