Anyone who has been on a Trust & Safety team for more than 10 seconds has almost certainly experienced at least one reorg, if not many. Trust & Safety presents a uniquely difficult set of organizational challenges. In many cases, trust & safety starts as part of customer support and slowly (or quickly, after a headline-making event or two) evolves into its own department. And though some companies are quicker than others to allocate product and engineering resources toward trust & safety, finding the right cross-functional approach remains a challenge.
In the last 8 months, I have either worked directly with or done discovery sessions with dozens of trust & safety teams - and every one is organized a little differently. Nonetheless, one theme emerged: a disciplinary triad of policy, operations, and product. (For the purposes of this post, I am including engineering under product.) These three disciplines must work together to drive trust & safety strategies forward. How they work together varies greatly - for better and for worse. I’ve witnessed friction and frustration on under-resourced teams. I’ve witnessed streamlined, efficient solutions on teams that work together harmoniously. And I’ve witnessed everything in between. I’ve attempted to codify how these teams can intersect to maximize efficiency.
My hope is that by recognizing and understanding the interconnected roles that policy, product, and operations departments play, teams will be empowered to enhance cross-functional communication, anticipate and prepare for the downstream effects of another team’s changes, and work together more effectively.
The Funnel
The ultimate goal of Trust & Safety teams is to create effective, efficient, and equitable platform governance that ensures platforms are used for their intended purpose and in a manner that mitigates harm.
In a dream world, teams would be able to thoroughly vet every user joining a platform and would have the time and bandwidth to investigate even a hint of suspicious behavior. But, since it's not feasible to vet every actor or piece of content, platforms must clearly define prohibited behaviors and then identify the actors and content that violate these standards. The establishment of these guidelines and standards is the responsibility of the Policy team.
Policy: Defining The Filter
Policy is the cornerstone of trust & safety: this team determines what content or behavior is acceptable and what the repercussions are for misuse of the platform. They are faced daily with the daunting challenge of balancing free expression with an imperative to maintain a safe, respectful, and legally compliant online environment. This involves staying abreast of evolving legal requirements, social norms, and user expectations. This is an extraordinarily difficult challenge because policy is inevitably too blunt - it can never capture all of the context shaping interactions between individuals, even online. Moreover, it is extremely difficult to build operational policy that is enforceable across different cultural and political contexts.
Policy’s role in the funnel is to define what a trust & safety team needs to filter for, which brings us to the top of the funnel. Enter Product.
Product: The First Line of Defense
If Policy defines what to look for and how to act on it, it's (fittingly) Product’s responsibility to productionize those requirements.
To do this, Product teams typically rely on a combination of AI and ML models - leveraging pattern recognition, text and image analysis, and user reports (among other signals) - to detect behavior or content that doesn’t align with platform policy. Product’s mechanisms are powerful but imperfect. In March of 2020, lockdowns forced YouTube to send its human moderators home, leaving the platform to rely entirely on automated systems for moderation - or as they put it in a 2020 blog post:
“When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement…Because responsibility is our top priority, we chose the latter—using technology to help with some of the work normally done by reviewers. The result was an increase in the number of videos removed from YouTube; more than double the number of videos we removed in the previous quarter. For certain sensitive policy areas, such as violent extremism and child safety, we accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible. This also means that, in these areas specifically, a higher amount of content that does not violate our policies was also removed.”
YouTube saw a 3x increase in content takedowns between Q1 and Q2 of 2020 - and the rate of accepted appeals doubled over the same period. (An accepted appeal is when a user appeals a takedown decision and YouTube acknowledges the content is non-violative and reinstates it.) This tells us that an algorithm casting a wider net and reducing the volume of manual review does not necessarily reduce operations work - it simply displaces that work. Not only did the total number of appeals double for YouTube, the rate of accepted appeals doubled as well. The cost of casting a wider but less refined net - the cost of getting it wrong, of a false positive - is an appeal ticket and a bad user experience.
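To make that tradeoff concrete, here is a minimal sketch with made-up scores and thresholds (nothing here reflects YouTube’s actual models): lowering the auto-removal threshold catches more violative content, but the wrongful removals it produces become the appeal tickets that land back on Operations.

```python
# A toy illustration (not any platform's real system) of how lowering an
# auto-removal confidence threshold trades precision for recall: more
# violative content is caught, but more benign content is swept up too,
# and every wrongful removal is a potential appeal ticket.
import random

random.seed(0)

# Hypothetical classifier scores: violative content tends to score high,
# benign content tends to score low, with plenty of overlap between the two.
violative_scores = [random.betavariate(6, 2) for _ in range(1_000)]
benign_scores = [random.betavariate(2, 6) for _ in range(50_000)]

def takedown_stats(threshold):
    caught = sum(score >= threshold for score in violative_scores)
    wrongful = sum(score >= threshold for score in benign_scores)
    return caught, wrongful

for threshold in (0.9, 0.7):  # "normal" vs. "over-enforcement" regime
    caught, wrongful = takedown_stats(threshold)
    print(f"threshold={threshold}: total removals={caught + wrongful}, "
          f"violative caught={caught}, wrongful removals (appeal risk)={wrongful}")
```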
Detection relies on leveraging signals to make moderation decisions. When there is not enough signal to make a decision, Product must rely on a human to fill in the blanks. However, that human does not always need to be a moderator. In many instances, Product can shift the moderation onus to the user themselves - reducing manual moderation work and, often, expediting the time to review.
Leaning on users to provide additional signals is an increasingly common tactic for Product teams trying to reduce moderation work down funnel. This might manifest as prompting potentially underage users with mandatory ID Verification or requiring potentially fraudulent users to Selfie-Verify. Tinder’s “Does this Bother You?” feature gathers additional signal by asking users on the receiving end of a potentially abusive message whether the message was in fact offensive.
And soliciting more signal to reduce down funnel moderation isn’t where it ends. Product can also create features that disrupt abuse before it happens. Instagram uses AI to identify potentially harmful comments and, before publishing them, asks the user: “Are you sure you want to post this?” This proved to be pretty effective: an internal study of 70,000 Instagram users saw a 30% decrease in users sending harmful messages.
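As a rough sketch of where a nudge like this sits in the posting flow - the classifier, threshold, and copy below are stand-ins, not Instagram’s actual implementation:

```python
# A simplified pre-publish nudge. `toxicity_score` stands in for whatever
# harmful-content classifier a platform runs; the threshold and the flow
# are illustrative, not any platform's real values.
NUDGE_THRESHOLD = 0.8  # hypothetical score above which we ask before posting

def submit_comment(text: str, toxicity_score: float, confirm_anyway) -> bool:
    """Return True if the comment is published, False if the author withdraws it."""
    if toxicity_score >= NUDGE_THRESHOLD:
        # "Are you sure you want to post this?" - a nudge, not a removal.
        # The author can still choose to publish.
        if not confirm_anyway(text):
            return False
    return True

# Example: the UI callback asks the author to confirm, and they back out.
published = submit_comment(
    "example draft comment",
    toxicity_score=0.92,
    confirm_anyway=lambda text: False,  # the author decides not to post
)
```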
Nudging the user can be effective. This tactic is often conflated with that of warnings - wherein a platform issues an often patronizing reminder of the code of conduct. Tone and copy matter. I’ve often thought platforms should stop using the terminology of “warnings” and reframe these as educational moments: opportunities to realign a user with a platform’s community guidelines. Product teams have a unique opportunity to coach or educate users to adopt better behaviors, and in doing so, reduce future moderation work.
The relationship between Product and Operations is intrinsically symbiotic. Product whittles down the volume of work by automatically removing high-confidence bad actors or content, soliciting additional signals to compensate for precision gaps, shifting the moderation onus to the user, and leveraging confidence thresholds to prioritize reviews. But Product cannot implement Policy alone - there must be humans in the loop.
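Put together, the Product side of the funnel looks roughly like the triage sketch below. The thresholds and actions are hypothetical placeholders; the point is the shape: act automatically at high confidence, ask the user for more signal in the uncertain middle, and send what remains to a human review queue prioritized by confidence.

```python
# A hypothetical triage routine for the Product side of the funnel.
# Thresholds and actions are illustrative placeholders, not a real system.
from enum import Enum, auto

class Action(Enum):
    AUTO_REMOVE = auto()          # high-confidence violation: act automatically
    REQUEST_USER_SIGNAL = auto()  # uncertain: shift the moderation onus to the user
    HUMAN_REVIEW = auto()         # still ambiguous: queue for a moderator
    ALLOW = auto()                # low confidence of harm: leave it alone

AUTO_REMOVE_THRESHOLD = 0.95
UNCERTAIN_THRESHOLD = 0.40

def triage(violation_score: float, can_ask_user: bool) -> Action:
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return Action.AUTO_REMOVE
    if violation_score < UNCERTAIN_THRESHOLD:
        return Action.ALLOW
    # Uncertain band: prefer soliciting more signal from the user (ID
    # verification, selfie verification, a "Does this bother you?" prompt)
    # before spending moderator time.
    if can_ask_user:
        return Action.REQUEST_USER_SIGNAL
    return Action.HUMAN_REVIEW

# Items routed to HUMAN_REVIEW can then be ordered by violation_score, so
# moderators spend their time where the model is most suspicious.
```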
External Agents: Humans in the Loop
At this point in the funnel, we enter the realm of hourly employees. Any content that isn't filtered out by Product's automated systems becomes a matter for human moderators to address, which introduces the factor of cost per enforcement. At many companies, the first line of operations is outsourced to Business Process Outsourcing (BPO) firms or external agents. This approach is often more cost-effective, as external agents typically come at a lower cost than in-house staff. Internal Operations teams are responsible for translating Policy into clear, actionable moderation guidelines. This ensures that external agents are equipped with the information they need to make accurate policy enforcement decisions, maintaining the integrity of the platform while optimizing operational costs.
However, external agents are not equipped to handle every situation. There are limitations in terms of the data they can access and the complexity of cases they can resolve. When faced with highly sensitive or severe cases, or when the required information exceeds their access privileges, external agents must escalate these issues to internal teams.
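In practice, the handoff often reduces to a routing rule along these lines - a sketch with made-up case types, severity tiers, and access rules rather than any specific platform’s escalation policy:

```python
# A hypothetical escalation check for an external (BPO) moderation queue.
# Case types, the severity scale, and access rules are illustrative only.
SENSITIVE_CASE_TYPES = {"child_safety", "credible_threat", "law_enforcement_request"}
MAX_EXTERNAL_SEVERITY = 3  # hypothetical ceiling on a 1-5 severity scale

def should_escalate(case_type: str, severity: int, requires_internal_data: bool,
                    trained_case_types: set[str]) -> bool:
    """Return True when a case must leave the external queue for an internal team."""
    if case_type in SENSITIVE_CASE_TYPES:
        return True  # highly sensitive or severe categories always go internal
    if requires_internal_data:
        return True  # the case exceeds the external agent's access privileges
    if case_type not in trained_case_types:
        return True  # a training gap - worth tracking and fixing upstream
    return severity > MAX_EXTERNAL_SEVERITY
```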
The escalation of decisions to internal teams can also be prompted by gaps in the knowledge or training of external agents. Effectively identifying and remedying these gaps to reduce escalation rates is an essential method for decreasing volume down the funnel.
Internal Teams: The Final Frontier
Only the most complex or sensitive decisions should reach this point in the funnel. Internal teams are responsible for handling these intricate and high-stakes issues, which often require a deeper understanding of the company's policies, legal considerations, and ethical implications. Their role involves critical analysis, crisis management, and sometimes direct communication with law enforcement or legal entities. These teams are also tasked with providing feedback and insights that inform policy updates and training programs, ensuring that the moderation process remains effective and up-to-date with evolving online dynamics and regulatory requirements. Their expertise and decision-making capabilities are essential in maintaining the integrity and safety of the platform, while also safeguarding the company's reputation and legal standing.
Automate Where Possible and Optimize the Rest
The funnel framework encourages cross-functional teams to work together to automate where possible and optimize the rest. I offer this framework as a way to kick off a collaborative thought exercise on how to structure trust & safety organizations for maximum efficiency. I want people to pick holes in it and surface gaps or inconsistencies. I hope to gather insights that will enable me to evolve this framework, so please send feedback my way.
Finally, I want to acknowledge that this post focused on cross-functional relationships and did not touch on privacy or fairness, which should not be any one department’s responsibility but rather foundational elements embedded across all departments and levels of an organization. Privacy and fairness, and their relationship to Trust & Safety, warrant their own future posts.