Yesterday, OpenAI shared how they leverage GPT-4 for content moderation. It’s exciting to see LLM companies like OpenAI share best practices for how to use their technology to build better policy enforcement. We've already seen bad actors leverage LLMs to their advantage, so it's imperative that Trust & Safety teams do the same to stay ahead. And critically, OpenAI correctly called out that LLM-based approaches can learn the nuances of each platform’s specific set of policies, bringing us closer to a world in which platforms can build truly custom solutions that solve their unique needs.
That’s exactly why we built Cove AI: we want each platform to build Trust & Safety solutions that are tailored to their needs, and we think fine-tuned, human-guided LLMs are the best way to do that. This is how we envision the future of Trust & Safety - policy experts guiding LLMs to understand linguistic and policy nuances at scale.
Interestingly, OpenAI’s blog post walks through a similar workflow to what we’ve built with Cove AI. The results are comparable, but the effort and cost required are very different:
How do the Cove AI and OpenAI approaches differ?
Here’s an overview of both approaches:
You’ll notice a few key differences:
- Cove AI doesn’t ask the user to iteratively tweak the policy definition or wrestle with GPT-4’s mistakes. We’ve built ML infrastructure that takes the user’s small set of labels and automatically corrects the model’s mistakes.
- Cove offers a UI for this entire workflow, optimized for speed and ease-of-use so you don’t have to write code or master prompt engineering.
- Cove gives you all the post-launch tools out of the box, including:
-> Instant integration into your automated enforcement system (through Cove’s Automated Rules)
-> Measurement and monitoring
-> Rapid response capabilities - when new types of harm arise on your platform, you can build and tweak models and have them deployed in minutes.
-> Your models improve automatically as moderators make new decisions (with your oversight)
How can Cove AI be 100x cheaper than OpenAI?

As the OpenAI team mentions in the blog post, relying on GPT-4 can get prohibitively expensive very quickly. To demonstrate, let’s do the math!
Let’s say you run a user-generated content platform. Here are a few assumptions about your platform:
- The average length of content on your platform is 40 words, which comes out to slightly more than 50 OpenAI tokens. (This, of course, varies widely across platforms, and is likely an underestimate for any platform that isn’t DM-heavy.)
- You have a set of content policies that prohibit harmful behavior, including harms like hate speech, violence, harassment, etc. You define each of these policies for your users and post them on your website so users know exactly what you allow and what you don’t. These policies have to be both succinct and sufficiently nuanced, so on average they’re 200 words each, which is roughly 270 OpenAI tokens.
- When you want to determine whether a piece of content violates a given policy, you need to construct a prompt that tells GPT-4 what you’re asking it to do - something like “Please tell me whether this comment violates the following policy definition. Provide a yes or no answer.” This prompt is roughly 20 OpenAI tokens.
- The cost of GPT-4’s output is negligible.
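To make the workflow concrete, here’s a minimal sketch of assembling the kind of per-policy prompt described above. The instruction wording comes from the example in this post; the helper name, policy text, and comment are illustrative placeholders, not Cove’s or OpenAI’s actual implementation:

```python
# Sketch of assembling a per-policy moderation prompt, per the example above.
# The policy definition and sample comment below are illustrative only.

INSTRUCTION = (
    "Please tell me whether this comment violates the following "
    "policy definition. Provide a yes or no answer."
)

def build_moderation_prompt(policy_definition: str, content: str) -> str:
    """Combine the fixed instruction, one policy definition, and one piece
    of user-generated content into a single prompt for the model."""
    return (
        f"{INSTRUCTION}\n\n"
        f"Policy:\n{policy_definition}\n\n"
        f"Comment:\n{content}"
    )

prompt = build_moderation_prompt(
    policy_definition="Harassment: targeted insults or threats toward a person.",
    content="You're the worst, nobody wants you here.",
)
print(prompt)
```

Note that checking one piece of content against N policies means building and sending N of these prompts, which is exactly where the cost adds up.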
So, with every piece of content we run through GPT-4, we’re sending roughly 50 + 270 + 20 = 340 tokens into the model. At a price of $0.03 per 1K tokens, that amounts to about $0.01 per policy check.

Most platforms have at least 10-15 policies, so for each piece of content on your platform, if you want to enforce all your policies, you’d have to pay at least $0.10.
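The arithmetic above can be written out as a quick back-of-the-envelope script, using the token counts and GPT-4 input pricing assumed earlier in this post:

```python
# Back-of-the-envelope GPT-4 moderation cost, using the assumptions above:
# ~50 tokens of content, ~270 tokens of policy definition, ~20 tokens of
# instructions, at $0.03 per 1K input tokens.

CONTENT_TOKENS = 50
POLICY_TOKENS = 270
INSTRUCTION_TOKENS = 20
PRICE_PER_1K_TOKENS = 0.03  # USD, GPT-4 input pricing assumed in this post

# One prompt = one piece of content checked against one policy.
tokens_per_check = CONTENT_TOKENS + POLICY_TOKENS + INSTRUCTION_TOKENS  # 340
cost_per_check = tokens_per_check / 1000 * PRICE_PER_1K_TOKENS          # ~$0.0102

# Enforcing a full policy set multiplies that by the number of policies.
num_policies = 10  # lower bound; most platforms have 10-15
cost_per_item = cost_per_check * num_policies                           # ~$0.10

print(f"{tokens_per_check} tokens, ~${cost_per_check:.4f} per policy check")
print(f"~${cost_per_item:.2f} per piece of content across {num_policies} policies")
```

At millions of pieces of content per day, that per-item cost is what makes the raw-GPT-4 approach prohibitively expensive.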
That’s a lot! And it’s roughly 100x more expensive than Cove AI. At Cove AI, we’ve figured out how to harness the value of GPT-4 and other leading LLMs, and distill it down to a lighter-weight product that achieves the same outcomes at a drastically lower cost.
We’re always excited to see how Cove AI can help platforms with real-world use cases, so please schedule a demo with us if you’d like to learn more!