The Facts
Earlier this week, OpenAI announced ChatGPT Enterprise. Their enterprise product offers a few upgrades over the consumer-focused ChatGPT product:
No limit on GPT-4 use (the paid consumer plan caps usage at 50 messages every 3 hours).
Faster GPT-4 inference (they claim up to 2x faster than the consumer application).
A 32k-token context window for GPT-4, allowing it to process much larger documents.
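For a rough sense of what 32k tokens buys, here is a minimal sketch in Python. It assumes the tiktoken library and the cl100k_base encoding GPT-4 uses; the reply budget is an illustrative number, not anything OpenAI specifies.

```python
# Minimal sketch: will a document fit in GPT-4's 32k context window?
# Assumes the tiktoken library (pip install tiktoken).
import tiktoken

def count_tokens(text: str) -> int:
    """Count tokens the way GPT-4's tokenizer does."""
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

def fits_in_context(text: str, context_window: int = 32_768,
                    reply_budget: int = 1_024) -> bool:
    """Leave some of the window free for the model's reply."""
    return count_tokens(text) <= context_window - reply_budget
```

As a rule of thumb, a token is roughly three-quarters of an English word, so 32k tokens is on the order of 24,000 words, or around 50 pages of text.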
From a user’s perspective, the packaging is essentially the same: you log in and chat with ChatGPT. OpenAI will likely market the product around estimated productivity gains for knowledge workers (up to 40%).
OpenAI also introduced a number of enterprise features to ease the security concerns that many enterprises have voiced:
OpenAI announced SOC 2 compliance, including data encryption in transit and at rest.
OpenAI will not train on conversations with ChatGPT Enterprise.
Standard enterprise table stakes — SSO, admin panels, etc.
“Future support” for fine-tuning the chat models on your enterprise data.
Why it matters
The joke in Silicon Valley is that anytime OpenAI makes an announcement, they “kill” hundreds of startups that were working on that exact feature. While I think this is overstated, it is worth enumerating the obvious products OpenAI could build on the back of their success over the last nine months:
ChatGPT (consumer): a consumer productivity / entertainment tool that has seen massive adoption and generated significant revenue for OpenAI (estimated in the hundreds of millions of dollars). See also: CharacterAI for more of the entertainment side. Lots of potential to grow with plugins and integrations.
ChatGPT (enterprise): an enterprise productivity tool built for knowledge workers that accelerates a wide range of work (writing, emails, brainstorming, data analysis, etc.). Room to expand in some obvious directions: “chat with your documents”, “fine-tune on your data”, integrations with SaaS systems, and vertical-specific tooling.
Model-as-a-Service: a developer tooling play that provides model APIs developers can use to build AI-powered applications. Probably offers a spectrum of models at various performance, speed, and price points. Likely expands into offering fine-tuning as a service to customize models to specific domains. To sell to enterprises, it may need advanced deployment options like private networking and dedicated instances.
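To make the Model-as-a-Service idea concrete, here is a minimal sketch using OpenAI’s Python SDK as it exists today (the 0.x interface). The API key and prompt are placeholders; the point is that swapping the model name trades quality for speed and price.

```python
# Minimal sketch of the Model-as-a-Service pattern with the openai package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def complete(prompt: str, model: str = "gpt-4") -> str:
    """One chat completion; pass model="gpt-3.5-turbo" for a faster, cheaper tier."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(complete("Draft a polite follow-up email to a customer."))
```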
Some thoughts about those businesses:
The primary differentiator for all three is (at least right now) model quality. Having the best model probably means having the best product, which confers pricing power. If models converge over time, that advantage will erode.
ChatGPT Enterprise has another strong differentiator: integrations. Integrations are notoriously tricky, and being great at pulling in enterprise data and putting it to use will be crucial for whoever wins this space.
Model-as-a-Service for the enterprise is still largely underserved; most providers’ lack of on-prem capabilities is blocking adoption. Azure OpenAI and Claude 2 via AWS Bedrock are the best options right now. I expect the model providers to build this out quickly over the next 12 months.
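To illustrate what those deployment options look like in practice, here is a sketch of the same chat call routed through a private Azure OpenAI deployment (again using the 0.x openai SDK). The endpoint, deployment name, API version, and key are placeholders you would replace with your own resource’s values.

```python
# Sketch: a chat call against a private Azure OpenAI deployment.
# Endpoint, deployment name, and key are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # your Azure endpoint
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_KEY"

response = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",  # Azure routes by deployment name, not model name
    messages=[{"role": "user", "content": "Hello from inside the VNet."}],
)
print(response.choices[0].message.content)
```

The calls land on the enterprise’s own Azure resource, which can sit behind private networking: exactly the kind of control most model providers have yet to offer directly.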
My thoughts
Of late, I’ve seen a lot of startups addressing shortcomings in the offerings of OpenAI and other model providers: services like hosting OSS models on-premises or offering a “private” ChatGPT. It’s tempting to go after these opportunities; there is real, unmet demand in the market. But I think these approaches are a bit of a trap. Over time, model providers will almost certainly fix the obvious holes in their offerings and bully startups out of the space.