Google Cloud VP Darren Mowry drew the line. "LLM wrappers? Aggregators? The industry no longer has patience." Every time OpenAI ships a new flagship model like GPT-5, dozens of startups lose their reason to exist. Yet in the same period, Harvey hit an $11B valuation and Glean hit $7.2B. What's different?

3-Second Summary
- General AI absorbs features; LLM wrappers die; only vertical AI survives
- Survival criteria: data moat + workflow ownership + outcome accountability
- Harvey, Glean, and Cursor prove it

What Is This?

"Vertical AI" means AI products specialized for a specific industry or job function. Legal (Harvey), enterprise search (Glean), coding (Cursor), and so on. The opposite, "general AI," is AI that tries to do everything — like OpenAI's ChatGPT or Google's Gemini.

The problem is that as general AI gets better, "wrapper" startups that just slap a UI on top of GPT are losing their footing. Google VP Darren Mowry nailed it: "If the backend model does all the work and you're just white-labeling it, the industry no longer has patience." A differentiator from six months ago is now a built-in feature of GPT-5 or Gemini 2.0.

VC Cafe called this "The Ugly." Many funded AI startups are nothing more than thin wrappers on GPT, and while they had early revenue, they'll have to pivot, get acquired, or shut down as the models themselves absorb their features. In practice, Jasper.ai took a serious hit when OpenAI directly enhanced its own copywriting capabilities.

What's interesting though — in the same period, vertical AI startups raised over $15B in 2025 alone. The weak die, the strong explode. Polarization is happening.

Mowry's Two Types Headed for Extinction

1. LLM Wrappers: Startups where the entire value is a UI on top of GPT/Claude/Gemini. When the model upgrades, they lose their reason to exist.
2. AI Aggregators: Orchestration layers bundling multiple LLMs into one interface. Value plummets as model providers build multimodal and agent capabilities directly.

What Makes It Different?

The difference between surviving vertical AI and dying wrappers comes down to "depth of moat." Better Tomorrow Ventures explains this as "the length of the last mile" — how much complex coordination with real-world systems is needed to deliver results, and who takes responsibility for those results.

| | Dying AI (Wrappers/Aggregators) | Surviving AI (Vertical) |
|---|---|---|
| Core asset | UI/UX + prompt engineering | Proprietary data + domain expertise |
| Moat | Disappears with model upgrades | Deeply embedded in workflows |
| Competing against | IT budget (replacing existing SaaS) | Labor costs (automating human work) |
| Switching cost | Low (can just move to ChatGPT) | High (data/workflow lock-in) |
| Accountability | Pushed to the user | Owns accountability for outcomes |

Concrete examples make the difference crystal clear.

| Company | Domain | ARR | Valuation | Core Moat |
|---|---|---|---|---|
| Harvey | Legal AI | $190M | $11B | Law firm workflows across 60 countries + case law data |
| Glean | Enterprise search | $200M | $7.2B | Internal data connectors + agent infrastructure |
| Cursor | AI coding | $1B+ | $29.3B | Complete developer workflow ownership |
| Jasper | Copywriting | Declining | $1.5B→? | GPT wrapper → features absorbed by OpenAI |

Harvey is particularly impressive. ARR went from $100M in August 2025 to $190M by year-end — nearly doubling in 5 months. Valuation from $3B in Feb 2025 → $5B in June → $8B in Dec → $11B in Feb 2026, a 3.7x increase in one year. Legal research, contract analysis, compliance — these are worlds apart from typing "review this contract" into ChatGPT. It's deeply embedded in practical workflows of 1,000+ law firms across 60 countries.

Glean is the same story. It's not just "enterprise ChatGPT" — it's building the connective tissue between models and enterprise systems. Slack, Google Drive, Jira, Salesforce — connecting all internal enterprise data and running agents on top. That's territory OpenAI can't easily replicate.
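That "connective tissue" pattern can be made concrete. The sketch below is a toy illustration, not Glean's actual architecture or API: every class, name, and the canned documents are hypothetical. It shows why the pattern is defensible — each enterprise system needs its own connector, everything lands in one index, and search must respect each document's permissions at query time.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Document:
    source: str       # originating system, e.g. "slack" or "jira" (hypothetical)
    doc_id: str
    text: str
    acl: frozenset    # principals allowed to see this document

class Connector:
    """Hypothetical base class: one connector per enterprise system."""
    def fetch(self) -> Iterable[Document]:
        raise NotImplementedError

class SlackConnector(Connector):
    def fetch(self):
        # A real connector would page through the Slack API;
        # a canned message stands in for illustration.
        yield Document("slack", "msg-1", "Q3 launch moved to Nov 12",
                       frozenset({"eng"}))

class JiraConnector(Connector):
    def fetch(self):
        yield Document("jira", "PROJ-42", "Blocker: launch checklist incomplete",
                       frozenset({"eng", "pm"}))

class UnifiedIndex:
    """Toy index; production systems use embeddings and a permissions-aware
    search backend, but the shape of the pipeline is the same."""
    def __init__(self):
        self.docs: list[Document] = []

    def ingest(self, connectors: Iterable[Connector]):
        for c in connectors:
            self.docs.extend(c.fetch())

    def search(self, query: str, principal: str) -> list[Document]:
        # Filtering by ACL at query time is the hard, defensible part:
        # a general-purpose model has no view into these permissions.
        return [d for d in self.docs
                if principal in d.acl and query.lower() in d.text.lower()]

index = UnifiedIndex()
index.ingest([SlackConnector(), JiraConnector()])
hits = index.search("launch", principal="eng")  # both documents match
```

The moat lives in the boring parts: dozens of connectors kept in sync, and permission filtering that mirrors each customer's real access rules.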

The Essentials: The Vertical AI Survival Formula

Foundation Capital summarized the 2026 trend as "Cursor for X" — AI that completely owns the workflow of specific functions across legal, finance, marketing, and operations, just like Cursor does for coding. What does it take to build one?

  1. Build a proprietary data moat
    The unique dataset accumulated through product usage is key. Harvey accumulates hundreds of thousands of legal document processing data points; Glean builds an internal knowledge graph. Data that general models can't access is your moat.
  2. Become the system of record for the workflow
    Don't be an occasional tool — be the system where core work happens. The higher the switching cost, the harder it is to flee to general AI. Just like Cursor owning the IDE itself.
  3. Take accountability for outcomes
    This is Better Tomorrow Ventures' core insight. "Providing a tool" and "guaranteeing outcomes" are different things. The moment you own the outcome, switching costs skyrocket.
  4. Target the labor budget line
    Compete against labor budgets, not IT budgets. When AI replaces work done by lawyers, sales reps, or CS agents, the ROI is crystal clear. This is what VC Cafe calls "competing against the labor budget."
  5. Increase real-world coordination complexity
    The longer the "last mile," the harder it is for general AI to enter. Regulations, compliance, legacy system integration — this complex real-world coordination is the real defensive wall.

Timing Warning: The Window Is Closing

This is Better Tomorrow Ventures' key warning. Most startups start with a "short last mile" and gradually deepen, but as foundation models verticalize faster, that transition time is compressing. Your starting wedge needs sufficient real-world coordination complexity from the outset to scale before the models catch up.