🌰 seedling
AI Application Layer Squeeze - Pressure From Both Sides

The four-layer AI stack

Graham Weaver divides the AI economy into four layers, each with a different investment profile:

| Layer | Examples | Investment outlook |
| --- | --- | --- |
| Infrastructure | Chips, data centers, energy | Long, visible growth runway |
| LLMs | OpenAI, Anthropic, xAI | Few players, already priced for success |
| Applications | Vertical SaaS, AI-native tools | Most overhyped; capital most misallocated here |
| Use cases | Enterprises deploying AI operationally | Where genuine value creation occurs |

The squeeze mechanism

Application-layer companies face compression from two directions simultaneously:

From above: LLM providers are building their own interfaces, products, and features that compete directly with the apps built on top of them. As models become more capable, the "wrapper" layer of value shrinks: the LLM itself can do what the app was doing, without the app.

From below: Corporate customers are increasingly capable of building their own tooling using the same underlying models. As AI fluency spreads through engineering organizations, the build-vs-buy calculus tilts toward build for any use case that touches proprietary workflows.

The internet analogy

In the 1990s, companies that helped people complete government tasks online — getting marriage licenses, filing permits — grew at over 100% annually. Then Google arrived and absorbed those information-access rents by making the underlying capability free and universal. The intermediary layer evaporated.

Weaver sees a direct parallel: early AI applications are capturing revenue pools that LLMs will gradually absorb. The timing is uncertain, but the structural dynamic is not. In this framing, application companies with $2M in revenue and $500M valuations are the marriage-license websites of the AI era.

What survives

Two characteristics mark the application-layer companies that may hold durable positions:

  1. Proprietary data sets: data that cannot be replicated by the LLM provider or reconstructed by a customer. This means data generated through the product's own network effects, not data scraped or licensed.
  2. Deep customer interface lock-in: integration so embedded in the customer's workflow that switching costs exceed the value of building internally. This requires years of deployment, not months.

Both conditions are harder to achieve than pitch decks suggest. Most AI startups have neither.
