The Incentive Angle

Finding pain where platforms won't look

Three months ago, we wrote about the Context Crisis — the way AI conversations lose everything the moment you close the tab. Since then, the world has shifted. Major AI platforms are expanding into agent layers that run in the background, remember across sessions, and orchestrate tools on your behalf. Memory is improving. Integrations are deepening. The scope of "AI handles it for you" widens every week.

If you're building an AI startup, one question is unavoidable: how do you know what you're building won't be absorbed into the next platform update?

The standard answer is "do what platforms can't." Proprietary data, specialized infrastructure, local compliance. These are real moats. But they're incomplete. There's a more important question: what can platforms do, but won't — because their incentive structure points the other way?


Read the Incentive Structure

The business model of a general-purpose AI platform is simple. Revenue = subscribers × subscription fee. Cost = tokens consumed × unit inference cost. This arithmetic creates inevitable consequences.

The ideal user pays monthly, uses the product moderately, and never churns. The deeper a user goes — longer conversations, more complex tasks, heavier reasoning — the more margin erodes. As scale grows, this pressure intensifies. It's the same logic that prevents Netflix from offering unlimited 4K HDR to every subscriber.
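The margin pressure described above can be sketched as a toy model. All numbers here are hypothetical, chosen only to illustrate the shape of the equation, not any real platform's economics:

```python
# Toy model of per-subscriber margin on a flat-fee AI platform.
# All constants are illustrative assumptions, not real pricing.
SUBSCRIPTION_FEE = 20.00    # monthly fee per subscriber, USD
COST_PER_1K_TOKENS = 0.01   # blended inference cost, USD

def monthly_margin(tokens_used: int) -> float:
    """Margin for one subscriber given their monthly token consumption."""
    inference_cost = tokens_used / 1000 * COST_PER_1K_TOKENS
    return SUBSCRIPTION_FEE - inference_cost

# A moderate user is profitable; a deep, heavy user flips the margin negative.
casual_margin = monthly_margin(200_000)     # positive margin
power_margin = monthly_margin(3_000_000)    # negative margin
```

The fee is fixed while the cost scales linearly with usage, so every additional turn of deep reasoning moves the user toward the break-even point and past it.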

This isn't criticism. It's rational business. But rational business creates empty space — areas a platform could technically fill, but has no incentive to invest in deeply. That empty space is where startups should look.


Four Axes of the Incentive Angle

We see this empty space along four axes.

Depth over frequency

Platforms are incentivized to end conversations quickly and efficiently. One answer, one turn, done — that's token-efficient. But the most valuable thinking happens in 30-turn, 50-turn deep dives. Building a structure in which deeper thinking makes the product more valuable runs directly against the platform's incentive grain.

The lifecycle after the conversation

On general-purpose platforms, a conversation ends as a chat log. Structuring the output, publishing it, sharing it, updating it over time — none of that is an AI model company's business. But for users, what happens after the conversation is often more important than the conversation itself.

Connecting users to each other

Platforms optimize for 1:1 relationships: user ↔ AI. Connecting users to each other only adds complexity with no clear revenue upside. But knowledge compounds faster in teams than in isolation. The network layer is structurally outside the platform's incentive perimeter.

Token efficiency in opposite directions

Both platforms and startups pursue token efficiency, but in opposite directions. Platform efficiency means reducing depth of reasoning. But if you structure context so the AI references only what's needed, you save tokens while preserving depth. The same word produces opposite outcomes.
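One way to picture "fewer tokens without less depth" is selective context: rather than sending the whole history to the model, rank stored notes against the current question and send only the relevant ones. This is a minimal sketch with a deliberately naive word-overlap score; the function names and scoring are illustrative assumptions, not any particular product's implementation:

```python
# Sketch of token savings via selective context (hypothetical names/scoring).

def relevance(note: str, question: str) -> int:
    """Naive relevance score: how many distinct note words appear in the question."""
    question_words = set(question.lower().split())
    return sum(1 for word in set(note.lower().split()) if word in question_words)

def build_context(notes: list[str], question: str, top_k: int = 2) -> list[str]:
    """Keep only the top_k notes that are actually relevant to the question."""
    ranked = sorted(notes, key=lambda n: relevance(n, question), reverse=True)
    return [n for n in ranked[:top_k] if relevance(n, question) > 0]

notes = [
    "pricing model: subscription revenue minus token cost",
    "meeting notes from tuesday standup",
    "token cost grows with conversation depth",
]
context = build_context(notes, "how does token cost affect pricing")
# The irrelevant standup note is dropped before the model call.
```

A real system would use embeddings rather than word overlap, but the incentive point is the same: the context shrinks while the reasoning it supports stays deep.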


Every Pain Is x. We Build f. The Result Is y.

This framework isn't abstract theory. It's how we decide what to build.

At FXY, we start from a simple equation: f(x) = y. Every x is a pain. Every f is a product we build to resolve it. Every y is the resolved state. We don't build vitamins. We build painkillers.

The incentive angle framework is how we find x. We look at the empty space that platform incentives create, and we ask: where is the sharpest pain that no one is solving — not because it's impossible, but because the economics point away from it?


Our First Function

The first pain we found: you think a lot, but your thinking isn't clear.

You have conversations with AI. Some of them are genuinely good — deep, generative, clarifying. But when the conversation ends, it vanishes into a chat log. The thinking doesn't accumulate. It doesn't structure itself. It doesn't become something you can share, revisit, or build on.

This pain sits squarely in the empty space that platform incentives create. Structuring thought is token-intensive. Publishing and sharing results is outside the model company's business. Connecting thinkers to each other adds complexity with no subscription upside.

So we built Superself. Its logic follows a single arc: Think → Structure → Publish → Evolve. A thought born in conversation becomes structured knowledge, gets published into the world, and evolves over time. The entire lifecycle lives in one environment.

Here's what makes this durable: as platforms make their agents smarter, the raw material coming out of AI conversations gets better. And the better the raw material, the more valuable it becomes to structure, publish, and connect it. Platform growth doesn't threaten us. It fuels us. That's what happens when the incentive angle is wide enough.


Run Opposite

If we had to compress the survival strategy for AI startups into one line, it would be this: run in the same direction as the platforms and you get absorbed. Run opposite — but only if that direction creates more value for users.

"What platforms can't do" shrinks over time. But "what platforms won't do because of their incentive structure" becomes sharper as platforms grow. Building user value in that space — that's the only safe ground.


FXY Inc. hello@fxy.global