Pricing AI: What Actually Works
How to price AI products and features without destroying your margins or your customer relationships.
SaaS pricing was a solved problem for fifteen years. Charge per seat, per month. Predictable for the buyer, predictable for the seller, beautiful for the board deck. Gross margins north of 80%. Everyone was happy, and nobody had to think very hard.
AI broke the math.
Not the model, the math. The cost to serve a traditional SaaS user is close to zero. The cost to serve an AI user is not. Every inference call, every agentic loop, every long-context request costs real money. AI companies run gross margins of 50-60%, compared to 80-90% for traditional SaaS. And those are the good AI companies. Early-stage ones can run as low as 25%.
But here’s what most pricing analyses get wrong: the margin compression isn’t the hard problem. The hard problem is that AI inverts the relationship between product success and pricing power. When your AI agent does the work of five people, charging per seat penalizes you for the exact outcome your product delivers. If your product works spectacularly, the customer needs fewer of the thing you charge for.
That’s a business model crisis.
The share of companies using seat-based pricing dropped from 21% to 15% in just twelve months. Hybrid models surged from 27% to 41%. Credit-based models grew 126% year-over-year. The PricingSaaS 500 Index tracked over 1,800 pricing changes across the top 500 B2B and AI companies in 2025 alone. That’s 3.6 changes per company in a single year. The industry is running experiments in public, at scale, and nobody has converged on the answer.
This piece isn’t a survey of those experiments. There are good ones out there. This is a framework for thinking about the structural question underneath all of them: what is your product actually worth, who knows it, and can you measure it cleanly enough to charge for it?
The Litmus Test That Matters More Than Any Framework
Before we get into models, one question separates companies that will get pricing right from companies that will iterate forever:
If your product succeeds spectacularly, does the customer need fewer of the thing you charge for?
If yes, you’ve picked the wrong metric. Full stop. It doesn’t matter how elegant your pricing page looks or how well your sales team can pitch it. You’re building a machine that destroys its own revenue as it improves. Every product improvement is a pricing headwind.
This is the single most important question in AI pricing, and most companies get it wrong because they inherit their pricing metric from whatever they charged for before they added AI. The seat was the unit because bodies were the input. AI changes the input. If you don’t change the unit, you’re optimizing a dashboard that’s lying to you.
Four Models, and When Each One Kills You
Seats: The Bridge, Not the Destination
Per-seat pricing is comfortable. Buyers understand it. CFOs can budget it. Procurement can approve it. It worked for Salesforce, it worked for Slack, it worked for every enterprise SaaS company of the last decade.
The problem: AI inverts the logic. Per-seat pricing was magical for Salesforce because more sales reps drove more revenue. For an AI that writes the emails those reps used to write, charging per seat penalizes you for the efficiency gain you just delivered.
Microsoft Copilot charges $30 per user per month on top of an existing Microsoft 365 license. It works as a bridge because Microsoft has distribution and lock-in that makes the per-seat premium tolerable. Most companies don’t have that luxury.
Use it when: AI is a lightweight enhancement to an existing workflow, usage is relatively uniform across users, and you’re adding AI features to a product that already charges per seat. Think: AI-assisted search, smart suggestions, grammar checking. The AI makes the seat more valuable without replacing the person in it.
Kill it when: Your P90 user costs 10x your P50 user in compute. When that happens, you’re subsidizing power users with light users, and your margins are eroding invisibly. The thing nobody tells you about AI cost distributions is that they aren’t bell curves. They’re power laws. Your heaviest users don’t cost a little more. They cost orders of magnitude more. And you won’t see it in your averages until it’s already in your margin.
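That skew is easy to see in a simulation. A minimal sketch, assuming a lognormal (heavy-tailed) cost distribution with invented parameters — real distributions vary, but the shape is the point:

```python
import random
import statistics

# Simulate per-user monthly compute cost under a heavy-tailed
# (lognormal) distribution. Parameters are invented for illustration.
random.seed(42)
costs = sorted(random.lognormvariate(mu=1.0, sigma=1.5) for _ in range(10_000))

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted list."""
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

p50 = percentile(costs, 50)
p90 = percentile(costs, 90)
mean = statistics.mean(costs)

print(f"P50 cost:  ${p50:.2f}")
print(f"P90 cost:  ${p90:.2f}")
print(f"mean cost: ${mean:.2f}  (the average hides the tail)")
print(f"P90 is {p90 / p50:.1f}x the median user")
```

Run it and the mean lands well above the median: a per-seat price set against the average quietly loses money on the whole top decile.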
Usage-Based: Honest but Dangerous
Usage-based pricing aligns revenue with cost. That’s its strength and its trap. You charge for tokens, API calls, credits, compute minutes, whatever maps to the resource being consumed. By 2025, 85% of SaaS companies were using some form of usage-based pricing.
The appeal is real. It lets small customers start cheap and grow. It protects your margins because revenue scales with cost. It’s transparent.
But transparency cuts both ways, and in AI, the variance between a simple request and a complex one can be 100x. A user doesn’t know that their three-sentence prompt triggered a six-step agentic loop that burned through their monthly allocation.
Cursor is the cautionary tale everyone should study. In June 2025, Cursor switched from a flat 500-requests-per-month plan to a credit pool tied to API rates. The economics were sound. Frontier models were getting more expensive, and Cursor was eating the cost under flat pricing. They had reportedly reached over $500 million in ARR while spending 100% of revenue on AI costs. Something had to change.
But the rollout was a catastrophe. Users who expected predictable billing got surprise charges. Some burned through their monthly allocation in a few prompts. The community backlash was so severe that the CEO published a public apology and offered refunds.
The lesson isn’t that usage-based pricing is wrong. The lesson is that unpredictable usage-based pricing is lethal. If a customer’s bill is ever a surprise, you’ve already lost their trust. And in AI, where the customer can’t see the compute their request triggers, surprise is the default unless you build aggressively against it: usage dashboards, threshold alerts, spending caps, model-cost previews. These aren’t features. They’re survival infrastructure.
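What that infrastructure looks like in miniature: a hypothetical spend policy combining threshold alerts with a hard cap. The class, tiers, and messages here are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    """Illustrative spend guardrails: alerts on the way up, a hard stop at the cap."""
    monthly_cap: float                          # hard stop, in dollars
    alert_thresholds: tuple = (0.5, 0.8, 1.0)   # fractions of the cap

    def check(self, spent: float) -> dict:
        frac = spent / self.monthly_cap
        crossed = [t for t in self.alert_thresholds if frac >= t]
        return {
            "spent": spent,
            "fraction_of_cap": round(frac, 2),
            "alerts": [f"{int(t * 100)}% of cap reached" for t in crossed],
            "blocked": frac >= 1.0,  # refuse new requests past the cap
        }

policy = SpendPolicy(monthly_cap=100.0)
print(policy.check(85.0))   # past the 50% and 80% alerts, not yet blocked
print(policy.check(120.0))  # over the cap: blocked
```

The design choice worth copying is the hard cap: an angry customer whose request was refused at the cap is recoverable; one who got a surprise bill usually isn’t.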
Use it when: You’re selling API access, infrastructure, or developer tools where the buyer expects metered billing. Your cost scales linearly with consumption. Your customers are technical enough to understand and monitor usage.
Kill it when: Your buyers are business users who budget monthly and hate surprises. Or when your product encourages the kind of exploration and experimentation that makes metered billing feel punitive. Nothing kills adoption faster than a user thinking, “should I really ask this question, or will it cost too much?”
Outcome-Based: The Ideal That Most Products Can’t Reach
Outcome-based pricing is where incentives align perfectly. The customer pays when the AI delivers something valuable. If the AI fails, the customer pays nothing. It sounds like the obvious future of AI pricing. It isn’t, for most companies. But in specific domains, it’s devastating.
Intercom’s Fin is the reference case. When Intercom launched its AI agent in 2023, they charged $0.99 per successful resolution instead of per seat. Not per conversation. Not per API call. Per resolved issue, confirmed by the customer. Fin now resolves over 56% of conversations autonomously and generates tens of millions in annual revenue. The per-resolution model doubled adoption rates compared to seat-based alternatives.
The reason Fin works on outcome-based pricing isn’t that Intercom is clever about billing. It’s that customer service has a structural property most AI use cases don’t: the outcome is binary, the attribution is clean, and the counterfactual is obvious. Either the ticket got resolved or it didn’t. There’s no committee required to decide if value was delivered. No attribution model to argue over. No six-month lag between action and result.
Having worked in customer service automation, I can tell you this clarity is rare and precious. Advertiser retention (another area I have scars from) has the same structural property: either the churning advertiser came back or they didn’t. The line from AI action to business result is unmistakable. You can draw a box around the interaction, measure what went in, measure what came out, and price the delta.
That’s the prerequisite for outcome-based pricing: a binary outcome, clean attribution, and a value the customer already quantifies in their own business. Support resolutions. Completed transactions. Retained accounts. Verified lead qualifications. These domains share a structural shape that makes outcome-based pricing not just possible but natural.
Most AI products don’t operate in these domains. Most AI products operate where the outcome is soft, shared, or delayed. “Better decisions.” “Improved productivity.” “Faster workflows.” These are real, but you can’t put a price on them per instance because you can’t isolate the AI’s contribution from everything else that happened.
Salesforce learned this with Agentforce. They launched at $2 per conversation. Customers immediately started asking what counted as a “conversation.” Was a multi-turn exchange one conversation or several? What if the AI handled part and a human finished? The problem wasn’t the price. It was that “conversation” isn’t a clean outcome. It’s an activity metric dressed up as a value metric. Salesforce has since introduced three different pricing models in eighteen months: per-conversation, flex credits at $0.10 per action, and per-user licenses at $125 per month. They’re running all three simultaneously because they still haven’t found the answer.
Outcome-based pricing isn’t a maturity stage every company will eventually reach. It’s a structural fit. It depends on the measurability and attributability of the outcome in your specific domain. Kyle Poyar’s data shows that only about 5% of companies can actually pull it off. The other 95% aren’t behind. They’re in domains where the outcome can’t be cleanly isolated. And that’s fine, as long as they stop pretending they’re on a journey toward outcome-based pricing and start building the model that actually fits their product.
Use it when: Your outcome is measurable, binary, and unambiguously attributable to your product. The value of that outcome should already have a dollar figure in the customer’s mind. If you have to convince them of the value, you’re not ready.
Kill it when: Attribution is murky, outcomes are soft, or your model’s performance varies significantly. If your AI has bad weeks, your revenue has bad weeks. And if the customer has to trust that value was delivered rather than see it, you’re one bad quarter away from a churn event.
Hybrid: Where Most Companies Should Be (For Now)
Hybrid pricing combines a base subscription with usage or outcome tiers. It’s the pragmatic middle ground: predictable for the buyer, margin-protective for the seller, and it lets you learn.
Over 60% of SaaS companies now use hybrid models. The structure is typically a monthly base fee that covers a defined level of usage, with overages or premium tiers beyond that.
The base fee gives the CFO a number to budget. The usage tier gives you upside when the customer scales. The customer feels safe starting, and you don’t subsidize power users forever.
The design principle matters: your base tier should cover 70-80% of your users comfortably. The usage tier should kick in for the top 20-30% who are getting disproportionate value. If most users hit the usage tier, your base is too low and you’re functionally usage-based with extra steps. If almost nobody hits it, your usage tier is a rounding error and you’re functionally subscription with an illusion of upside.
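One way to operationalize that principle is to set the base allocation off your own usage percentiles rather than a round number. A sketch with invented monthly request counts:

```python
# Calibrate a hybrid base tier so it covers ~75% of users.
# Usage figures are made up for illustration.
usage = sorted([120, 340, 90, 800, 60, 210, 1500, 75, 300, 95,
                180, 50, 400, 2600, 110, 85, 220, 130, 70, 160])

def base_allocation(sorted_usage, coverage=0.75):
    """Pick the allocation at the target coverage percentile."""
    idx = min(len(sorted_usage) - 1, int(coverage * len(sorted_usage)))
    return sorted_usage[idx]

allocation = base_allocation(usage)
over = sum(u > allocation for u in usage) / len(usage)
print(f"base allocation: {allocation} requests/month")
print(f"share of users hitting the usage tier: {over:.0%}")
```

With these numbers, roughly one user in five lands in the usage tier — the shape the design principle asks for. Rerun the calibration as usage data accumulates; month-one numbers won’t hold.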
Anthropic’s own tiering is instructive. Free, Pro at $20, Max at $100, Team at $30 per seat. Each tier serves a genuinely different user with different behavior patterns. A casual user and a developer running Claude Code all day aren’t “light” and “heavy” versions of the same persona. They’re different products serving different jobs.
Use it when: You’re uncertain about the right model. Hybrid lets you collect data on usage patterns, cost distributions, and willingness to pay while maintaining customer predictability. Start here, run it for 90 days, and let the data tell you where to evolve.
Kill it when: Complexity is killing your sales cycle. If you can’t explain your pricing in one sentence, it’s too complicated. Every layer of complexity adds friction to the purchase decision, and friction compounds across the sales cycle in ways that don’t show up until your win rates start declining.
Why the Margin Problem Is Structural, Not Temporary
Before you pick a model, you need to accept something uncomfortable: AI margins are structurally lower than SaaS margins, and they’re not going back up.
Traditional SaaS has near-zero marginal cost per user. AI does not. Every inference call costs money. Every agentic loop burns tokens. Every long-context request hits your GPU bill. The industry runs 50-60% gross margins versus 80-90% for traditional SaaS.
This isn’t a temporary phase that better models or cheaper compute will fix. Even as inference costs decline, product complexity increases. Models get more capable, which means customers use them for harder tasks, which means more compute per request. The cost curve flattens but it doesn’t collapse. Plan for 50-65% gross margins for the foreseeable future.
Three things follow from this:
Track your cost distribution, not your average. What does your P90 user cost versus your P10? The average is meaningless if your distribution is skewed, and in AI, it’s almost always skewed. The cost per user you measure in month one is also meaningless because your users haven’t discovered the expensive features yet. The real cost curve doesn’t stabilize for at least 90 days.
Run the litmus test. Does product success reduce the thing you charge for? If yes, change the metric before you have 10,000 customers locked into the wrong model.
Model your margins at scale. If the math doesn’t work at 10 customers, it won’t work at 10,000. AI costs don’t improve as much with scale as SaaS costs did. The per-unit economics of your hundredth customer look a lot like your tenth.
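The arithmetic behind that third point fits in a few lines. All dollar figures here are illustrative, not benchmarks:

```python
def gross_margin(price: float, cost_to_serve: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (price - cost_to_serve) / price

# Traditional SaaS: near-zero marginal cost per user.
print(f"SaaS-like:  {gross_margin(100, 12):.0%}")   # 88%
# AI product: real inference cost behind every customer,
# and it barely improves between customer 10 and customer 10,000.
print(f"AI product: {gross_margin(100, 45):.0%}")   # 55%
```

The point of running it at your own numbers: if `cost_to_serve` doesn’t fall meaningfully with scale, neither does the margin, and no growth story fixes that.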
Adding AI to a Product That Already Has Pricing
Most of us aren’t building AI-native companies from scratch. We’re adding AI features to products that already have pricing, customers, and expectations. This is harder than greenfield pricing because you have constraints and relationships to protect.
Four options, each with a sharp edge:
Absorb it. Bundle the AI feature into your existing plan at no extra charge. Google did this with Workspace. The bet: AI makes the core product stickier, reduces churn, and justifies future price increases. This works when the AI feature is lightweight, broadly used, and compute cost per user is low enough to absorb. It doesn’t work when usage varies widely. You’ll subsidize power users with light users and watch your gross margins compress without knowing why.
Add-on. Sell the AI feature as a separate line item. Notion charges $8 per member per month for its AI add-on. Microsoft charges $30 per user for Copilot on top of existing licenses. This works when the AI feature is clearly differentiated and not every user needs it. It doesn’t work when the add-on feels like a tax on features that should have been included. Watch your NPS after launching an AI add-on. If it drops, you’ve mispriced the relationship, not the feature.
New tier. Create a new plan tier that includes AI features. This works when AI features naturally cluster with other premium capabilities, and you can draw a line between tiers that feels like a product boundary rather than an arbitrary usage cutoff. It doesn’t work when the only difference between tiers is “more AI.” If your tiers are just usage gates with different labels, customers will see through it.
Usage gate. Give everyone access, meter it, and charge for heavy usage. The freemium-to-usage pipeline. This works when you want maximum adoption and your free-to-paid conversion is healthy (above 2-3%). It doesn’t work when your free tier is too generous. OpenAI reportedly burned $8 billion annually on compute in 2025 while serving 900 million weekly free users. Most companies can’t afford that bet. If conversion is below 2%, you’re running a charity.
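The conversion threshold above is just unit economics. A back-of-envelope sketch — every input here is invented, so substitute your own:

```python
def freemium_breakeven(free_users: int, cost_per_free_user: float,
                       conversion_rate: float, paid_price: float,
                       cost_per_paid_user: float) -> float:
    """Monthly profit (or loss) of the freemium funnel as a whole."""
    free_cost = free_users * cost_per_free_user
    paid = free_users * conversion_rate
    paid_profit = paid * (paid_price - cost_per_paid_user)
    return paid_profit - free_cost

# At 3% conversion the funnel carries its own weight...
print(freemium_breakeven(100_000, 0.40, 0.03, 20, 5))
# ...at 1% you're running the charity.
print(freemium_breakeven(100_000, 0.40, 0.01, 20, 5))
```

Notice how sensitive the result is to `cost_per_free_user`: in AI, that number is real and rises as free users discover the expensive features.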
Five Things That Kill AI Pricing
Pricing on cost instead of value. Your inference cost is your problem, not your customer’s. Customers don’t care what a token costs. They care what the outcome is worth. A support resolution worth $15 to the customer should be priced on that value, not on the $0.03 it cost you in compute. Cost sets your floor. Value sets your price. Most companies confuse the two.
Inheriting the old metric. Per-seat pricing was designed for a world where marginal cost per user was near zero and more users meant more value created. AI products don’t live in that world. The number of people logging in tells you nothing about the value being delivered or the cost being incurred. Using the old metric because it’s familiar is how you wake up to negative margins.
Surprise bills. The Cursor debacle came down to one failure: customers couldn’t predict their spend. In AI, where a single complex request can trigger a chain of expensive operations, prediction is hard. Build the transparency infrastructure before you need it. Usage dashboards, threshold alerts, spending caps. If you wait until customers complain, you’ve already lost them.
Treating pricing as a launch decision. The market tracked 3.6 pricing changes per company in 2025. Credit-based models grew 126% year-over-year. The best companies are iterating quarterly. If you’re revisiting pricing once a year, you’re falling behind the companies that are treating pricing as a continuous experiment.
Soft ROI positioning. Copilots that offer advice without closing the loop live in dangerous territory. “Are we really getting value?” is the question that kills renewals. As AI pilots from 2025 hit their renewal cycles, pricing must reflect actual value delivered, not potential or promise. Measure outcomes even when you don’t price on them. Build the dashboards, establish baselines, create feedback loops. This builds the trust that sustains pricing power. And when you’re ready to move to outcome-based pricing (if your domain supports it), you’ll have the data to make the case.
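The first failure mode on this list reduces to one line of arithmetic. The $0.03 and $15 figures come from the example above; the 10% value-capture rate is an assumption, not a rule:

```python
# Cost floor vs value-based price for a support resolution.
compute_cost = 0.03     # what the resolution costs you to serve
customer_value = 15.00  # what the outcome is worth to the customer
value_capture = 0.10    # fraction of delivered value you charge (assumed)

price = customer_value * value_capture
margin = (price - compute_cost) / price
print(f"price: ${price:.2f}, gross margin: {margin:.0%}")
```

Priced on value, the compute cost is a rounding error. Priced on cost-plus, you’d charge cents for a $15 outcome and leave the rest on the table.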
The Agentic Frontier
Everything above applies to AI features and copilots. Agents are a different animal.
An AI agent doesn’t assist a user. It replaces a workflow. It resolves the ticket, books the meeting, drafts the contract, processes the claim. Autonomously. Agents don’t log in. They don’t hold licenses. They can complete thousands of tasks in the time a human completes one.
Charging per-seat for an agent is like charging per-parking-space for a self-driving car fleet. The unit of value isn’t the seat. It’s the work.
This is why “digital labor” pricing is emerging: per action, per resolution, per completed workflow. Salesforce now lets enterprises convert user licenses into flex credits and back, treating human and AI labor as interchangeable budget lines.
The market hasn’t converged on the right model for agents, and it won’t for at least another 12-18 months. If you’re building agents, price on the work they do, not on access to them. Apply the structural test: is the outcome of the work binary and measurable? If yes, you have a real shot at outcome-based pricing. If not, start with hybrid and instrument everything so you can evolve.
Nobody has this figured out. Not Salesforce, not Intercom, not your competitors. The companies that learn fastest will win.
The Decision Sequence
If you take one thing from this piece, make it this sequence:
First: Know your cost structure. Not the average. The distribution. What does your P90 user cost? If you don’t know, stop and go find out. You cannot price what you cannot measure.
Second: Identify the value metric your customer already uses. Not what you want to charge for. What they believe they’re buying. If they’re buying resolutions, don’t charge for tokens. If they’re buying productivity, don’t charge for seats.
Third: Run the litmus test. If your product succeeds, does the customer need fewer of the thing you charge for? If yes, you’ve picked the wrong metric.
Fourth: Match the model to the buyer. Enterprise CFOs want predictable line items. Developers expect metered billing. SMB founders want simple monthly fees. Your pricing model must fit how your buyer purchases, not how your product works internally.
Fifth: Start hybrid if uncertain. Base subscription plus usage tiers. Collect data for 90 days. Then optimize. Don’t try to be clever with pricing before you have usage data. Cleverness without data is just guessing with confidence.
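The sequence above can be collapsed into a crude decision function. The criteria are simplifications of this piece’s framework, not a real API, and they assume you’ve already done steps one and two:

```python
def pick_pricing_model(outcome_binary: bool,
                       attribution_clean: bool,
                       customer_quantifies_value: bool,
                       buyers_expect_metering: bool) -> str:
    """Crude encoding of the decision sequence. Inputs are judgment calls."""
    # The structural test: all three must hold for outcome-based pricing.
    if outcome_binary and attribution_clean and customer_quantifies_value:
        return "outcome-based"
    # Technical buyers who expect meters can tolerate usage-based billing.
    if buyers_expect_metering:
        return "usage-based (with caps and alerts)"
    # Default: hybrid, instrumented, iterated.
    return "hybrid (base + usage tiers; instrument and iterate)"

print(pick_pricing_model(True, True, True, False))
print(pick_pricing_model(False, False, False, False))
```

It’s deliberately conservative: any doubt on the structural test routes you away from outcome-based pricing, which matches the 5% figure above.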
The Bottom Line
AI pricing forces you to confront things SaaS pricing let you ignore: real marginal costs, wildly variable usage patterns, and a value metric that might not be the one you inherited.
The industry is in the messy middle of hybrid models, quarterly iterations, and learning in public. That’s fine. The point isn’t to pick the perfect model. The point is to understand the structural question your product sits on top of: Is the outcome binary? Is attribution clean? Does the customer already quantify the value?
If yes to all three, price on outcomes. You’re in the 5% with a structural advantage.
If no, start hybrid, instrument everything, measure outcomes even if you don’t price on them, and iterate faster than your competitors.
The companies winning at pricing right now aren’t the ones who picked the perfect model on day one. They’re the ones who understood the structure of their problem, picked a starting point, and learned faster than everyone else.
ROI or die.


