Over the last year, application software has lived through the kind of drawdown that normally requires a recession to justify, and the rough “two-trillion” number floating around the market isn’t just a dramatic statistic; it’s a signal that investors are starting to reprice a basic assumption that held for two decades: that enterprise software growth is naturally coupled to human headcount growth. When that coupling loosens, what looks like a valuation problem is actually a physics problem, because the revenue model is anchored to a unit that is no longer the primary unit of work.
It helps to name the dynamic without over-mythologizing it. This isn’t simply a “rotation out of tech” or a “multiple reset” or the market getting bored of AI slogans. It feels more like the early stages of a stack rearrangement where buyers suddenly behave differently, budgets quietly migrate, old operating habits start cracking, and entire categories either find a new center of gravity or begin shrinking into their most defensible cores. In moments like this, companies don’t lose because they lack hustle; they lose because they’re not aligned with a structural tailwind. And when the tailwind becomes real, you can see it in the field before you see it in the earnings deck: the urgency in the buyer’s voice changes, the approval path shortens, the “nice to have” tools get questioned, and legacy workflows stop failing gracefully and start failing loudly.
That’s the heart of what people are calling the SaaS apocalypse, but I want to be precise about the point of this essay. The most important story here is not the destruction of the last era; it’s the emergence of the next one, and specifically the operating manual for running it without repeating the same mistakes under a new brand name. Because the uncomfortable truth is that the stack is not merely getting cheaper or faster or more automated; it is being reorganized around a different primitive, and that primitive is not the interface, it is the decision.
The classic SaaS engine was built on a simple, elegant equation that became the unofficial law of enterprise software: one knowledge worker maps to one “seat,” the seat maps to one license, the license maps to recurring revenue that scales as the organization scales. It’s hard to overstate how much of the industry’s confidence was anchored to that chain, because it made growth legible, forecastable, and finance-friendly, and it rewarded companies that could land inside large accounts and then expand alongside hiring plans and reorganizations and global rollouts.
But the hidden fragility was always there if you were willing to stare at it. Customers were never actually buying “time in the application.” They were buying outcomes produced with the help of the application, and the interface was merely the place humans went to translate intent into action. The software sat between desire and result, and for twenty years we treated that “between-ness” as value in itself.
Autonomous systems change the shape of that translation. When an agent can read a request, pull the correct context, draft an answer, update the record, schedule the next step, and trigger follow-through without needing a person to navigate screens, the interface stops being the center of value and becomes one option among many. The moment the interface becomes optional, the seat becomes negotiable, and when seats become negotiable, the pricing logic begins to bend.
This is why “AI as a feature” is the wrong mental model for what’s happening. A feature cycle adds convenience inside the same architecture. What we’re living through is closer to a re-architecture where the system is optimized for “work completed” rather than “work managed,” and where value is increasingly expressed as the completion of an intent-to-verified-action loop rather than a user logging in and clicking through deterministic steps.
Several distinct forces are driving this re-architecture, and they compound one another. The first force is the quiet disappearance of the interface as the default method of execution for a wide set of routine workflows. Not all workflows, and not every industry, but enough of them that a meaningful portion of the application layer now sits in a zone of vulnerability. Wherever a product is essentially a set of predictable steps shepherded through a UI, an agent can often bypass the UI, call the underlying systems, and produce the outcome with less friction. In that world, the product’s moat can’t simply be “we have the best screens.” It has to be “we are the safest, most trusted, most governable system through which decisions get made and executed.”
The second force is that seat-based monetization begins to look mathematically misaligned once agentic coverage increases. When the unit of work migrates from human seats to automated execution, the buyer’s value conversation naturally shifts away from “how many people will use it” and toward “how much work will get done” and “how reliably will it behave.” Even the vendors who will win are being pushed to explain pricing in terms that map to output, usage, or outcomes, because the buyer’s internal reality is changing and pricing models that ignore reality don’t survive procurement cycles for long.
The third force is that debt and credit dynamics tighten before equity narratives settle, and software has long been a favorite home for lenders precisely because recurring revenue was treated like a stabilizer. When those assumptions get questioned—whether because renewal risk rises, pricing models shift, or margins compress under competition—credit reprices first, and that repricing becomes a constraint on strategy long before it becomes a headline. It limits the room for error, it raises the cost of refinancing, and it makes “we’ll grow out of it” feel less credible as a plan.
The fourth force is political, not technical: the CFO now has a story that sounds responsible rather than reckless. SaaS sprawl has been obvious for years, but deep reduction without an alternative felt like breaking the machine. With agents, there’s a plausible narrative that says: we can cut redundant licenses, keep service levels, and improve speed, and if we do it carefully we can even improve quality. Once that narrative becomes socially acceptable inside executive teams, the rationalization wave stops being a back-office clean-up effort and becomes a boardroom program with teeth, and those programs tend to move quickly because they are framed as financial discipline plus modernization rather than austerity.
The fifth force is the lesson many people misread: when a company pushes too hard on automation and then walks it back because quality drops, the correct interpretation isn’t “automation doesn’t work.” It’s “automation without controls becomes expensive in a different way.” When autonomous systems fail, they fail at scale, and what breaks is not a single transaction; it’s trust. Which is why the real question isn’t whether replacement is coming, but whether replacement is governed.
If you want proof that this moment isn’t a simplistic story of incumbents getting obliterated by newcomers, Salesforce is an instructive counter-signal, because it is being forced to navigate the same gravity as everyone else while still showing what “coming back” can look like when a platform tries to reposition itself as the operating layer for agentic work rather than as a seat-driven application suite. The company’s recent earnings narrative has been essentially this: the market is not moving from “SaaS” to “no software,” it is moving from “interfaces as value” to “execution as value,” and the winners will be the systems that can host autonomous work safely, traceably, and at enterprise scale. That’s why Salesforce has been emphasizing Agentforce momentum, talking about a fast-growing ARR line tied to agents, and experimenting with a new unit of measurement that sounds less like “how many users logged in” and more like “how much work got done.” What I find strategically interesting, though, is Marc Benioff’s quieter move in the conversation: instead of framing it as “AI companies will replace us,” he frames it as “AI companies run on us too,” pointing out that many of the AI-native firms people assume will disintermediate SaaS are themselves heavy users of Salesforce and Slack. It’s his way of arguing that the battle is not AI versus SaaS, but whether SaaS platforms can evolve into the trusted system-of-record plus governed execution layer for AI-driven workflows, or whether they become thin back-end databases that agents treat as interchangeable.
Most commentary gets stuck in a vendor scoreboard mindset, as if the important question is “which software category shrinks first.” The more important question is what your enterprise uses as its new decision-and-execution backbone once interfaces stop being the default place where work happens.
For the last twenty years, the SaaS portfolio functioned as decision infrastructure, whether we admitted it or not. CRM was not merely a database; it was a decision system about customers. BI was not merely dashboards; it was a decision system about performance. Ticketing and workflow tools were not merely queues; they were decision systems about priority, escalation, and coordination. And because humans sat at the center, we designed these systems around screens, clicks, and handoffs.
In an agentic world, the successor architecture has to be organized around the decision itself, not the interface. I call this the Decision Fabric, but the label matters less than the structure. At the base, you need explicit sources and permissions so the system knows what is authoritative, what is allowed, and what is off-limits, because autonomy without permission boundaries is not autonomy, it’s risk. Then you need retrieval with grounding so actions are tethered to evidence, timestamps, and policy versions, because “sounds right” is not a valid operating principle for regulated workflows or high-stakes customer interactions. Then you need guardrails and orchestration that live outside prompt text and inside versioned, testable policies, because governance cannot be a vibe; it has to be enforceable. And finally you need deep observability and reliable rollback because you cannot run autonomous execution at scale unless you can trace what happened end-to-end and return to a known safe state quickly when something breaks.
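To make that structure concrete, here is a minimal sketch in Python of a governed action path: a permission boundary, grounding evidence required before execution, a versioned policy attached to every decision, a full audit trace, and a compensating rollback. All the names here (DecisionFabric, Decision, and so on) are my own illustration of the pattern, not a real product’s API.

```python
import datetime
import uuid
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One governed unit of work: intent, evidence, and the policy it ran under."""
    intent: str
    evidence: list          # grounding: source ids + timestamps, not "sounds right"
    policy_version: str     # guardrails live in versioned policy, not prompt text
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class DecisionFabric:
    """A sketch of a decision-centric execution layer for agentic work."""

    def __init__(self, allowed_actions, policy_version):
        self.allowed_actions = set(allowed_actions)  # explicit permission boundary
        self.policy_version = policy_version
        self.audit_log = []                          # observability: end-to-end trace
        self.undo_stack = []                         # rollback to a known safe state

    def execute(self, action, evidence, do, undo):
        # 1. Permission boundary: refuse anything outside the allow-list.
        if action not in self.allowed_actions:
            raise PermissionError(f"{action!r} is outside the permission boundary")
        # 2. Grounding: refuse actions that carry no evidence.
        if not evidence:
            raise ValueError("ungrounded action rejected")
        decision = Decision(intent=action, evidence=evidence,
                            policy_version=self.policy_version)
        result = do()                                # the actual side effect
        self.undo_stack.append((decision.decision_id, undo))
        self.audit_log.append((decision, datetime.datetime.utcnow(), result))
        return result

    def rollback_last(self):
        # Rollback is a rehearsed operation, not a hope: unwind the last action.
        decision_id, undo = self.undo_stack.pop()
        undo()
        return decision_id
```

The point of the sketch is the shape, not the code: every side effect passes through one gate that checks permissions, demands evidence, records a trace, and registers its own undo, which is what makes autonomy auditable rather than a vibe.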
This is the replacement story in a sentence: as interfaces become less central, enterprises will rebuild their operating layer around governed decisions, and the winners will be those who treat that as an architecture and a discipline, not as a collection of demos.
Once you accept that human seats are no longer the primary unit of work, the old dashboard begins to lie. Seat growth doesn’t necessarily mean more output, and seat contraction doesn’t necessarily mean decline; it may simply mean the organization moved work into a more automated execution path.
So you need new numbers that match the new physics. One is what I call Cognitive Throughput, which is simply the volume of high-quality decisions your organization can execute with full traceability over a given period, because that’s the internal measure of “output” in a decision-centric enterprise. It’s how you prove that rationalizing tools and licenses was not a defensive exercise but a capability upgrade, because the organization is executing more, faster, with clearer accountability.
The second is Net Agentic Contribution per Case, which is the CFO-friendly measure of value after you subtract the real costs: model usage, orchestration, review loops, human escalation, compliance overhead, and error remediation. It forces the conversation out of “look what the model can do” and into “what did it contribute to the P&L, net of operating reality,” which is the only way agentic programs avoid becoming theater.
Both of these metrics—Cognitive Throughput and Net Agentic Contribution per Case—are defined in my book Agentic Advantage as part of the board-level scorecard for governed autonomy. If you’d like the full framework, including the Decision Fabric architecture and the 90-day cadence, the book lays it out end to end.
The temptation right now is to move fast because the opportunity looks obvious: reduce sprawl, replace routine workflows, compress cycle times, and let savings fund the next set of capabilities. The danger is also obvious: ungoverned autonomy produces failures that are systematic rather than isolated.
So the right posture is not caution for caution’s sake, and it’s not reckless acceleration either. It’s disciplined speed, the kind you earn by making four promises and proving them, not merely stating them. You need to be able to explain what the system did and why it did it in language a senior leader can understand without a technical interpreter. You need to be able to unwind actions quickly and reliably, because rollback is what turns failure from a catastrophe into an incident. You need to manage fairness where fairness matters, because when bias enters an automated path it becomes repeatable at scale. And you need to preserve strategic exit options, because the lock-in risk in an AI platform era can be more constraining than lock-in ever was in the SaaS era.
When those promises are real, boards authorize autonomy more confidently, CFOs fund it more willingly, and teams move faster without losing control, because speed feels safe when reversibility and accountability are built into the machine.
There is a practical cadence that avoids panic while still moving with urgency, and it doesn’t require a heroic transformation program.
In the first month, you run agents in shadow mode alongside the workflows you might replace, capturing traces, assembling decision packets, and learning where the system is brittle, and you treat that month as instrumentation, not automation, because if you can’t observe and explain, you are not ready to delegate.
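Shadow mode, in practice, means the agent proposes but never executes, while you record where its proposals diverge from what the human workflow actually did. A minimal sketch of that instrumentation (hypothetical helper names; it assumes you can observe both the agent’s proposal and the human outcome for each case):

```python
def run_shadow(cases, agent_propose, human_outcome):
    """Run the agent beside the live workflow; record everything, execute nothing."""
    traces = []
    for case in cases:
        proposal = agent_propose(case)     # what the agent would have done
        actual = human_outcome(case)       # what the human workflow actually did
        traces.append({
            "case": case,
            "proposal": proposal,
            "actual": actual,
            "agrees": proposal == actual,  # brittleness shows up as divergence
        })
    agreement_rate = sum(t["agrees"] for t in traces) / len(traces)
    return traces, agreement_rate
```

Only when the agreement rate and the analysis of the disagreeing cases look acceptable does the second month’s limited autonomy begin; the traces themselves become the decision packets you review.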
In the second month, you allow limited autonomy for a narrow set of actions with hard boundaries and real blocking controls rather than gentle warnings, and you rehearse rollback on a schedule, because rollback that isn’t practiced is not rollback.
In the third month, you expand to a majority of eligible workflows once finance can validate net contribution and governance can certify that the system is operating inside policy, and only then do you begin a disciplined sunset of the tool you’re replacing, because decommissioning should be an earned consequence, not a leap of faith.
That cadence feels almost boring compared to the adrenaline of “rip and replace,” but boredom is underrated in enterprise technology, because boredom is often what reliability feels like.
The deeper truth is that enterprise software is not dying. What is dying is the habit of treating “software” as the unit of value in the enterprise. The unit of value is shifting toward verified outcomes, decision quality, and governable execution, and that shift will reward organizations that can build the replacement operating layer while everyone else is still debating whether AI is an enhancement or a threat.
This also reframes how you should evaluate companies, products, and strategies. The hard part isn’t getting a handful of enthusiastic early users who love a clever demo, because plenty of teams can reach that stage. The hard part is creating a repeatable growth formula in a market that is re-forming under your feet, where the category boundaries are moving, the buyer’s mental model is changing, and the old playbooks—more reps, more customer success heads, more implementation bodies—stop scaling linearly. In the agentic era, the scaling lever becomes less about adding humans and more about expanding automation coverage, improving model reliability, and shrinking review loops, which makes “AI-first” not a marketing label but a fundamentally different machine.
So when you look at the current shakeout, don’t ask only which vendors get cut from budgets. Ask what structural truth about the market must be true for the winners to win. Ask whether your architecture is aligned to that truth. Ask whether your governance can keep up with your ambition. And ask whether your organization is built for a world where the fastest-growing capability is not headcount but execution capacity.
Because the disruption won’t politely pause. The only choice is whether you build the governed intelligence layer that turns replacement into advantage, or whether you wait until gravity makes the decision for you.
Sadagopan Singam is a global business and technology leader and the author of Agentic Advantage. He advises boards and executive teams on GenAI-driven transformation and autonomous enterprise models.