AI Is Compressing 20 Years of IT History Into 2
Over the past two decades, enterprises went through three major technology waves: enterprise software (think ERP systems), the rise of the internet, and the SaaS explosion. Each one followed the same pattern. First came a burst of innovation, as new tools promised to transform how work got done. Then came sprawl, as everyone adopted those tools independently without a plan. Then, eventually, came standardization and control, as organizations realized the cost of chaos outweighed the benefits of speed.
AI isn’t different. It’s just happening faster. Much faster.
What took twenty years to play out across those three waves is now happening in less than two. And if you look around, most organizations are already deep in the sprawl phase, whether they realize it or not. The question isn’t whether the pattern will repeat. It’s whether you’ll recognize it in time to get ahead of it.
We’ve been here before
You don’t need a long history lesson, but the pattern is worth remembering because it’s about to repeat itself.
When ERP systems first arrived in the late ’90s and early 2000s, every department bought its own software. Finance had one system, operations had another, HR had a third. Nobody planned for how these systems would talk to each other because nobody was thinking at the enterprise level yet. The result was data silos, duplication, and a mess of disconnected processes that cost enormous amounts of time and money to untangle. It took years of painful consolidation (and billions of dollars in consulting fees) to bring things under control.
Then came SaaS, and the same thing happened all over again, just faster. Suddenly anyone with a credit card and a Gmail account could sign up for a new tool. Marketing had its stack, sales had its stack, every team had its own little ecosystem. IT departments started hearing about “shadow IT” for the first time. Tool sprawl exploded. Identity management became a nightmare. Customer data ended up in dozens of systems with no single source of truth. Eventually, companies built governance frameworks, identity platforms, and vendor management programs to manage it all. But that took years too.
The lesson is always the same: decentralized innovation comes first. Architecture and governance come later. Always.
What’s different this time
So if the pattern is the same, why does this wave feel so different? Four reasons.
First, the speed is unprecedented. Previous technology waves took years to reach meaningful enterprise adoption. Cloud computing needed the better part of a decade to go from early experiments to standard infrastructure. AI agents can be deployed in days, sometimes hours. Gartner projects that 40% of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5% in 2025. That’s not a gradual shift; that’s at least an 8x jump in a single year. Low-code platforms like Copilot Studio and Zapier have made it possible for almost anyone to create an agent without writing a line of code. The barrier to entry has essentially disappeared. The pattern hasn’t changed, but the timeline has been compressed to almost nothing.
Second, these systems act, not just store. This is the distinction that most people underestimate. SaaS tools were systems of record. They held your data, organized your information, and gave you dashboards to look at. AI agents are systems of action. They make decisions, trigger workflows, and interact with other systems on your behalf. They send emails, approve requests, move money, and escalate issues. When a SaaS tool is misconfigured, you have messy data. When an AI agent is misconfigured, you have messy decisions being made at scale, often before anyone notices. That’s a fundamentally different kind of risk, and it requires a fundamentally different approach to governance.
Third, the unit of sprawl has changed. In the SaaS era, sprawl meant too many tools. You could at least count them, even if the number was embarrassingly high. In the AI era, sprawl means too many versions of how work gets done. Different teams are building different agents to handle the same process in different ways, with different logic, different data sources, and different assumptions baked in. It’s not just tool duplication; it’s workflow duplication. And that’s much harder to see, much harder to measure, and much harder to fix.
Fourth, visibility is worse than it’s ever been. SaaS tools could be audited. You could look at your subscription list, check your SSO logs, and get a rough picture of what was happening. AI is embedded, ephemeral, and distributed. Agents get spun up, granted broad permissions to avoid friction, and almost never reviewed after they go live. They live inside other tools, run on personal accounts, and connect to systems through APIs that nobody is monitoring. As one security researcher put it, rogue agents and MCP servers have sprung up in large numbers across enterprises as employees test ways to do their jobs more efficiently. Most organizations genuinely don’t know what AI they’ve actually deployed.
Where we are right now
Let’s name it clearly: enterprises are no longer experimenting with AI. They are scaling it, without a system.
The Larridin “State of Enterprise AI 2026” report captures this perfectly. When they ask CIOs how many AI tools their employees are using, the answer is usually somewhere between 60 and 70. Then they turn on automated monitoring. The real number? Two hundred. Sometimes three hundred. The reaction is always the same: surprise, concern, and then a grudging acknowledgment that it makes sense.
That gap between perception and reality is the defining feature of the sprawl phase. And it’s not just about tools. Torii’s 2026 SaaS Benchmark Report found that the average large enterprise is running over 2,100 applications, with more than 61% not formally approved or overseen by IT. AI didn’t create shadow IT, but it dramatically increased its speed and blast radius.
On the ground, this looks like dozens of agents scattered across teams, duplicate automations solving the same problem in different ways, conflicting workflows with no clear owner, and no centralized inventory of what’s actually running. There’s a term gaining traction for this: automation sprawl. And based on every previous technology wave, it’s exactly where we should expect to be right now.
Why it’s happening
The root causes are structural, not accidental. And they’re worth understanding because they explain why this isn’t just a management failure; it’s a predictable outcome of how new technology gets adopted.
Adoption is bottom-up. Individual teams and employees are solving real problems with AI tools, often without waiting for IT approval. And honestly, you can’t blame them. The tools work, they’re easy to access, and they deliver immediate results. According to a BlackFog survey, nearly half of workers admit to adopting AI tools without employer approval. And here’s the uncomfortable part: 69% of C-suite members are fine with it. Leadership is implicitly (and sometimes explicitly) prioritizing speed over governance because the productivity gains are too obvious to ignore.
An EY report found that more than 78% of leaders say AI adoption is outpacing their organization’s ability to manage the associated risks. Yet 95% still plan to increase their AI investment in the coming year. That’s the tension captured in two numbers: everyone knows governance is lagging, and almost no one is willing to slow down to close the gap.
Beyond the cultural dynamics, there’s a structural problem. There’s no reuse model. No orchestration layer. No shared understanding of what “good” looks like for AI deployment across the enterprise. Each team is building in isolation, solving its own problems with its own tools and its own logic. Governance isn’t absent because people don’t care about it; it’s absent because AI adoption is scaling faster than anyone can design systems around it. And that’s exactly how it went with SaaS, and with ERP before that. The system design always comes after the adoption curve, not before it.
Why this matters
This isn’t just a tidy problem for IT to sort out. The stakes are real, they’re compounding, and they affect the whole business.
Operational inconsistency is the most immediate issue. When different teams automate the same process in different ways, you get different outcomes for the same inputs. Customer onboarding works one way in the West region and a different way in the East. Invoice processing follows different rules depending on which team built the automation. That erodes trust in your systems, creates confusion for the people who depend on them, and makes it nearly impossible to improve processes at the enterprise level because there’s no single process to improve.
Hidden risk is growing quickly. That same EY report found that 45% of leaders have confirmed or suspected sensitive data leaks tied to employees’ unauthorized use of AI tools. AI agents are being provisioned with broad access permissions as a shortcut to avoid friction, and those permissions are almost never reviewed afterward. In regulated industries like financial services and healthcare, the consequences aren’t just operational; they’re legal. The security implications are significant and largely invisible to the people making budget decisions.
Complexity is compounding in ways that are hard to appreciate until it’s too late. Every new agent that gets deployed without coordination increases the entropy of the overall system. Interactions between agents, overlapping automations, and conflicting workflows create a web of dependencies that becomes exponentially harder to untangle over time. This is technical debt, but it’s not sitting in a codebase where engineers can find it. It’s distributed across the organization in places nobody is looking.
And ROI is getting diluted. When ten teams each build their own version of the same automation, the investment is scattered across duplicate efforts instead of concentrated where it can create the most value. Organizations end up spending more to get less, and the fragmentation makes it harder to measure what’s actually working. That makes it difficult to justify continued investment, which is ironic given that the underlying technology is genuinely valuable when deployed thoughtfully.
What happens next
If the historical pattern holds (and there’s no reason to think it won’t), we know what’s coming.
Every previous technology wave ended the same way: with a push toward standardization and control. Not because companies wanted bureaucracy, but because the cost of sprawl eventually became unbearable.
For AI, this is already starting to take shape.
First, architecture layers are emerging. Orchestration platforms, shared AI services, and system design thinking are moving from theoretical concepts to actual products. Leaders at major cloud providers and analyst firms are comparing the need for agent orchestration to what Kubernetes did for container management. Just as containers needed a coordination layer to prevent infrastructure chaos, AI agents need an orchestration layer to prevent workflow chaos. The goal is coordination, not centralization; making sure agents work together instead of stepping on each other. Google Cloud, for example, is positioning its unified stack (Gemini Enterprise, Vertex AI, BigQuery) specifically around the idea that governance needs to be native to the platform, not bolted on after the fact.
Second, governance is becoming operational. This doesn’t mean more policy documents gathering dust on a SharePoint site. It means real infrastructure: agent inventories, permission audits, lifecycle management, and automated enforcement. We’re already seeing an entirely new product category emerge around this. CrowdStrike recently launched AI Agent Discovery for unified visibility across SaaS platforms, normalizing agent attributes across different vendors so security teams can spot risky configurations regardless of where agents are deployed. Arthur AI built an Agent Discovery and Governance platform after observing that enterprises are routinely running tens of thousands of agents without a centralized inventory. McKinsey reports that 80% of organizations are already seeing risky behavior from their AI agents. The fact that “agent discovery” is now a market category tells you everything about where the industry is headed.
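To make the "normalizing agent attributes" idea concrete, here is a minimal sketch of what that kind of translation layer does. The vendor names and field mappings below are invented for illustration; they are not CrowdStrike's or any real platform's actual schema. The point is simply that each platform describes its agents differently, and a discovery tool maps them all onto one canonical record so risky configurations can be compared side by side.

```python
# Hypothetical normalizer: each platform exposes agent metadata under
# its own field names; a discovery tool maps them to one canonical schema.
FIELD_MAPS = {
    # source field name -> canonical field name (illustrative only)
    "vendor_a": {"agent_name": "name", "scopes": "permissions", "created_by": "owner"},
    "vendor_b": {"title": "name", "grants": "permissions", "maker": "owner"},
}

def normalize_agent(raw: dict, vendor: str) -> dict:
    """Translate one vendor-specific agent record into the canonical shape."""
    mapping = FIELD_MAPS[vendor]
    return {canonical: raw.get(source) for source, canonical in mapping.items()}

# Two records from different platforms end up directly comparable:
a = normalize_agent(
    {"agent_name": "refund-bot", "scopes": ["payments.write"], "created_by": "ops@corp"},
    "vendor_a",
)
b = normalize_agent(
    {"title": "hr-helper", "grants": ["hris.read"], "maker": "hr@corp"},
    "vendor_b",
)
```

Once every agent, regardless of origin, is a record with the same `name`, `permissions`, and `owner` fields, a security team can run one query for "agents with write access and no owner" instead of one per platform.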
Third, consolidation is inevitable. Just as the SaaS explosion eventually led to platform consolidation and vendor rationalization, the AI sprawl era will give way to fewer, more integrated systems with clear ownership models. Organizations will shift from “let a thousand flowers bloom” to “let’s figure out which flowers we actually need.” Gartner’s warning that over 40% of agentic AI projects risk cancellation by 2027 if governance and ROI clarity aren’t established is a leading indicator of this consolidation. The experiments that can’t demonstrate value within a governed framework will get cut. The ones that can will get absorbed into enterprise architecture.
The shift leaders need to make
The organizations that come through this well won’t be the ones that adopted AI the fastest. They’ll be the ones that recognized the pattern early and designed for it.
That means shifting from thinking about individual tools to thinking about systems. It means moving from experiments to architecture. It means treating AI as infrastructure (something that needs to be designed, governed, and maintained) rather than just a collection of capabilities bolted onto existing workflows.
Practically, this looks like a few things. It means creating a centralized inventory of what AI is actually running in your organization, because you can’t govern what you can’t see. It means establishing ownership models so that every agent has someone accountable for it. It means building shared services so that teams aren’t each reinventing the wheel. And it means investing in orchestration so that your agents work together as a system rather than stepping on each other as isolated tools.
This doesn’t require slowing down. It requires being intentional about how you speed up. The companies that built the best SaaS stacks weren’t the ones that bought the most tools; they were the ones that thought about how those tools fit together. The same principle applies here, just on a much tighter timeline.
One CIO framed it well using a football analogy: you won’t prevent every risk, and you won’t govern everything perfectly before deploying AI at scale. Governance will always lag innovation. The goal isn’t to stop every play; it’s to see it coming and prevent the score. That mindset (pragmatic, adaptive, infrastructure-first) is exactly right for this moment.
AI isn’t creating new problems. It’s accelerating old ones. The pattern is the same as it’s always been: innovation, sprawl, standardization, control. The only difference is that the clock is ticking about ten times faster.
The organizations that win won’t be the ones that moved first. They’ll be the ones that recognized the sprawl for what it was and started designing for what comes next, before it became unmanageable.
