The word "agent" has become meaningless. Every enterprise software vendor now claims to offer AI agents. But more often than not, adoption fails when AI shows up as a separate tool outside of your workflow. A dashboard you have to check. Another system demanding your attention.
This isn't a technology problem... it’s an architecture problem. And it's why most organizations investing heavily in AI are seeing disappointing returns.
At Anaplan, we've worked closely with customers and industry leaders to understand what separates usable, trusted AI from AI that will inevitably fail. Hint: the answer isn't just more features or better models. It's a fundamental rethinking of what an AI "agent" should be — and where it should live.
Autonomy confusion
When vendors talk about "autonomous agents," they're often conflating two very different concepts.
The first is trigger autonomy — when does the AI run? This includes scheduled execution, webhooks, email triggers, and event-based workflows. It sounds sophisticated, but it's just automation. Cron jobs have done this for decades. A workflow that fires at midnight to refresh your forecast isn't an agent; it's a timer with extra steps.
The second is reasoning autonomy — how does the AI solve problems? This is where genuine intelligence lives. A reasoning agent decides which tools to use rather than following a script. It reacts to what it finds and plans multi-step approaches to novel problems. It recovers from errors and dead ends.
The tech industry has gotten very good at trigger autonomy and has masked it as innovation, but trigger autonomy without reasoning autonomy is just faster busywork. You're automating the wrong things.
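To make the distinction concrete, here is a minimal Python sketch. Every name, tool, and threshold here is invented for illustration (none of it is an Anaplan API): trigger autonomy runs the same fixed script whenever the clock fires, while reasoning autonomy chooses its next tool based on what it observes.

```python
# Trigger autonomy: WHEN the code runs is automated, but the steps are fixed.
def scheduled_refresh(forecast):
    """The 'midnight cron job': the same script every time, no decisions."""
    forecast["refreshed"] = True
    return forecast

# Reasoning autonomy: the agent picks WHICH tool to use from observed state.
# Tool names and the variance threshold are invented for this illustration.
TOOLS = {
    "flag_anomaly":     lambda state: {**state, "anomaly": "flagged"},
    "refresh_forecast": lambda state: {**state, "forecast": "refreshed"},
    "rerun_scenario":   lambda state: {**state, "scenario": "rerun"},
}

def reasoning_step(state):
    """Choose the next action from what the agent observes, not from a script."""
    if state.get("variance", 0) > 0.1:   # something looks off: investigate
        tool = "flag_anomaly"
    elif not state.get("forecast"):      # no current forecast: build one
        tool = "refresh_forecast"
    else:                                # otherwise probe alternatives
        tool = "rerun_scenario"
    return TOOLS[tool](state)
```

The point of the contrast: `scheduled_refresh` does the same thing no matter what the world looks like, while `reasoning_step` branches on context — a toy stand-in for the tool-selection and planning that real reasoning agents do.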
What real AI agents actually do
There's a simple framework for evaluating whether something deserves to be called an agent. We call it OARE: Observe, Act, Reason, Evaluate.
- Observe (context gathering): Not just pulling data but understanding the state of the world. For planning, this means connecting to live models and real-time signals across your business, not static snapshots exported to a spreadsheet last Tuesday.
- Act (executing within your workflow): Real agents make changes inside planning flows, not beside them. They update forecasts, adjust assumptions, trigger downstream processes. An agent that can only generate text isn't an agent. It's a very expensive autocomplete.
- Reason (differentiated reasoning): High-value agents don't just respond to prompts. They ask questions back. They challenge assumptions. They force reflection. In a recent conversation with Gartner, analysts were explicit: the real differentiator is reasoning, not conversation. Users don't need another chat interface. They need something that makes them think more clearly about their decisions.
- Evaluate (assessing results and iterating): The agent examines what happened, determines if it achieved the goal, and adjusts its approach. This is the loop that turns a one-shot response into genuine problem-solving.
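The four steps above can be sketched as a loop. This is a hypothetical toy, not Anaplan's implementation; the forecast-adjustment policy, the damping factor, and the goal threshold are all assumptions chosen to keep the example small.

```python
from dataclasses import dataclass, field

@dataclass
class OAREAgent:
    """Toy Observe-Act-Reason-Evaluate loop (illustrative, not a product API)."""
    goal: float                            # acceptable forecast error (assumed)
    history: list = field(default_factory=list)

    def observe(self, signals):
        # Gather live context from the world, not a static snapshot.
        return {"demand": signals["demand"], "forecast": signals["forecast"]}

    def reason(self, context):
        # Decide what to do: here, how much to adjust the forecast.
        gap = context["demand"] - context["forecast"]
        return {"adjustment": gap * 0.8}   # damped correction (assumed policy)

    def act(self, context, plan):
        # Make the change inside the workflow, not beside it.
        return context["forecast"] + plan["adjustment"]

    def evaluate(self, context, new_forecast):
        # Did the action close the gap enough? If not, the loop runs again.
        error = abs(context["demand"] - new_forecast)
        self.history.append(error)
        return error <= self.goal

    def run(self, signals, max_iters=10):
        for _ in range(max_iters):
            ctx = self.observe(signals)
            plan = self.reason(ctx)
            signals["forecast"] = self.act(ctx, plan)
            if self.evaluate(ctx, signals["forecast"]):
                break
        return signals["forecast"]
```

The `evaluate` step is what distinguishes this loop from a one-shot response: the agent keeps iterating until its own assessment says the goal is met (or it runs out of attempts).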
This isn't theoretical. When a finance team asks a simple question ("What should I prioritize this week?"), the answer requires deep context, cross-system understanding, and reasoning to consider different scenarios and trade-offs. The question looks trivial, but the work to answer it effectively is not. An agent that can't do this work isn't helping. It's just adding another inbox to check.
Inside the workflow... not beside it
Here's the pattern we see consistently: AI adoption accelerates when it removes friction from work users already know they should be doing. Not "reimagined work." Not "transformed processes." Just being more effective at the same work, with time reallocated toward better thinking and judgment.
This is why Anaplan's intelligent role-based agents (Finance Analyst, Sales Analyst, Supply Chain Analyst, Workforce Analyst) are designed to live inside planning flows, not beside them. They monitor signals across revenue, expenses, margin, cash flow, and operational drivers continuously. When something changes, they don't just send an alert, they propose a response.
The OARE loop in practice looks like this:
- Anaplan’s Finance Analyst observes an unexpected spike in demand from a sales region.
- It reasons about the downstream implications (revenue forecast, margin impact, resource requirements).
- It acts by generating an updated scenario and initiating a what-if analysis.
- It evaluates whether the scenario adequately captures the change.
- And critically, it presents this to a human for decision.
This is the balance that works for enterprise planning. The agent carries the analytical lift. Humans set policies, approve adjustments, and ensure alignment with business strategy. Gartner frames this as roughly a 70/30 split: the vendor carries most of the logic and point of view out of the box; customers layer in their specific context over time. Nobody wants to build agent logic from scratch. They want agents that understand planning and can be tuned to their business.
The speed problem (and how CoModeler solves it)
This all sounds great… but there’s a caveat. The OARE loop only works if you can build and modify planning models fast enough to react to what agents discover. Traditionally, building sophisticated planning models took months. By the time you'd captured the business logic, validated the calculations, and deployed the model, the conditions that prompted it had changed. This is why so many planning processes feel perpetually behind.
Anaplan CoModeler changes this equation. Using natural language, business users can now build, extend, and optimize complex planning models in minutes rather than months. This isn't just about speed, it's about enabling the kind of rapid iteration that agentic planning requires. Think of it as the difference between writing code by hand and using an intelligent development environment. The agent and the model builder become collaborators. You can prototype quickly, explore scenarios, test assumptions, and discard what doesn't work without mourning six months of development time.
We refer to this as "vibe modeling," the planning equivalent of vibe coding. Models are meant to be explored and iterated on, like thinking tools, not monuments set in stone. This is a profound shift in how planning organizations operate, and it's only possible when model creation stops being a bottleneck.
The rise of AI Ops
We're increasingly seeing the emergence of "AI Ops" teams within planning organizations. These are new roles focused on creating, training, and governing agents and workflows. They're the evolution of the Centers of Excellence that many Anaplan customers have built over the years, but the work is different.
- Instead of building models, they're curating the knowledge and decision logic that agents use.
- Instead of running reports, they're designing the monitoring and evaluation frameworks that keep agents aligned with business intent.
- Instead of responding to requests, they're shaping how agents proactively surface insights.
This is where context capture becomes critical. The biggest gap in the market isn't accessing data, it's productizing how knowledge, domain expertise, and decision logic are captured, evaluated, and improved over time. The organizations that figure this out will have AI agents that genuinely understand their business. The ones that don't will have expensive tools.
Built with AI at the core
Anaplan wasn't retrofitted for AI. Our platform is built on a linear algebra engine — the same mathematics that powers neural networks and large language models. This architecture allows us to operate at a scale that makes real-time agentic planning possible: 2.1 million planning models in production, 7.3 petabytes of model storage, and average user interactions completed in 2.3 seconds.
But architecture alone isn't the point — it’s what it enables. Agents that observe your entire planning landscape, reason about implications across finance, supply chain, sales, and workforce, act within your existing workflows, and evaluate results continuously.
The companies navigating today's volatility successfully aren't the ones with the most AI features. They're the ones where AI is invisible: embedded so deeply into existing workflows that it feels less like a tool and more like a team of analysts who never sleep, always reason, and work inside the flow of decisions you're already making.
That's what makes an AI agent real.
In case you missed it, check out the last post in our AI blog series: Navigating the new frontier of AI: Anaplan’s blueprint for enterprise-grade security and trust