8 mins read

Navigating the new frontier of AI: Anaplan’s blueprint for enterprise-grade security and trust

Anaplan’s platform was built with AI at the core, so that it is not only intelligent, but also responsible, explainable, governed, and above all, secure.

Woman wearing glasses looking at a tablet

The rise of advanced AI capabilities represents a paradigm shift in how businesses operate, innovate, and compete. As technical leaders, we are tasked with harnessing its transformative power to drive efficiency and unlock new insights, yet this potential is shadowed by new and complex risks. For enterprises with vast amounts of sensitive financial and operational data, the adoption of AI cannot be a leap of faith… it must be a deliberate, strategic, and secure journey.

At Anaplan, AI isn’t an “add-on”, but rather built into our core, with a platform that provides a secure, governed, and high-performance environment for planning. The security of your data and the integrity of your planning processes are critical, so as we integrate even more powerful AI capabilities into our platform, our commitment to our customers has only deepened. Our approach is not about being first to market with every innovation; it’s about being the most trusted.
 

The adoption of AI cannot be a leap of faith… it must be a deliberate, strategic, and secure journey.

The new threat landscape: AI can be a force multiplier for risk

The same power that makes AI a remarkable tool for business also makes it a potentially dangerous instrument for those with bad intentions. Historically, when a new common vulnerability and exposure (CVE) was announced, security teams might have had weeks or even months before seeing it exploited at scale. Today, with AI-powered code generation tools, a threat can weaponize a vulnerability in a matter of hours, dramatically shrinking the window for defense.

Beyond accelerating existing threats, AI introduces new threats like:

  • Prompt injection and poisoning: Maliciously crafted inputs designed to trick a large language model (LLM) into ignoring its safety instructions, revealing sensitive information, or executing unintended commands.
  • Data leakage: When proprietary or sensitive customer data is passed to a third-party LLM and inadvertently incorporated into its foundational knowledge base, exposing it to the outside world.
  • Shadow AI: The adoption and use of unsanctioned AI tools or public models outside of approved IT and security controls. This can create blind spots where data usage, model behavior, and access permissions can’t be monitored, governed, or audited, and sensitive data could be compromised.

Navigating this landscape requires a security philosophy that is embedded at the deepest level of the platform. It requires a shift from reactive defense to a proactive, security-first design.

Our foundational security principles: Don’t leave trust up to an assumption

At Anaplan, our AI security strategy begins with the same foundational principles that govern our entire platform: zero trust and least-privilege access. When you are dealing with multi-tenant LLMs that act as powerful inference engines, you cannot make assumptions about trust.

Role-based access control (RBAC) is fundamental. Every query that interacts with an LLM, and every API call that the model might make in response, must be executed within the strict context of the user making the request. The LLM itself has no standing permissions; it inherits the exact, limited permissions of the user for the duration of a single, transient transaction. This ensures that no user can ever use AI to see or access data beyond their established privileges. We are extending this principle by developing dedicated AI administrator roles, giving our customers granular control over how AI is configured and used within their environment.

This is coupled with rigorous, AI-aware API monitoring and gateways. We understand that this is a fast-moving space and that new threats will constantly emerge, therefore, continuous monitoring of access patterns, data flows, and usage is not just a best practice; it is an operational necessity to detect and mitigate anomalous behavior in real time.

Anaplan’s architecture: Secure by design

Our principles are embedded into an architecture designed from the ground up to isolate risk and protect all data. We have made deliberate, thoughtful choices to ensure our AI capabilities are as secure as they are powerful.

1. Use of retrieval-augmented generation (RAG)

A critical distinction in our approach is that we do not train third-party LLMs on our customers’ data or use your sensitive business information to expand a model’s general knowledge. 

Instead, our AI agents, like Anaplan CoModeler, operate using RAG. When a user asks a question, we retrieve a relevant, but limited, set of data (the context) and provide it to the LLM along with the user’s query. This context is based on the user's permissions, Anaplan best practices, and specific data from a model relevant to the question asked. The LLM uses this context to formulate a precise and relevant answer. Once the answer is delivered, the context is gone. The data is entirely transient. It exists only for the duration of the query and is immediately purged once the answer is provided. This allows us to leverage the reasoning power of LLMs without ever putting your data at risk of being absorbed into a persistent knowledge base. 

In the future, we may explore training specialized models on platform metadata. For instance, using performance telemetry to teach CoModeler to build more efficient calculations, but this will never include your core business data. Your data is yours alone.

2. A fortified and isolated environment

We treat the services that connect our platform to LLMs as highly sensitive components. That is why we have architected a multi-layered, defense-in-depth environment. Our proprietary Model Context Protocol (MCP) servers, which help align prompts with our underlying APIs, are not exposed to the public internet. They are "sandwiched" within a perimeter network segment (a demilitarized zone or DMZ-like environment) isolated by gateways and subject to intense monitoring.  

We are rolling out advanced, AI-based gateways specifically designed to inspect and secure traffic flowing to and from AI services. This provides a delineated, ring-fenced integration point that gives us holistic control and auditability over every interaction, effectively air-gapping the core of our platform from these external services.

3. A thoughtful, standards-based approach

You may see organizations that are quick to adopt every emerging AI trend, even if it means taking shortcuts. At Anaplan, we’re not focused on being “fast,” we’re focused on getting it right and protecting our customers’ most critical planning data. We have a responsibility to ensure we are adopting technologies that are proven, stable, and secure. 

Because of this, we leverage industry-wide standards, such as agent-to-agent (A2A) protocol for communication between AI agents. By aligning with protocols adopted by major hyperscalers, we ensure that our technology stack benefits from the collective security research and best practices of the entire industry. This thoughtful and responsible approach allows us to deliver innovation at pace, without gambling on unproven architectures that could introduce unacceptable risk.

4. Continuous validation through red teaming

Security by design is only effective if it is continuously tested. That’s why red teaming is a core part of Anaplan’s AI security posture and operating process. We proactively simulate real-world scenarios against our AI systems (prompt injection, data exfiltration attempts, permission bypasses, and abuse of agent workflows, etc.) to identify weaknesses before they can be exploited. 

These exercises are conducted regularly and embedded into our development lifecycle, not treated as one-time events. Findings are fed directly back into architectural controls, gateway policies, prompt hardening, and access enforcement. This ensures our defenses evolves with the changing threat landscape and that our AI capabilities are validated against the same tactics attackers would use in the wild.

Delivering secure AI to our customers

For users leveraging our out-of-the-box Anaplan Intelligence features, such as CoModeler and our intelligent role-based analysts, the security is built-in and managed entirely by us. You simply use your standard Anaplan role-based access controls (RBACs) to assign user permissions, and we handle the rest — from securing the LLM provider, to preventing prompt injection and ensuring data transience. 

For our more mature customers who want to build their own agentic workflows, we offer Anaplan Agent Studio. Agent Studio is the single-entry point for bringing custom agents and even your own LLMs into the Anaplan ecosystem. Every change is audited, and every interaction is controlled, giving you the power to innovate within a secure and managed framework. 

Finally, we are firm believers in keeping a human-in-the-loop. LLMs are probabilistic, not deterministic. They can make mistakes and hallucinate. Our intention, even with autonomous agents, is to ensure that a human is always there in the workflow to validate, approve, and make the final business decision.

Anaplan’s commitment: AI you can trust

The journey into AI-powered enterprise planning is a marathon, not a sprint. It requires a partner who is as committed to the security of your data as you are. Anaplan AI's decision making is fully transparent, allowing customers to understand, trust, and verify its outputs. This is made possible because we built our platform with AI at the core, so that it is not only intelligent, but also responsible, governed, and above all, secure. 

We are centralizing our AI teams, investing in talent and continuous education, in specialized security infrastructure, and designing our architecture with a security-first mindset so you can confidently embrace the future of AI-driven planning. Our commitment is to deliver the power of AI with the peace of mind that comes from knowing your platform, and your data are protected by an enterprise-grade security strategy.

Missed the last post in our AI blog series? Check out Planning reimagined: What AI means for the way we model, decide, and act.


Learn more about our Anaplan's AI capabilities.