Agentic AI acts on a user's behalf: calling functions, reading structured data, chaining operations across systems without waiting for a human click. For enterprise content teams, the shift from AI assistants to autonomous agents brings a new set of architectural requirements, not an upgrade. Just two years ago, the question was whether AI could write a product description. Today, the question is whether your content infrastructure can be called by an autonomous agent that has never opened a browser.
That is a different problem.
AI assistants work inside the tools humans use. They suggest. Humans approve. Agentic AI is a different class of system. Agents call functions, read structured data, and chain operations across systems without waiting for a human to click "next." According to a June 2025 Gartner press release, by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024. A separate Gartner forecast from August 2025 puts task-specific AI agents in 40% of enterprise apps as early as 2026.
The organizations best positioned for this shift are not the ones with the most AI-enabled functions. They are the ones that build their content infrastructure to be used by agents.
Most enterprise CMS platforms were not built for this. They were designed for human users navigating menus, filling forms, and clicking publish. They bolt on AI features. They don't expose their functions as tools. They were not designed to be orchestrated by an external agent that needs to create, translate, govern, and publish content at scale without touching a browser.
At dotCMS, we are designed for exactly that.
At a Glance
Gartner projects 40% of enterprise apps will embed task-specific AI agents by 2026, up from under 5% in 2025.
Most enterprise CMS platforms bolt AI onto a human-first UI; agents need schemas, events, policy, and stable contracts.
dotCMS uses a two-layer architecture: Layer 1 = AI embedded in the CMS for editors; Layer 2 = the CMS exposed as callable tools for agents.
The dotCMS MCP Server gives agents live context (content types, fields, workflow states, permissions) before they act — eliminating schema guessing and hallucinated field names.
Governance travels with every agent call: permission inheritance, full audit trails, brand voice policy, and multi-provider model routing.
The composable “spine” architecture — portable content model, event stream, codified policy, stable contracts — is exactly what agents need to operate reliably.
A Two-Layer Architecture Offers Flexibility
In short, the dotCMS two-layer architecture splits AI into what editors use inside dotCMS (Layer 1) and what agents call against dotCMS (Layer 2), sharing the same content, governance, and contracts. This is a deliberate design that serves both AI as an assistant and AI as an autonomous agent.
Layer 1 is AI embedded inside dotCMS — content and image generation, translation, semantic search, and workflow automation built natively into the platform. This layer makes content teams faster today.
Layer 2 is dotCMS made available to AI — every function, content type, workflow, and asset exposed as a callable tool through the MCP Server, dotCLI, and agent-consumable APIs. This layer makes dotCMS the content backbone for autonomous agents in end-to-end workflows.
The critical point: Layer 1 capabilities become Layer 2 tools. The same generation, translation, and workflow functions a content editor uses through the UI are the exact functions an autonomous agent calls programmatically via MCP. You build one content infrastructure that serves both humans and agents — on the same data, governed by the same rules, through the same contracts.
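The "one capability, two entry points" idea can be sketched in a few lines. This is a minimal illustration, not the actual dotCMS implementation; all names (`translate_content`, `TOOL_REGISTRY`, `handle_tool_call`) are invented for the example.

```python
# Hypothetical sketch: one underlying content function, reachable both
# from an editor UI and as an agent-callable tool. Names are illustrative.

def translate_content(content_id: str, target_locale: str) -> dict:
    """The single underlying capability (Layer 1 logic)."""
    return {"id": content_id, "locale": target_locale, "status": "translated"}

# Layer 1: the editor UI invokes the function directly.
ui_result = translate_content("blog-123", "de-DE")

# Layer 2: the same function is registered as a tool agents call by name.
TOOL_REGISTRY = {
    "translate_content": {
        "fn": translate_content,
        "schema": {"content_id": "string", "target_locale": "string"},
    }
}

def handle_tool_call(name: str, args: dict) -> dict:
    """An MCP-style dispatcher: agents call tools by name with typed args."""
    tool = TOOL_REGISTRY[name]
    return tool["fn"](**args)

agent_result = handle_tool_call(
    "translate_content", {"content_id": "blog-123", "target_locale": "de-DE"}
)
assert ui_result == agent_result  # same capability, same behavior
```

The point of the sketch: there is no second "AI code path." The agent dispatcher and the UI both terminate in the same function, so governance applied there applies to both.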
Layer 1: AI Across the Content Lifecycle
dotAI is built into the content lifecycle — not bolted on but available wherever content is created, reviewed, and published.
Current Layer 1 capabilities include:
AI-assisted content generation through custom fields and natively in the Block Editor.
Image generation and automatic alt-text tagging via workflow.
Asynchronous translation at scale.
Vector-based semantic search with conversational query support.
Configurable AI workflow sub-actions for batch enrichment and tagging.
Our 2026 roadmap extends this meaningfully. Multi-Provider Support (Q2 2026) removes the OpenAI-only constraint. Brand Voice and Content Standards configuration (Q3 2026) lets admins define tone and vocabulary rules that automatically govern every AI-generated output across the platform. A Content Quality Agent (Q4 2026) will surface SEO gaps, metadata issues, and brand voice violations during editing, before content reaches a reviewer.
This is Layer 1 — AI where editors already work, governed from the start, instrumented for audit. But it is not what makes dotCMS agent-ready.
The Composable Spine: Why Architecture Is the Real Differentiator
The agentic era is a validation of composable architecture, not a disruption of it.
The "spine" concept dotCMS has articulated positions the CMS as a thin, durable layer for content, events, policy, and orchestration — not a monolithic DXP that tries to own every function. The spine was designed to survive the next paradigm. AI agents are that paradigm, and here's why the architecture maps directly.
A composable spine has four structural requirements: a portable content domain model, an observable event stream, codified policy and governance, and stable orchestration contracts. These are precisely what an AI agent needs to operate effectively:
Portable content domain model. Structured content types, explicit field definitions, and predictable schemas give an agent the context to act without hallucinating the architecture.
Observable event stream. Agents operate on state. They need to know what happened — workflow transitions, publish events, content changes. An event-driven system exposes this natively.
Codified policy and governance. Agents inherit permissions from the system they call into. Governance expressed as policy travels with every programmatic action. Governance enforced only by UI friction does not.
Stable orchestration contracts. Brittle APIs and undocumented schemas are manageable inconveniences for humans. For agents, they are failure modes.
"Model once, deliver anywhere" was always about structured content reaching websites, apps, and partner APIs without channel-specific forks. In the agentic era, agents are consumers too. A content type defined in dotCMS is readable by an editor, a headless frontend, a search index, and an AI agent orchestrating a content pipeline — through the same schema, the same API, the same governance layer.
Layer 2: dotCMS as an Agent-Ready Platform
This is the layer that defines the next five years.
The right question for an enterprise architect is not "does our CMS have AI features?" It is: Can an autonomous agent create content, move it through a governance workflow, translate it, publish it to the right channels, and return a structured result — without a human touching a browser?
dotCMS answers yes. Here is the architecture behind that answer.
The MCP Server
The Model Context Protocol (MCP) Server is the primary interface between AI agent harnesses and dotCMS. It is not an API wrapper. It is a structured context layer that gives agents full visibility into the dotCMS environment and operational capabilities within it: content types, field definitions, workflow states, site configurations, permissions, and audit state are all readable before any action is taken.
Why does context matter? A generic LLM operating against an unfamiliar CMS guesses at the schema, hallucinates field names, and attempts operations that fail silently or produce malformed content. The dotCMS MCP Server eliminates the guessing. An agent knows exactly what content types exist, what fields they carry, what workflow transitions are valid from the current state, and what its permission scope allows.
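The difference between guessing and knowing can be shown with a small validation sketch. The schema and field names below are invented for illustration; in practice the agent would read the real schema through the MCP Server rather than a hardcoded dict.

```python
# Illustrative sketch (not the real MCP payloads): schema context turns
# "guess and fail silently" into "validate before acting".

CONTENT_TYPE_SCHEMA = {  # what an agent would read via the MCP Server
    "BlogPost": {"title": "text", "body": "block", "publishDate": "date"},
}

def validate_payload(content_type: str, payload: dict) -> list[str]:
    """Return the field names the schema does not recognize."""
    known = CONTENT_TYPE_SCHEMA[content_type]
    return [field for field in payload if field not in known]

# Without context, an agent might invent a field like "headline":
bad = validate_payload("BlogPost", {"headline": "Hi", "body": "..."})
good = validate_payload("BlogPost", {"title": "Hi", "body": "..."})
print(bad, good)  # ['headline'] []
```

A generic LLM fails at exactly this step: it has no `CONTENT_TYPE_SCHEMA` to check against, so the malformed payload ships.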
Current MCP Server capabilities cover the full content operations lifecycle:
Query and audit content and content types
Create and modify content type schemas
Create, update, and search content
Move content through the complete workflow lifecycle: draft → review → publish → unpublish → archive
Manage site configurations and permissions
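The workflow lifecycle above behaves like a state machine: from any state, only certain transitions are legal, and the MCP Server reports which. The sketch below uses the article's lifecycle stages as states; the transition names are invented for illustration and are not the actual dotCMS workflow actions.

```python
# Minimal state machine over the lifecycle stages named in the article.
# Transition names are illustrative, not actual dotCMS workflow actions.

TRANSITIONS = {
    "draft":     {"send_for_review": "review"},
    "review":    {"approve": "publish", "reject": "draft"},
    "publish":   {"unpublish": "unpublish"},
    "unpublish": {"archive": "archive", "republish": "publish"},
    "archive":   {},
}

def valid_actions(state: str) -> list[str]:
    """What a context layer would report as legal next steps."""
    return sorted(TRANSITIONS[state])

def transition(state: str, action: str) -> str:
    """Refuse illegal transitions instead of failing silently downstream."""
    if action not in TRANSITIONS[state]:
        raise ValueError(f"{action!r} is not valid from {state!r}")
    return TRANSITIONS[state][action]

state = "draft"
for action in ("send_for_review", "approve", "unpublish", "archive"):
    state = transition(state, action)
print(state)  # archive
```

An agent that can query `valid_actions` before acting never attempts an illegal transition; an agent without that context discovers the rules only by failing.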
dotCMS is actively expanding the MCP Server's function surface — more operations exposed as tools, broader API coverage, deeper workflow integration. The goal is explicit: every operation available through the UI should be callable by an agent through the MCP Server.
dotCLI
The dotCLI provides command-line native access to dotCMS for agents that operate via shell — common in coding agents, infrastructure pipelines, and CI/CD automation. Schema migrations, content imports, and publishing operations are fully scriptable. The most capable agent harnesses (Cursor, Claude Code, GitHub Copilot Workspace) operate in terminal-native environments. A content platform without CLI access is one those agents work around.
Agent-Consumable APIs
dotCMS's REST and GraphQL APIs are designed for programmatic consumption: predictable schemas, stable versioning, explicit authentication, and structured error responses. The API investment that powered the headless era now powers the agentic era — the same API a React frontend calls to retrieve content is the same API an orchestration agent calls to create, update, and publish at scale.
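What "designed for programmatic consumption" looks like in practice is a request any process can construct without a browser session. The sketch below builds such a request; the endpoint path, host, and field names are illustrative assumptions, not copied from the dotCMS documentation.

```python
# Hedged sketch: build the same GraphQL request an agent or a frontend
# would send. URL path and field names are illustrative, not official.
import json

def build_graphql_request(content_type: str, fields: list[str], limit: int) -> dict:
    # Collection-style query over a content type; shape is illustrative.
    query = f"{{ {content_type}Collection(limit: {limit}) {{ {' '.join(fields)} }} }}"
    return {
        "url": "https://example.com/api/v1/graphql",  # hypothetical host/path
        "headers": {
            "Authorization": "Bearer <token>",         # explicit auth, no session
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query}),
    }

req = build_graphql_request("Blog", ["title", "urlTitle"], limit=5)
print(req["body"])
```

Nothing here depends on who the caller is: a React app, a CI job, and an orchestration agent all send the same structured request and parse the same structured response.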
What Agentic Architecture Unlocks
Autonomous Content Operations
A marketing team managing a product launch across 14 markets runs a pipeline that lives on email and spreadsheets. Brief, draft, review, legal, translation, regional adaptation, publish — weeks of handoffs, with the content team coordinating rather than creating.
With dotCMS as the agentic backbone, an agent receives the brief, queries the content type schema via MCP, generates structured drafts through dotAI, runs brand voice validation, routes through the approval workflow, triggers translation for 13 additional markets, and queues each variant for regional publication. The content team's role shifts from managing the pipeline to reviewing the exceptions.
This is Layer 1 and Layer 2 operating together: generation and translation via dotAI, workflow operations via MCP, governance baked in throughout.
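The pipeline above compresses into a sequence of chained calls. Every function and market code in this sketch is a stand-in; in practice each step would be an MCP tool call or a dotAI operation, and the market list would come from configuration.

```python
# Compressed sketch of the launch pipeline as an agent would chain it.
# All names are illustrative stand-ins for MCP/dotAI operations.

MARKETS = ["de-DE", "fr-FR", "ja-JP"]  # stand-ins for the additional markets

def run_launch_pipeline(brief: str) -> dict:
    log = []
    schema = {"title": "text", "body": "block"}            # 1. query schema via MCP
    log.append("schema_read")
    draft = {"title": brief, "body": "...", "state": "draft"}  # 2. generate via dotAI
    log.append("draft_generated")
    assert set(draft) >= set(schema)                       # 3. validate against schema
    log.append("validated")
    draft["state"] = "approved"                            # 4. route through workflow
    log.append("workflow_approved")
    variants = [dict(draft, locale=m) for m in MARKETS]    # 5. translate per market
    log.append(f"translated_{len(variants)}")
    return {"queued": len(variants) + 1, "log": log}       # 6. queue for publication

result = run_launch_pipeline("Q3 Product Launch")
print(result["queued"])  # 4
```

The `log` is the part that matters for the content team: exceptions surface as entries to review, while the happy path runs without a handoff.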
Developer Scaffold Agent
A developer building a new content model for an enterprise client is working inside Cursor or Claude Code. Without context, the agent guesses at field names and generates components that fail on the first run. With the dotCMS MCP Server, the agent reads existing content type schemas, understands the data model, creates the new content type via MCP, and generates accurate frontend components before the developer writes a line of code. Research from GitHub and Microsoft has documented up to 55% faster task completion when developers use AI coding tools — the key variable being how much relevant context the agent has. The MCP Server is what delivers that context.
Multi-Market Localization Pipeline
A regulated financial services company needs compliance-sensitive content published across 12 markets in a 48-hour window. The current process is a two-week cycle: translation vendor handoff, regional legal review, per-market publishing. With dotCMS, an agent takes content approved in the primary market, calls dotAI translation via MCP, applies market-specific brand voice rules, routes translated variants to regional review queues, and publishes after approval — with a complete, native audit trail recording which model produced which output, which rules governed it, and who approved it.
The Enterprise Case: Governance Is the Architecture
AI agents introduce a governance challenge that feature-level AI does not. A content editor using an AI suggestion is accountable for the output. An autonomous agent acting without human review is not accountable in the same way. This is the primary objection enterprise buyers raise when evaluating agentic workflows — and it is the right objection.
dotCMS's response is structural, not a feature layer added on top.
Permission inheritance. An agent operating via the MCP Server inherits the same role-based access controls governing human editors. It cannot publish what a human with the same token scope cannot publish. Governance is not applied differently because the actor is an agent.
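The inheritance rule reduces to one property: authorization checks the role on the credential, never the kind of actor holding it. The role and permission names below are invented for the sketch.

```python
# Sketch of permission inheritance: one RBAC check for humans and agents.
# Role and permission names are made up for illustration.

ROLE_PERMISSIONS = {
    "contributor": {"content.read", "content.edit"},
    "publisher":   {"content.read", "content.edit", "content.publish"},
}

def can(actor: dict, permission: str) -> bool:
    """Same check for a human session or an agent token; 'kind' is ignored."""
    return permission in ROLE_PERMISSIONS[actor["role"]]

human = {"kind": "human", "role": "contributor"}
agent = {"kind": "agent", "role": "contributor"}

# An agent cannot publish what a human with the same scope cannot publish.
print(can(human, "content.publish"), can(agent, "content.publish"))  # False False
print(can(agent, "content.edit"))                                    # True
```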
Audit trails. Every workflow action is recorded: state transitions, actors, timestamps. The Admin Observability and Governance feature (Q4 2026) adds AI-specific activity dashboards, real-time activity feeds, and brand voice enforcement reporting — making AI actions as visible and auditable as any other content operation.
Multi-provider governance. Multi-Provider Support lets enterprises route AI operations to approved model providers — Azure OpenAI for financial services and healthcare, AWS Bedrock for AWS-compliant organizations, Google Vertex for global enterprise deployments. dotCMS is the governance layer. The model is a configuration, not an architectural constraint.
Brand voice as policy. Brand Voice and Content Standards configuration means every AI output — from any model, through any channel — is checked against the same centrally managed ruleset. Gartner projects AI governance platforms will become a $492 million market in 2026, growing to over $1 billion by 2030, driven by regulatory pressure and enterprise risk requirements. Organizations building on platforms without native AI governance will be retrofitting compliance. dotCMS has it in the architecture.
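"Brand voice as policy" means the ruleset lives in one place and every output passes through it, whichever model produced the text. The rules below are invented examples of what such a ruleset might encode.

```python
# Sketch: brand voice as a centrally managed ruleset applied to every AI
# output regardless of provider. The rules themselves are invented examples.

BRAND_POLICY = {
    "banned_terms": ["cheap", "world-class"],
    "max_sentence_words": 30,
}

def check_brand_voice(text: str) -> list[str]:
    """Return policy violations; an empty list means the output passes."""
    violations = []
    lowered = text.lower()
    for term in BRAND_POLICY["banned_terms"]:
        if term in lowered:
            violations.append(f"banned term: {term}")
    for sentence in text.split("."):
        if len(sentence.split()) > BRAND_POLICY["max_sentence_words"]:
            violations.append("sentence too long")
    return violations

print(check_brand_voice("A world-class deal."))  # ['banned term: world-class']
print(check_brand_voice("Clear and concise."))   # []
```

Because the check sits in the platform rather than in a prompt, swapping Azure OpenAI for Bedrock changes the model, not the policy.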
Conclusion: The Spine Is Ready for the Agentic Age
The spine concept was always a bet that organizations needed a thin, well-governed content layer — composable, stable, observable, built on contracts that hold under change. It was designed to survive the next paradigm without a rewrite.
AI agents are the paradigm. They expose every brittle API, every undocumented schema, every piece of governance that existed only as UI friction. The platforms that survive this test are the ones whose architecture is already oriented toward composability and programmatic access.
dotCMS built that architecture before agents were a mainstream concern. The content types that structured your web content are the same schemas agents read. The workflows that govern your editorial process are the same tools agents call via MCP. The APIs that power your headless frontends are the same APIs agents use to build, translate, and publish at scale.
Models will commoditize. What will not commoditize is a content infrastructure designed to put any capable agent to work on governed data, through stable contracts, with a complete audit trail.
FAQ
Q1. What is agentic AI, and how is it different from an AI assistant?
An AI assistant works inside human tools — it suggests, and a human approves. Agentic AI is autonomous: agents call functions, read structured data, and chain operations across systems without waiting for a human click. For a CMS, that shift changes the architectural requirements from “UI with AI features” to “system of tools agents can call.”
Q2. What makes a CMS “agent-ready”?
An agent-ready CMS exposes every important function — content creation, workflow, translation, publishing, governance — as a callable tool with a stable schema, observable events, enforced policy, and an audit trail. In practice, that means an MCP Server or equivalent, a CLI, predictable REST/GraphQL APIs, and permissions that apply equally to humans and agents.
Q3. What is the dotCMS MCP Server?
The dotCMS MCP (Model Context Protocol) Server is a structured context layer between AI agents and dotCMS. It exposes content types, field definitions, workflow states, site configurations, permissions, and audit state so agents know exactly what exists and what they are allowed to do before they act. It prevents the schema guessing and hallucinated field names that cause generic LLMs to fail against unfamiliar CMS platforms.
Q4. What are dotCMS’s two AI layers?
Layer 1 is AI embedded inside dotCMS for editors — content and image generation, translation, semantic search, and workflow automation in the native UI. Layer 2 is dotCMS exposed to AI — the same functions made callable by autonomous agents through the MCP Server, dotCLI, and agent-consumable APIs. The same capability serves both, governed by the same rules.
Q5. How does dotCMS govern AI agents?
Governance is structural, not a feature. Agents inherit the same role-based permissions as humans; every action is recorded in a native audit trail; multi-provider support lets enterprises route AI operations to approved model providers (Azure OpenAI, AWS Bedrock, Google Vertex); and Brand Voice and Content Standards apply a single policy ruleset to every AI output regardless of model or channel.
Q6. What is a composable “spine” in content architecture?
A composable spine is a thin, durable CMS layer — built on portable content schemas, an observable event stream, codified policy, and stable orchestration contracts — rather than a monolithic DXP that owns every function. The four requirements of a spine are also the four things an AI agent needs to operate reliably, which is why composability maps directly to agent-readiness.
Q7. Which AI providers does dotCMS support?
Today dotCMS integrates with OpenAI. Multi-Provider Support is on the 2026 roadmap (Q2 2026) and will let enterprises route AI operations to Azure OpenAI, AWS Bedrock, Google Vertex, and other approved providers — with dotCMS acting as the governance layer so the model choice becomes a configuration, not an architectural constraint.
Q8. What does the 2026 dotCMS AI roadmap include?
Multi-Provider Support (Q2 2026), Brand Voice and Content Standards configuration (Q3 2026), and a Content Quality Agent plus Admin Observability and Governance (Q4 2026) that surfaces SEO gaps, metadata issues, and brand voice violations during editing and provides AI-specific activity dashboards and enforcement reporting.
Q9. How big is the market shift toward agentic AI?
Gartner projects that by 2028, 33% of enterprise software applications will include agentic AI (up from less than 1% in 2024), and that 40% of enterprise apps will feature task-specific AI agents by 2026 (up from under 5% in 2025). Gartner also forecasts the AI governance platform market will reach $492 million in 2026 and exceed $1 billion by 2030.
Q10. Why can’t most enterprise CMS platforms support autonomous agents today?
Most were designed for human users navigating menus and clicking publish. They add AI features to a human-first UI rather than exposing functions as callable tools. They lack predictable schemas, observable events, agent-aware permission scopes, and CLI access — all of which agents need to operate without human supervision.