Building AI-powered search on dotCMS used to mean manual REST integration. With dotAI Search now built into our SDK, a single `client.ai.search()` call puts working semantic search in any headless dotCMS project in under 15 minutes. In the video above, Jalinson Diaz walks through setting up search for a headless site in just 3 minutes!
Sections
The problem: raw REST endpoints, no SDK, too much friction
Why this matters for developer velocity and competitive positioning
What dotAI Search in the SDK actually delivers
Real-world outcomes: what changes day-to-day
How to get started
The Problem: Every Project Starts from Scratch
Developers building headless apps on dotCMS have had access to dotAI's semantic search for a while. It's powerful, it works, and it surfaces content in context. But accessing it meant calling the REST endpoints directly.
That meant hand-rolling authentication headers, constructing request payloads, parsing responses, managing error states, and writing loading UI — for every project, every time. There was no SDK method, no reference component, no shared starting point.
The result was predictable: AI search was either skipped entirely to avoid the overhead or built as a one-off integration that was expensive to maintain, hard to document, and inconsistent across projects.
Why This Matters
AI search is no longer optional. Keyword search fails the moment users describe what they want instead of what they've memorized; AI search understands intent, not just exact terms, surfacing the right content even when the words don't match. As companies produce more content to stay visible in an era where AI is reshaping discovery, making that content findable by meaning, not just metadata, is table stakes.
The implementation friction carried a real cost measured in developer hours. Every hour spent on REST request plumbing was an hour not spent on the experience end users actually see. Multiply that across every customer project, and the compounding effect is significant.
There's also a support dimension: inconsistent hand-rolled implementations are harder to diagnose, harder to document, and harder to update when the underlying API evolves.
The Solution: client.ai.search()
dotAI Search in the SDK removes the integration layer entirely. The `@dotcms/client` SDK now includes a `client.ai.search()` method that handles authentication, request construction, and response parsing behind a clean, typed interface. You pass a query string and configuration; you get back search results.
```typescript
import { createDotCMSClient } from '@dotcms/client';

const client = createDotCMSClient({
  dotcmsUrl: 'https://your-instance.dotcms.cloud',
  authToken: process.env.DOTCMS_AUTH_TOKEN,
});

// Semantic AI search — one line
const results = await client.ai.search('tropical beach destinations', {
  indexName: 'my-content-index',
  threshold: 0.25,
  limit: 10,
});
```

For React developers, the dotCMS Next.js starter now ships a working AI search implementation built on a React hook pattern. Copy it, customize it, and ship it. Two modes are included out of the box:
Full-text search — a search input that queries your content index and returns ranked, semantically relevant results with loading and error state handling built in.
Related content — a sidebar pattern that surfaces similar content based on the current page's content identifier, no search input required.
Both patterns support analytics callbacks (`onSearch`, `onResultClick`) so you can track usage without adding external tooling. Angular support is on the roadmap for a future release.
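To make the hook pattern concrete, here is a minimal sketch of the core logic as a plain async helper: run a search, toggle loading state, and fire an analytics callback. The `AISearchClient` interface, the `onError` callback, and the helper name are illustrative assumptions, not the starter's actual types; only the `client.ai.search()` signature follows the SDK example above.

```typescript
interface SearchResult {
  identifier: string;
  title: string;
  score: number;
}

// Structural stand-in for the SDK client, assuming the search
// signature shown earlier (query string + options object).
interface AISearchClient {
  ai: {
    search(
      query: string,
      opts: { indexName: string; threshold?: number; limit?: number }
    ): Promise<SearchResult[]>;
  };
}

interface SearchCallbacks {
  onSearch?: (query: string) => void; // analytics hook, fired once per query
  onError?: (err: unknown) => void;   // hypothetical error callback
}

// Core of a useAISearch-style hook: report loading state around the
// request, fire analytics, and fall back to an empty list on failure.
async function runAISearch(
  client: AISearchClient,
  query: string,
  indexName: string,
  setLoading: (loading: boolean) => void,
  callbacks: SearchCallbacks = {}
): Promise<SearchResult[]> {
  setLoading(true);
  callbacks.onSearch?.(query);
  try {
    return await client.ai.search(query, { indexName, threshold: 0.25, limit: 10 });
  } catch (err) {
    callbacks.onError?.(err);
    return []; // keep the UI stable; the error callback owns reporting
  } finally {
    setLoading(false);
  }
}
```

In a React component, `setLoading` would be the setter from `useState` and the function would run inside an effect or event handler; keeping the logic in a plain function like this makes it easy to test with a stubbed client.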
Real-World Outcomes
Under 15 minutes from zero to working AI search — install the latest SDK, configure your dotAI index, call `client.ai.search()`. The Next.js reference implementation handles the rest.
Consistent patterns across every project — no more per-project REST integrations. Every team builds from the same method and the same reference examples.
Demo-ready AI search for Sales Engineers — any headless dotCMS setup can now show AI search in a sales cycle without custom setup work.
PDF and file content is searchable — the dotAI backend indexes file assets including PDFs, so document-heavy sites get full AI search coverage without workarounds or custom extractors.
Lower support burden — standardized SDK usage means issues follow predictable patterns, making them easier to diagnose and faster to resolve.
How to Get Started
dotAI Search in the SDK is available now for all Evergreen headless customers at no additional charge. Here's the path to a working implementation:
Enable dotAI on your dotCMS instance and configure an OpenAI API key in the dotAI app settings.
Create an embedding index in Dev Tools → dotAI → Manage Embeddings/Indexes. Configure which content types should be included and build the index.
Update to the latest `@dotcms/client` SDK: `npm install @dotcms/client@latest`.
Call `client.ai.search()` in your application, or copy the search implementation from the dotCMS Next.js starter as a starting point.
Note: the current release supports OpenAI only. Multi-vendor support (bringing AI search to all configured dotAI model providers) is targeted for end of Q1 2026.
Conclusion
AI search in a headless app used to mean a day of integration work before you wrote a single line of product logic. `client.ai.search()` changes the equation: one method, a working reference implementation, and you're shipping.
dotAI Search in the SDK is the start of a broader SDK-first approach to dotAI capabilities: making AI features as easy to reach as any other CMS operation. We'll expand SDK coverage to additional dotAI capabilities in upcoming releases.
To get started, see the documentation for `@dotcms/client` and the AI Search implementation in the dotCMS Next.js starter. If you're an Evergreen customer and want help setting up your first index, reach out to your CSM.