An architect’s view on AI-powered personalization in Edge Delivery Services
Delivering hyper-personalized experiences is a challenge for enterprise leaders who refuse to sacrifice speed for conversion. Adobe Edge Delivery Services (EDS) is built on the philosophy of making the web "Faster. Better. Period." and provides a foundation that prioritizes uncompromised performance. This guide explains how to build on that head start with an architecture for AI-powered personalization.
We’ll look at how to integrate semantic and vector-based backends so the right content reaches the right audience. The goal is to collect context, interpret it, act on it, and then learn from the results. We provide a technical blueprint for a setup that future-proofs your content strategy.
Focusing the personalization and AI strategy
Let's begin by defining our goal. While personalization covers everything from email retargeting to loyalty dashboards, the architecture we suggest is flexible enough to handle many of those scenarios. For the purpose of this guide, we’ll focus on the on-site experience as our reference.
Everyone wants to start integrating artificial intelligence, but meaningful and practical use cases are still emerging. That’s why we anchor our architecture in tangible outcomes rather than hype.
Our core objectives are to:
- Collect static data and content in a central index (database)
- Collect user context. This can range from a full Real-Time Customer Data Platform (RT-CDP) profile to server-side signals like IP-based geolocation or simply everything we have available via JavaScript APIs
- Use this behavioral and context data to filter and sort content presentation in real time
- Analyze performance data to learn and add additional feedback to the process
A pragmatic approach to implementation
We recommend focusing on established techniques so we can foresee results and achieve a fast time to market. You don't need a "big bang" launch or complex experiments to succeed with AI. The best results come from starting small, measuring the impact, and then scaling what works.
A platform for your static data
If you already have a modern site search, this part is very straightforward. Once we have your web content and any other relevant visitor data consolidated in one bucket, we can enable semantic capabilities.
If you need to set up something from scratch, a robust setup we have seen deliver great results is an Elasticsearch index to store the data. We won’t go into detail on Elasticsearch configuration here. We provide accelerators for Adobe Experience Manager (AEM) to automatically ingest website content, which significantly speeds up the process.
Whether you use a fully managed Search as a Service platform or prefer the self-managed flexibility that a Platform as a Service promises, what matters most is the ability to find by meaning rather than just matching keywords.
Why semantic search matters
With a classic keyword search, we would need to optimize the content to match our taxonomy. Semantic search bridges this gap for us.
If you search for "music," a traditional search engine returns zero results if that exact word isn't on the page. A semantic search powered by a vector database is smarter and understands the relationship between concepts. You can see this in action on the netcentric.biz website. If you scroll down, you will see results for the 'AEM Rockstar' event, even though there is no exact keyword match.
AI-enabled data interpretation
The data platform is the core of this setup. It provides classic keyword search and enables AI-driven vector search. Combining the two creates a powerful tool. If you already know the product category from a menu click, you can pre-filter the content before letting the AI step in to help.
Technically, an embedding model transforms the data into a vector (a numerical representation of the user's intent). This vector is used to query a vector database. The database compares the user's vector against your content vectors, mathematically calculating the "distance" between them. The items with the shortest distance and highest semantic similarity are returned as the search results.
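The distance calculation above can be sketched in a few lines. This is a minimal illustration using cosine similarity, assuming embeddings are plain arrays of numbers (real models produce hundreds of dimensions; a vector database does this at scale with approximate indexes):

```javascript
// Cosine similarity: 1.0 means identical direction (maximum semantic
// similarity), values near 0 mean unrelated concepts.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank content items by similarity to a query embedding, closest first.
function rankBySimilarity(queryVector, items) {
  return items
    .map((item) => ({ ...item, score: cosineSimilarity(queryVector, item.vector) }))
    .sort((x, y) => y.score - x.score);
}
```

With toy two-dimensional vectors, a query embedding close to a "music" item ranks it above an unrelated "finance" item even without any keyword overlap.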
Bringing it together
After your backend is ready, you need to ask the right question. For that, you need to form a context. Depending on the use case, different factors become important, like browsing history, recent orders, geolocation, user agent, device type, or the current page. While you need to feed the AI engine enough data, experience shows that excluding irrelevant data points can significantly improve the accuracy of results.
Combining the user context with content context means personalization stays relevant to the user's current location on the site. If a user is browsing a specific category, show them content that matches their interest. This works for both logged-in and anonymous users, as long as you can record and interpret the behavioral data.
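A context-assembly step can be sketched as follows. The field names here are assumptions for illustration; in a real deployment the signals would come from RT-CDP, server-side geolocation, or browser JavaScript APIs:

```javascript
// Hypothetical sketch: assemble the user context sent to the AI backend,
// keeping only the fields relevant to the current use case.
function buildContext(signals, relevantKeys) {
  const full = {
    currentPage: signals.currentPage,
    // Deduplicate recent categories from browsing history, capped at five.
    recentCategories: [...new Set((signals.history || []).map((h) => h.category))].slice(0, 5),
    geo: signals.geo ?? null,
    deviceType: signals.deviceType ?? null,
  };
  // Drop irrelevant or empty signals: excluding noise improves accuracy.
  return Object.fromEntries(
    Object.entries(full).filter(([key, value]) => relevantKeys.includes(key) && value != null),
  );
}
```

The explicit allowlist of relevant keys mirrors the advice above: a smaller, cleaner context tends to produce more accurate results than dumping every available signal into the query.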
Personalization in Edge Delivery
AEM Edge Delivery Services renders strictly optimized pages. In particular, the bootstrapping process prioritizes critical elements like Largest Contentful Paint (LCP). This logic, combined with the fast delivery infrastructure, allows EDS pages to achieve near-perfect Google Lighthouse scores.
Personalization should be done in the frontend, because:
- It cannot and should not be server-side cached, as this would mean sharing personalized data between different users.
- Personalization adds latency. By handling it on the frontend, you can render the static parts of the page instantly, so the user sees progress instead of a blank screen.
- As with everything, this isn't absolute. While some use cases favor server-side personalization, in our experience, they are the exception rather than the rule.
The existing plugin for EDS Experimentation already provides a mechanism for personalization and even provides a basic UI for the author to control it. For our personalization, this building block is definitely the right starting point and can be extended to integrate with our AI backend.
Avoid implementing personalization as a third-party script that manipulates the DOM in parallel to the actual page rendering logic. That approach will not scale and can negatively impact UX and performance.
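The client-side flow can be sketched like this. The `fetchPersonalized` callback (for example, a call to an API gateway endpoint) is an assumption; in practice this logic would hook into an EDS block's decoration step via the experimentation plugin:

```javascript
// Hypothetical sketch: the static block renders first with authored
// defaults, then we try to swap in personalized items.
async function personalizeBlock(authoredDefaults, fetchPersonalized) {
  try {
    const items = await fetchPersonalized();
    // Only replace the authored defaults when the backend returns results.
    return Array.isArray(items) && items.length ? items : authoredDefaults;
  } catch {
    // A failed call degrades gracefully: the static content stays visible.
    return authoredDefaults;
  }
}
```

Because the authored content is always a valid fallback, a slow or failing personalization backend never blocks the initial render or leaves the block empty.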
Building complexity on a solid foundation
The core architecture here is built to last, so you can integrate AI capabilities straight into your current stack. This architecture is highly flexible and fits into most existing environments.
The following diagram outlines the data flow:
Ingest
This triggers automatically on publish/unpublish events from your authoring environment. It manages the flow of content into the search index. You can feed that same index with data from multiple sources, like other CMS platforms, Adobe Commerce (Magento), or even external job boards. Keeping query and selection in one place reduces lag and removes complexity.
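The ingest step can be sketched as a mapping from publish events to index operations. The event shape and the `embed` callback are assumptions for illustration; in AEM the trigger would be the publish/unpublish pipeline:

```javascript
// Hypothetical sketch: map a publish/unpublish event onto an index operation.
function toIndexOperation(event, embed) {
  if (event.type === 'unpublish') {
    // Removing content from the index keeps stale pages out of results.
    return { op: 'delete', id: event.path };
  }
  const text = `${event.title}\n${event.body}`;
  return {
    op: 'index',
    id: event.path,
    // The embedding is computed at ingest time so queries stay fast.
    doc: { title: event.title, body: event.body, embedding: embed(text) },
  };
}
```

The same mapping works for any source feeding the shared index, whether that is AEM, another CMS, or a commerce catalog.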
Index
This content repository uses an embedding model to transform plain text into vector embeddings, preparing it for similarity search.
API Gateway
Acting as the traffic controller, the gateway handles search queries from the browser, transforming them into vector embeddings via inference before executing the search against your indexed vectors.
Hybrid search
Our hybrid search model ensures accuracy by prioritizing exact keyword matches while pulling in semantic context for broader relevance.
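A hybrid request body for Elasticsearch 8.x could look like the sketch below, assuming an index with a `body` text field and an `embedding` dense_vector field. The boost values are illustrative, not tuned:

```javascript
// Sketch of a hybrid search body: keyword relevance and approximate kNN
// scores are combined into a single ranking, with exact keyword matches
// weighted above purely semantic neighbors.
function buildHybridQuery(queryText, queryVector, k = 10) {
  return {
    size: k,
    // Classic full-text relevance for exact keyword matches.
    query: { match: { body: { query: queryText, boost: 1.5 } } },
    // Approximate kNN over the vector embeddings for semantic recall.
    knn: {
      field: 'embedding',
      query_vector: queryVector,
      k,
      num_candidates: k * 10,
      boost: 1.0,
    },
  };
}
```

The gateway would build this body after running the user's query text through the embedding model, so the same request covers both the keyword and the semantic path.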
Key areas for future extension
- Experience Platform integration: Combine this search architecture with Adobe Experience Platform’s (AEP) RT-CDP, identity providers (IDPs), or CRMs. This lets you enrich the real-time context with user profiles and their history.
- RAG for chatbots: Integrate a Large Language Model (LLM) with your content index using Retrieval Augmented Generation (RAG). You can deploy a custom chatbot that answers questions using only your enterprise content, keeping your brand safe and hallucinations low.
- Advanced e-commerce: Plug in external commerce services to support sophisticated product recommendations and personalized pricing.
This approach gives enterprises a practical starting point for implementing AI and personalization successfully. It ensures content is targeted effectively for every user interaction, without needing a massive overhaul on day one.
Ready to optimize your digital foundation?
Talk to our team today to discuss your architecture, explore new use cases for Edge Delivery Services, and start building a scalable, AI-powered foundation.
Frequently asked questions (FAQ)
Can you personalize Adobe Edge Delivery Services without losing performance?
There will always be some impact, but by using a hybrid architecture that combines client-side vector search with Edge Delivery Services (EDS), you can deliver hyper-personalized experiences at great speed. This approach executes personalization logic in the browser, allowing the static parts of the page to render instantly while dynamic content loads in parallel, maintaining near-perfect Lighthouse scores.
Does AI personalization require a full platform overhaul?
No. The architecture outlined in this guide allows you to integrate AI capabilities into your current stack. By using a decoupled API gateway and vector database, you can add semantic search and personalization layers to your existing Adobe Experience Manager (AEM) setup without needing a "big bang" replatforming project.
How does semantic or vector search improve on-site personalization?
Vector search transforms user behavior and content into mathematical representations (vectors). This allows the system to calculate the "distance" between a user's intent and your content catalog in real-time. The result is personalization that goes beyond simple segmentation, delivering content that is semantically relevant to the user's current context and journey.
Is client-side personalization better than server-side for Edge Delivery Services?
For EDS, client-side personalization is generally preferred to maintain cacheability and speed. Server-side personalization often prevents efficient caching, which can degrade performance. By handling personalization in the browser, you ensure the core page loads instantly from the edge, while personalized elements are injected dynamically, providing the best balance of speed and relevance.
What data fuels AI-powered personalization?
AI engines thrive on context. Key data points include browsing history, recent orders, geolocation, device type, and the current page context. However, data quality is more important than quantity; feeding the engine irrelevant data can introduce noise and reduce accuracy.