Where a typical on-premise or ‘self-managed’ CMS is installed and run on-site, AEM as a Cloud Service changes this architecture fundamentally. Because it was born as a cloud-native system, it gives companies more capability than on-premise or self-managed deployments while also reducing complexity and cost. It does this in three main ways.
Firstly, because the new cloud-native architecture leverages more automation, processes that were time-consuming and inefficient in an on-premise CMS or DAM deployment become much easier. For example, with AEM as a Cloud Service the upgrade cycle is turned around entirely. Instead of receiving a service pack every three months, you get weekly updates from Adobe that have already been tested against your code, so they don’t break anything in your environment. If an update does surface a problem with your own code, you see the breakage and fix it before the change ever reaches production.
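The update gate described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Cloud Manager API: the release string, the `promote_if_green` function, and the sample tests are all invented names standing in for the real pipeline.

```python
# Hypothetical sketch: a weekly update is promoted to production only
# when the customer's own test suite passes against it. Names and the
# release string are illustrative, not real Cloud Manager identifiers.

def run_tests(release: str, tests) -> bool:
    """Run every customer test against the candidate release."""
    return all(test(release) for test in tests)

def promote_if_green(release: str, tests) -> str:
    """Gate promotion on a green test run; otherwise block the release."""
    if run_tests(release, tests):
        return f"{release}: promoted to production"
    return f"{release}: blocked, fix required before production"

# Illustrative customer checks (here, trivial naming-convention tests).
tests = [lambda r: r.startswith("2024"), lambda r: "." in r]
```

The point of the sketch is the ordering: validation happens automatically against your code first, and production only ever sees releases that passed.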
Secondly, with cloud-native AEM you can test new versions of your code easily and quickly, because it streamlines A/B-style canary testing (closely related to blue/green, sometimes called red/black, deployments). You can run a new version on 5% of your traffic and analyze its performance. If the results don’t match your expectations, you take it down and figure out why. In short, AEM as a Cloud Service enables data-driven release decisions. Say, for example, an online retailer deploys a new ‘shop’ button on their site. If the button produces more traffic and conversions, great — they roll it out to 100% of their traffic. If conversions drop instead, they take it down and have a closer look. Thanks to real-time data analysis, anomaly detection, and advanced automation, brands learn in real time whether something is or isn’t running as it should be.
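The canary pattern in the ‘shop’ button example can be sketched as follows. This is a minimal illustration of the technique, not AEM’s internal routing logic; the function names, the 5% split, and the 2-point conversion tolerance are assumptions made for the example.

```python
import random

def route_request(canary_fraction: float = 0.05) -> str:
    """Send roughly 5% of requests to the canary, the rest to stable."""
    return "canary" if random.random() < canary_fraction else "stable"

def evaluate_canary(stable_conversions: int, stable_visits: int,
                    canary_conversions: int, canary_visits: int,
                    tolerance: float = 0.02) -> str:
    """Compare conversion rates; roll back if the canary drops noticeably."""
    stable_rate = stable_conversions / stable_visits
    canary_rate = canary_conversions / canary_visits
    if canary_rate >= stable_rate - tolerance:
        return "roll out to 100%"
    return "roll back and investigate"
```

For instance, a stable conversion rate of 5% against a canary rate of 6% would trigger the full rollout, while a canary rate of 2% would trigger a rollback — the same data-driven decision described above, just made explicit.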
Thirdly, AEM as a Cloud Service offers a true authoring cluster and Adobe I/O integration. In the new architecture, the authoring environment is far more scalable: you can add many more nodes working in parallel, because many tasks that traditionally ran in a single instance are now serverless. For example, uploading multiple videos no longer risks a “self-inflicted DoS attack”. Instead of pushing the files through the authoring instance, you point to an S3 bucket, and serverless functions are invoked to process them entirely in the cloud. Even with a long queue it doesn’t matter: you can launch many of these services in parallel and the overall processing time stays roughly the same, making it much more efficient.