Hi everyone, and welcome to Microsoft Ignite 2023! The AKS team is looking forward to connecting in person and virtually with the whole AKS community, throughout the Ignite keynotes, breakouts, Q&A sessions, and expert meetups, or over a beverage in the hallways!
The team has been hard at work making Azure the best platform to run Kubernetes and a truly Kubernetes-powered cloud. Over the last year, Microsoft continued to leverage AKS as a tried and tested platform for its critical workloads, putting a healthy amount of pressure on the service and continuing to help us push the boundaries of what's possible with cloud native platforms and intelligent apps.
As Kubernetes continues to become pervasive, many teams find themselves at different stages of adoption, skill set, and learning. At Build 2023 we showed a prototype of an assistant for AKS that would make the perfect companion for everyday tasks with AKS and Kubernetes. During our private preview, many users told us how great it would be to have that for all of Azure, and today we're happy to announce Microsoft Copilot for Azure. This AI companion will help you design, operate, optimize, and troubleshoot any service, and it brings every integration we showed for AKS and much more, including new handlers for log collection and permissions validation. Microsoft Copilot for Azure will be the perfect assistant for teams at any stage of their AKS and cloud adoption journey! Sign up for the preview here!
Improving resilience and uptime with simplified global footprint
Nowadays, all industries, companies, and solutions rely in one way or another on software and digital components, and more than ever users expect flawless services that never fail and always perform at a high level. This has raised the resiliency requirements of many of our existing and new users. One of the most common ways to increase availability and resilience is to establish a global footprint across multiple regions and geographies, with the added benefit that you can serve users closer to their locations while staying protected if anything goes wrong in one of the regions.

However, this can increase the complexity of managing and operating multiple clusters across those regions, so we're thrilled to announce that Azure Kubernetes Fleet Manager is now Generally Available, allowing you to create fleets of AKS clusters with a few clicks, easily distribute workloads across them, and orchestrate operations like upgrades in a consistent manner. Fleet Manager is also very modular, letting you use exactly the functionality you want without changing your practices for scenarios you have already solved. You can use it with a hub (a fully managed hub cluster that controls things like namespace and workload placement without requiring any management) or hubless, if you just want a central view of your clusters and central management of operations like upgrades.
As we wrapped KubeCon North America last week, two trends became very apparent as we talked with users and the community. The first is how Kubernetes is uniquely positioned to power the AI revolution, providing a scalable, reliable, and extensible platform that can meet the ever-changing needs of our users. The second was the continued need for better cost visualization and optimization, and more streamlined operations, in order to reduce costs from all angles and let users and businesses focus on creating value. These trends have long resonated with the team, and we're happy to show some of the latest things we've been working on in these areas.
Kubernetes powering the AI revolution
Today you can already use Kubernetes in conjunction with Azure's great AI services to quickly and efficiently create intelligent applications that can scale to sustain any demand. However, many scenarios with privacy or customization requirements, for example, may require you to host your own model and run customized inferencing. This brings a lot of challenges: you need to figure out how to containerize the models, host them, find capacity and the right GPUs for them, schedule them, provide endpoints so your apps can plug into them, and so on.
To simplify this, we're happy to announce the AI Toolchain Operator addon for AKS, based on the open source KAITO project. This addon drastically reduces the effort of running an OSS model from dozens of steps and days or weeks of work to just a couple of steps and a few minutes. It will also assist you in setting up an endpoint for your applications to consume, so you can quickly integrate with new or existing apps. We're looking forward to partnering with our existing preview customers, and now with all users, to continue to simplify and enrich this experience and provide further integrations with the Azure ecosystem.
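As a sketch of what this looks like in practice, running a preset OSS model comes down to applying a single custom resource (the workspace name, GPU SKU, and model preset below are illustrative; consult the KAITO project for the exact schema):

```yaml
# Hypothetical KAITO Workspace requesting GPU nodes and a preset OSS model.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b         # illustrative name
resource:
  instanceType: "Standard_NC12s_v3" # example GPU SKU; adjust to your capacity
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"               # example preset model
```

The operator then takes care of provisioning the GPU nodes, deploying the model, and exposing an inference endpoint your applications can call.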
One of the key tenets of responsible AI is ensuring privacy concerns are addressed and respected. A few months ago, we demonstrated a prototype of Confidential Containers that lets you run any workload leveraging confidential hardware capabilities without any code changes, and we're happy to announce that Kata Confidential Containers are now in public preview.
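As a minimal sketch, opting a workload into Kata Confidential Containers is a matter of selecting the appropriate runtime class in the pod spec (the runtime class name below follows the preview documentation and may change; the pod and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app                 # illustrative name
spec:
  runtimeClassName: kata-cc-isolation    # Kata Confidential Containers runtime class (preview)
  containers:
    - name: app
      image: mcr.microsoft.com/azuredocs/aks-helloworld:v1  # example image, unchanged code
```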
Another important aspect is ensuring the provenance of your images so that your supply chain doesn't suffer from any tampering. Last month, AKS announced Image Integrity, which allows you to sign any container image in ACR and validate its signature via policy on an AKS cluster, leveraging the Ratify open-source project.
Visualize, reduce costs & streamline operations
In the current economic climate, it's of the utmost importance to ensure that teams' infrastructure and operations are as efficient as possible and allow them to focus on their business outcomes.
We've been focusing on three main areas:
- Cost visualization: We're announcing the cost analysis addon, which integrates namespace- and Kubernetes-asset-level billing directly with the Azure Cost Management portal.
- Efficient scaling and cost reduction: For pod-level scaling, we're very happy to announce the General Availability of the KEDA (Kubernetes Event-driven Autoscaling) addon.
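As a brief sketch of pod-level scaling with KEDA, a ScaledObject drives replica counts from an event source, for example an Azure Service Bus queue (the deployment, queue, and authentication names below are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler           # illustrative name
spec:
  scaleTargetRef:
    name: orders-consumer       # hypothetical Deployment to scale
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders       # hypothetical queue
        messageCount: "5"       # target messages per replica
      authenticationRef:
        name: servicebus-auth   # assumed pre-existing TriggerAuthentication
```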
For the infrastructure, one of the key pain points is knowing the best, cheapest, and most readily available SKUs, as well as ensuring the most efficient usage of nodes by tightly bin-packing pods and containers within them. So we're ecstatic to announce the Node Auto Provision addon, which leverages the open source Karpenter project to efficiently select the cheapest, highly available, and most suitable VM SKUs, allowing for the most efficient bin-packing of your environments.
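To sketch how this works, you declare constraints and Karpenter picks the cheapest suitable SKUs within them (the NodePool below is illustrative; see the Karpenter and Node Auto Provision documentation for the exact schema and API version):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:                  # constraints; Karpenter chooses SKUs within them
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]      # could also allow spot for further savings
```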
- Lightning-fast and efficient container starts: We're announcing Artifact Streaming for Linux, which makes image pulls and container starts at least 15% faster (in many cases well over 50%) by prioritizing the pull of essential layers, using containerd's overlaybd project.
- System reserved optimization: The team has worked hard on optimizing the resource usage of the Kubernetes system components, so that every node running Kubernetes 1.28 (now GA) or later has 20% more allocatable space for workloads.
- Simplifying common operations: We've delivered over 10 new enhancements for virtual nodes, enabling many more bursting and serverless scenarios, from LoadBalancer service integration to container probes, debug containers, and exec/port-forward capabilities, bringing this option much closer to parity with native nodes. Additionally, for one of the most common application tasks, setting up routing (ingress, DNS, certificates), we're making the Application Routing addon generally available, so you can have a fully managed and scalable bundle of all those capabilities delivered out of the box by AKS.
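With the Application Routing addon enabled, a standard Ingress selecting the managed ingress class is all an application needs; DNS and certificates can then be wired in on top (the host, service name, and port below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: webapprouting.kubernetes.azure.com  # managed ingress class from the addon
  rules:
    - host: myapp.example.com       # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp         # hypothetical Service
                port:
                  number: 80
```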
These are some of the latest things the team has been working on, with many more available in the AKS Release Notes and throughout our announcements.
We can't wait to meet you, chat about these and many other announcements, and discuss what's coming next and how we can help you achieve even more.