Learn more about what's new with Microsoft Azure Storage at KubeCon Europe 2025

Wow, KubeCon + CloudNativeCon Europe 2025 in London was a fantastic time! Our Azure Storage team was thrilled to exchange insights and success stories at this vibrant community event. If you missed it, don't worry! We're here to share what we showed off at the conference: how we're improving performance, cost-efficiency, and AI capabilities for your workloads on Azure.

Optimize your open-source databases with Azure Disks

Open-source databases such as PostgreSQL, MariaDB, and MySQL are among the most commonly deployed stateful workloads on Kubernetes. For scenarios that demand extremely low latency and high input/output operations per second (IOPS), such as running these databases for transactional workloads, Azure Container Storage lets you tap into local ephemeral non-volatile memory express (NVMe) drives within your node pool. This provides sub-millisecond latency and up to half a million IOPS, making it a strong fit for performance-critical use cases. In our upcoming v1.3.0 update, we have made significant optimizations specifically for databases.
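As a sketch of how this looks in practice (field values here are illustrative, and the exact API version and namespace depend on your Azure Container Storage installation), a storage pool backed by the node pool's local NVMe drives can be declared with a `StoragePool` resource:

```yaml
# Illustrative Azure Container Storage pool that carves volumes out of
# the local NVMe drives on the node pool (requires a VM SKU with local
# NVMe, e.g. storage-optimized L-series nodes).
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: ephemeraldisk-nvme
  namespace: acstor
spec:
  poolType:
    ephemeralDisk:
      diskType: nvme
  resources:
    requests:
      storage: 1Ti
```

Volumes provisioned from such a pool live on the node's local NVMe, so treat them as ephemeral and rely on database-level replication for durability.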

Compared to the previous v1.2.0 release, you can expect up to a 5x increase in transactions per second (TPS) for PostgreSQL and MySQL deployments. If you're looking for the best balance of durability, performance, and cost, Premium SSD v2 disks remain our recommended default for database workloads. Premium SSD v2 offers a flexible pricing model that charges per gigabyte and includes generous baseline IOPS and throughput for free out of the box. You can dynamically scale IOPS and throughput as needed, allowing you to fine-tune performance while optimizing for price-efficiency.
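For example (a minimal sketch with illustrative IOPS and throughput values), the Azure Disk CSI driver lets you request Premium SSD v2 with provisioned performance through a StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premiumv2-db
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  DiskIOPSReadWrite: "8000"   # provisioned IOPS beyond the free baseline
  DiskMBpsReadWrite: "500"    # provisioned throughput in MB/s
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

Because IOPS and throughput on Premium SSD v2 are provisioned independently of capacity, you can later adjust the performance of the underlying disk without resizing the volume.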

At KubeCon, we demonstrated how developers can readily take advantage of local NVMe and Premium SSD v2 disks to build highly available and performant PostgreSQL deployments. If you want to follow along yourself, check out the newly republished PostgreSQL on AKS documentation below!

Accelerate your AI workflows with Azure Blob Storage

Building an AI workflow demands scalable storage to host massive amounts of data, whether it's raw sensor logs, high-resolution images, or multi-terabyte model checkpoints. Azure Blob Storage and BlobFuse2 with the Container Storage Interface (CSI) driver provide a seamless way to store and retrieve this data at scale. With BlobFuse2, you access blob storage simply as a persistent volume, treating it like a local file system. With the latest version of BlobFuse2, 2.4.1, you can:

  • Speed up model training and inference: BlobFuse2’s enhanced streaming support reduces latency for initial and repeated reads. Using BlobFuse2 to load large datasets or fine-tuned model weights directly from blob storage onto local NVMe drives on GPU SKUs effectively optimizes the efficiency of your AI workflow.
  • Simplify data preprocessing: AI workflows often require frequent transformations, such as normalizing images or tokenizing text. By using BlobFuse2’s file-based access, data scientists can preprocess and store results directly in blob storage, keeping pipelines efficient.
  • Ensure data integrity at scale: When handling petabytes of streaming data, integrity checks matter. BlobFuse2 now includes improved CRC64 validation for data cached on local disk, guaranteeing reliable reads and writes even when running with distributed AI clusters.
  • Access large datasets in parallel: BlobFuse2 implements parallel downloads and uploads to significantly cut the time required to access large datasets stored in blobs. This speeds up data processing, ensuring optimal utilization of GPU resources and improving training efficiency.
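As a rough sketch of the CSI side (the mount options shown are illustrative and should be tuned for your workload), mounting a blob container through BlobFuse2 comes down to selecting the `fuse2` protocol in a Blob CSI StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blobfuse2-training-data
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
  protocol: fuse2                          # mount via BlobFuse2 rather than NFSv3
mountOptions:
  - -o allow_other
  - --file-cache-timeout-in-seconds=120    # illustrative local-cache tuning
```

Pods then claim volumes from this class like any other persistent volume, and reads and writes flow to the blob container as if it were a local file system.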

Scale your stateful workloads with Azure Files

Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines, one of the common stateful workloads, need shared persistent volumes to host repository artifacts, and Azure Premium Files is the storage of choice for this on Azure. These artifacts are stored as many small files, which incur heavy metadata operations on a file share. To speed up CI/CD workflows, the Azure Files team recently announced the general availability of metadata caching for premium SMB file shares. This new capability reduces metadata latency by up to 50 percent, benefiting metadata-intensive workloads that typically host many small files in a single share. At KubeCon, we showcased how metadata caching can accelerate repetitive build processes on GitHub; refer to the repo and try it out yourself.
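From the Kubernetes side, nothing about the pipeline changes. As a minimal sketch (using the built-in AKS premium Azure Files storage class, with metadata caching enabled on the underlying share), a shared artifact volume is simply a ReadWriteMany claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-artifacts
spec:
  accessModes:
    - ReadWriteMany              # shared across concurrent build pods
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 100Gi
```

Multiple build pods can mount this claim concurrently over SMB, which is exactly the pattern that benefits from lower metadata latency.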

For less performance-demanding stateful workloads, Standard Files with the new Provisioned v2 billing model offers better price predictability and control for shared persistent volumes. The Provisioned v2 model shifts from usage-based to provisioned billing, allowing you to specify the required storage, IOPS, and throughput at greater scale. You can now grow a file share from 32 GiB up to 256 TiB, 50,000 IOPS, and 5 GiB/sec of throughput as your applications demand.

KubeCon + CloudNativeCon was a wonderful opportunity to interact directly with developers and learn from our customers. As always, thanks to our customers and partners for contributing to the event’s value and significance, and we look forward to seeing you again in November for KubeCon North America!