Updates on Azure NetApp Files

Geert van Teylingen

ANF Storage


Episode #199

Introduction

In episode 199 of our SAP on Azure video podcast we talk about Azure NetApp Files.

A few weeks ago we talked about the importance of high IO and throughput when it comes to storage. On the Azure side we have some really powerful storage solutions. Among them is Azure NetApp Files, a first-party integration of NetApp technology in Azure. The service has been around for over five years, and we have covered it in the past. Now I am happy to have Geert back with us to share the latest news on Azure NetApp Files.

Find all the links mentioned here: https://www.saponazurepodcast.de/episode199

Reach out to us for any feedback / questions:

#Microsoft #SAP #Azure #SAPonAzure #ANF

Summary created by AI

  • Azure NetApp Files: The speaker highlighted the importance of high IO and throughput for storage, particularly for SAP and databases, and introduced Azure NetApp Files as a powerful storage option integrated into Azure. This service, celebrating its fifth anniversary, continues to evolve with new features and capabilities.
  • Application Volume Group for Oracle: The speaker detailed the Application Volume Group for Oracle, explaining its ability to support large Oracle databases by allowing for multiple volumes and independent storage endpoints. This feature is crucial for databases requiring high throughput and is designed to eliminate choke points in storage access.
  • Azure NetApp Files Backup: Azure NetApp Files Backup, now generally available, was discussed as a major advancement for data protection. It offers block-level incremental backups, significantly reducing the data transfer volume compared to traditional file-level backups, especially for large databases.
  • Sizing and Cost Optimization (15:31): The speaker introduced a tool for assessing an Oracle landscape and Azure NetApp Files sizing, aiming to optimize cost and performance. The tool helps in selecting the right configuration based on database size and throughput requirements, potentially leading to significant cost savings.
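To illustrate why block-level incremental backups matter for large databases, here is a toy back-of-the-envelope sketch. The numbers (a 4 TiB database, 2% daily block change rate) are hypothetical assumptions for illustration only, not figures from the episode:

```python
# Toy comparison with hypothetical numbers: daily backup transfer for a
# 4 TiB database where 2% of its blocks change per day.
db_size_gib = 4 * 1024          # assumed database size: 4 TiB in GiB
daily_change_rate = 0.02        # assumed share of blocks modified per day

# File-level backup: a changed database file is re-transferred in full.
# For large monolithic data files this approaches the full database size.
file_level_transfer = db_size_gib

# Block-level incremental backup: only the changed blocks are transferred.
block_level_transfer = db_size_gib * daily_change_rate

print(f"File-level (worst case): {file_level_transfer:.0f} GiB/day")
print(f"Block-level incremental: {block_level_transfer:.2f} GiB/day")
print(f"Reduction factor: {file_level_transfer / block_level_transfer:.0f}x")
```

Under these assumptions the daily transfer drops from roughly 4096 GiB to about 82 GiB, a 50x reduction; the real savings depend entirely on the actual change rate of the workload.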