Create, Configure and Manage Datastore Clusters
What is a Datastore Cluster?
A datastore cluster is a collection of datastores with shared resources and a shared management interface. Datastore clusters are to datastores what clusters are to hosts.
When you add a datastore to a datastore cluster, the datastore's resources become part of the datastore cluster's resources. Datastore clusters are used to aggregate storage resources, which enables you to support resource allocation policies at the datastore cluster level. A datastore cluster also provides the following benefits:
Space utilization load balancing
I/O latency load balancing
We will talk about these in greater detail a bit later in this post.
Datastore Cluster Requirements
Before creating a datastore cluster, keep the following important points in mind:
1: Datastore clusters must contain similar or interchangeable datastores: A datastore cluster can contain a mix of datastores of different sizes and I/O capacities, from different arrays and vendors. However, the following types of datastores cannot coexist in a datastore cluster:
NFS and VMFS datastores cannot be combined in the same datastore cluster.
Replicated datastores cannot be combined with non-replicated datastores in the same Storage-DRS-enabled datastore cluster.
2: All hosts attached to the datastores in a datastore cluster must be running ESXi 5.0 or later; otherwise Storage DRS will not work.
3: Datastores shared across multiple data centers cannot be included in a datastore cluster.
4: As a best practice, do not include datastores that have hardware acceleration enabled in the same datastore cluster as datastores that do not have hardware acceleration enabled.
How to create a Datastore Cluster?
To create a datastore cluster, navigate to the storage view in the Web Client, right-click the virtual datacenter and select Storage > New Datastore Cluster.
Provide a name for the datastore cluster and turn on SDRS (of course you will want SDRS; otherwise, what's the point of creating a datastore cluster?).
Select the automation level for SDRS. This automation level is similar to what we chose for DRS, with one difference: there is no partially automated mode in SDRS. It is either fully automated or not automated at all.
No Automation (Manual Mode)
When the datastore cluster is operating in manual mode, VM disk placement and migration recommendations are presented to the user, but do not run until they are manually approved.
Fully Automated
Fully automated mode allows SDRS to apply space and I/O load-balancing migration recommendations automatically; no user intervention is required. However, initial placement recommendations still require user approval.
You can also override the automation level for individual functions instead of using the automation level selected for SDRS as a whole.
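As an illustration of how the two automation levels differ, here is a toy Python sketch. This is not the vSphere API; the mode names and recommendation shapes are assumptions made for the example.

```python
# Toy model (not the vSphere API): how the SDRS automation level decides
# whether a recommendation is applied automatically or queued for approval.
MANUAL = "manual"                  # No Automation: everything needs approval
FULLY_AUTOMATED = "fullyAutomated"

def dispatch(recommendations, mode):
    """Split recommendations into (auto_applied, pending_approval)."""
    auto, pending = [], []
    for rec in recommendations:
        # Initial placement always requires user approval, even in fully
        # automated mode; only load-balancing migrations run on their own.
        if mode == FULLY_AUTOMATED and rec["type"] != "initialPlacement":
            auto.append(rec)
        else:
            pending.append(rec)
    return auto, pending
```

In manual mode everything lands in the pending list, matching the behavior described above.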
Check-mark the Enable I/O metric option so that SDRS can migrate a VM from one datastore to another when there are I/O imbalances across the cluster. This is very helpful when a VM has high I/O demands but the underlying datastore is not able to meet them. SDRS will check whether there is another datastore in the cluster that is not very busy and will migrate the VM's disks to that datastore so that the VM's I/O demand can be fulfilled.
For space load balancing, you can set the threshold percentage; by default it is set to 80%. When a datastore's utilization crosses the threshold, an alarm is generated in vCenter, and on its next run SDRS will migrate VM disks around so that the utilization of each datastore comes back under the defined threshold.
Alternatively, you can select the Minimum free space option. If this option is selected, SDRS keeps an eye on the free space left on each datastore and invokes disk migrations when the amount of free space drops below the configured value.
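The two space triggers above can be sketched as a small check. This is only an illustration of the decision described here, not VMware's actual algorithm; the function name and defaults are assumptions.

```python
# Toy sketch of the two space load-balancing triggers: a utilization
# threshold (default 80%) or a minimum-free-space floor.
def needs_space_rebalance(capacity_gb, used_gb,
                          util_threshold_pct=80, min_free_gb=None):
    if min_free_gb is not None:
        # "Minimum free space" mode: trigger when free space drops too low
        return capacity_gb - used_gb < min_free_gb
    # Default mode: trigger when utilization crosses the threshold
    return used_gb / capacity_gb * 100 > util_threshold_pct
```

A 1 TB datastore with 850 GB used would trip the default 80% threshold, while one with 700 GB used would not.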
You can set the default VM affinity to keep a VM's VMDKs always together or on separate datastores. This is very similar to affinity rules in DRS, which keep two or more VMs always together or always apart.
If you deselect the "Keep VMDKs together by default" option, all new virtual machines are configured with an anti-affinity rule, which means that SDRS initial placement and load balancing will keep a VM's files and VMDKs stored on separate datastores.
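A toy placement sketch of the two affinity behaviors just described, purely illustrative (the round-robin spread is an invented stand-in for SDRS's real placement logic):

```python
# With "keep VMDKs together" every disk of a VM goes to one datastore;
# with the default anti-affinity rule each disk is spread across datastores.
def place_vmdks(vmdk_names, datastores, keep_together=True):
    """Return a mapping of VMDK name -> datastore name."""
    if keep_together:
        target = datastores[0]          # pick one datastore for all disks
        return {v: target for v in vmdk_names}
    # anti-affinity: round-robin the disks across the datastores
    return {v: datastores[i % len(datastores)]
            for i, v in enumerate(vmdk_names)}
```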
By default SDRS checks for space/I/O imbalances every 8 hours; you can change this to a lower or higher value depending upon your infrastructure needs.
If it is set too low, there will be a lot of disk migrations whenever the datastore cluster is imbalanced.
If it is set too high, the cluster will stay imbalanced for a longer period of time, and virtual machines may suffer storage performance issues.
Select the host cluster to which you want to map this datastore cluster and hit next.
Next is the selection of datastores which will be part of this datastore cluster.
As a best practice, and to make SDRS's life easy, always select only those datastores which are connected to all hosts in the compute cluster. Let's talk about this a bit more to understand the importance of datastore selection.
There are two types of datastore cluster: fully connected and partially connected. This is explained in chapter 24 of one of the best books ever written, the "Clustering Deep Dive" by Frank Denneman and Duncan Epping. Below is an excerpt from that book:
Fully connected datastore clusters: A fully connected datastore cluster is one that contains only datastores that are available to all ESXi hosts in a DRS cluster. This is a recommendation, but it is not enforced.
Partially connected datastore clusters: If any datastore within a datastore cluster is connected to a subset of ESXi hosts inside a DRS cluster, the datastore cluster is considered a partially connected datastore cluster.
What happens if the DRS cluster is connected to partially connected datastore clusters? It is important to understand that the goal of both DRS and Storage DRS is resource availability. The key to offering resource availability is to provide as much mobility as possible. Storage DRS will not generate any migration recommendations that will reduce the compatibility (mobility) of a virtual machine regarding datastore connections.
Storage DRS prefers datastores that are connected to all hosts. Partial connectivity adversely affects the system's ability to find a suitable initial location, and load balancing becomes more challenging. During initial placement, the selection of a datastore may impact the mobility of a virtual machine amongst the clustered hosts, since selecting a host impacts the mobility of a virtual machine amongst the datastores in the datastore cluster.
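The fully vs. partially connected distinction boils down to a simple check: every datastore must be visible to every host in the DRS cluster. A minimal sketch, assuming a plain host list and a datastore-to-hosts visibility map:

```python
# Toy connectivity check: a datastore cluster is fully connected only if
# every datastore is seen by every host in the DRS cluster.
def is_fully_connected(hosts, datastore_hosts):
    """datastore_hosts maps datastore name -> set of hosts that see it."""
    return all(set(hosts) <= seen for seen in datastore_hosts.values())
```

If even one datastore is visible to only a subset of hosts, the whole cluster counts as partially connected.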
On Ready to Complete page, review your settings and hit finish to close the wizard.
Select the newly created datastore cluster and navigate to Summary tab to see overall information about the cluster.
You can change the settings for the datastore cluster at any time later by selecting the datastore cluster > Manage > Storage DRS and clicking the Edit button.
Now that we have created a datastore cluster and configured SDRS, let's talk a bit more about the advantages that SDRS brings with it.
Initial Placement
The goal of Initial Placement is to place virtual machine disk files based on the existing load on the datastores, ensuring that neither the space nor the I/O capacity is exhausted prematurely.
You might have noticed that when you don't have a datastore cluster, you are asked at virtual machine creation time where to store the VM. As a normal practice, an administrator selects the datastore with the most free space, and there is nothing wrong with that. But have you ever thought about I/O while choosing a datastore for virtual machine disk placement?
Lets take an example to understand this.
Suppose you are creating a virtual machine for an I/O-intensive application and your datastores are backed by SSD disks. While placing the virtual machine, you select a datastore which is big enough to hold that VM, and you are happy with your SSD-backed LUN.
But what if all the existing VMs on that datastore are also running I/O-intensive applications like SQL Server or an Oracle DB? The new VM will be competing with those VMs and might never get the I/O it needs for the best performance of the application running inside the guest OS.
With a datastore cluster, while creating a new VM you are presented with the option to select the datastore cluster rather than an individual datastore, and since you have enabled SDRS, which monitors the datastores for I/O imbalances, it will present a recommendation to the end user for where to place the new VM. Isn't that cool?
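An initial-placement recommendation has to weigh both signals discussed above: free space and current I/O load. Here is a toy sketch; the tie-breaking order (latency first, then free space) is an invented illustration, not VMware's actual cost model.

```python
# Toy initial-placement recommendation combining space and I/O signals.
def recommend_datastore(datastores, required_gb):
    """datastores: list of dicts with keys name, free_gb, avg_latency_ms."""
    # Only datastores with enough free space are candidates at all.
    candidates = [d for d in datastores if d["free_gb"] >= required_gb]
    if not candidates:
        return None
    # Prefer low observed latency first, then the most free space.
    return min(candidates,
               key=lambda d: (d["avg_latency_ms"], -d["free_gb"]))["name"]
```

Note how a smaller but much quieter datastore can win over the biggest one, which is exactly the point of letting SDRS, rather than the "most free space" habit, make the call.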
Load Balancing
While creating the datastore cluster, we observed the two distinct load-balancing modes for SDRS: No Automation and Fully Automated. Where Initial Placement reduces complexity in the provisioning process, Load Balancing addresses imbalances within a datastore cluster.
Over time, as an environment grows and new virtual machines are added regularly, the underlying datastores start to fill up. I/O imbalances may also start to occur, as every virtual machine has different I/O demands.
SDRS invokes the load-balancing process periodically (by default every 8 hours) and generates placement recommendations if the space utilization or I/O latency of a datastore exceeds the configured threshold values. Depending on the selected SDRS automation level, these recommendations are either applied automatically or presented to the administrator, who can then decide to apply them manually.
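One load-balance pass can be sketched as: find datastores over the space threshold and recommend moving a disk to the least-utilized datastore. The real algorithm also models I/O latency and migration cost; this is only a simplified illustration with invented data shapes.

```python
# Toy sketch of a single SDRS space load-balance pass: any datastore over
# the threshold donates its biggest disk to the least-utilized datastore.
def load_balance_pass(datastores, threshold_pct=80):
    """datastores: name -> {"capacity": GB, "disks": {disk_name: GB}}.
    Returns a list of (disk, source, destination) recommendations."""
    def used(ds):
        return sum(ds["disks"].values())
    # Destination: the datastore with the lowest utilization ratio.
    dest = min(datastores,
               key=lambda n: used(datastores[n]) / datastores[n]["capacity"])
    recs = []
    for name, ds in datastores.items():
        if name != dest and used(ds) / ds["capacity"] * 100 > threshold_pct:
            disk = max(ds["disks"], key=ds["disks"].get)  # biggest disk
            recs.append((disk, name, dest))
    return recs
```

In manual mode a list like this would be shown to the administrator; in fully automated mode the moves would simply run.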
SDRS Anti-Affinity Rules
We discussed affinity rules a little earlier, when we created the datastore cluster. We saw that if we deselect the default "Keep VMDKs together" option, anti-affinity rules apply to all VMs by default. With SDRS there are two types of anti-affinity rules:
Inter-VM Anti-Affinity Rules: This rule controls which virtual machines should never be kept on the same datastore. We can compare this with a DRS anti-affinity rule, which does not allow two related VMs to run together on a single host.
This rule can help maximize the availability of a collection of related virtual machines, for example the VMs in an Oracle RAC cluster.
Thus even if a datastore is lost because of some issue, you still have a similar VM running on another datastore, providing high availability for your application.
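Checking an inter-VM anti-affinity rule amounts to verifying that the rule's VMs all sit on different datastores. A minimal sketch, where the rule shape and placement map are illustrative assumptions:

```python
# Toy check for an inter-VM anti-affinity rule: the listed VMs (e.g. the
# nodes of an Oracle RAC cluster) must each be on a different datastore.
def violates_anti_affinity(rule_vms, placement):
    """placement maps VM name -> datastore.
    Returns True if two of the rule's VMs share a datastore."""
    used = [placement[vm] for vm in rule_vms if vm in placement]
    return len(used) != len(set(used))
```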