Choosing VMFS block size with vSphere 4.1
Over the last month I have regularly received questions from colleagues about VMFS block sizes. Although it is a simple setting, it still raises a lot of questions, and the introduction of vSphere 4.1 has somewhat changed the game.
The block size on a VMFS datastore defines two things:
- The maximum file size;
- The amount of space a file occupies.
First of all, the block size determines the maximum file size on the datastore. If you select a block size of 1MB on your datastore, the maximum file size is limited to 256GB, so you cannot create a virtual disk larger than 256GB.
Also, the block size determines the amount of disk space a file takes up on the datastore. In practice this is largely theoretical, because VMFS3 uses sub-block allocation for small files (see below).
It is not possible to change the block size after you have set it without deleting the datastore and re-creating it. Therefore you should make a good design and determine the block size before creating the datastores.
Limitations, overhead & performance
When creating datastores, you have the following options:
- 1MB block size – maximum file size: 256 GB;
- 2MB block size – maximum file size: 512 GB;
- 4MB block size – maximum file size: 1 TB;
- 8MB block size – maximum file size: 2 TB – 512 bytes.
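The relationship in the table above is linear: a VMFS3 file can address roughly 256K file blocks, so the limit scales with the block size, capped at the 2 TB – 512 bytes ceiling. A small sketch (the 256K-blocks model is my own simplification of the published limits, not an official formula):

```python
# Sketch: VMFS3 maximum file size per block size, matching the table above.
# The "~256K addressable file blocks" model is an assumed simplification;
# the 8 MB case is capped at the VMFS3 ceiling of 2 TB minus 512 bytes.

KB, MB, GB, TB = 1024, 1024**2, 1024**3, 1024**4

def max_file_size(block_size_mb):
    """Return the VMFS3 maximum file size in bytes for a given block size (MB)."""
    limit = block_size_mb * MB * 256 * 1024   # ~256K addressable file blocks
    return min(limit, 2 * TB - 512)           # absolute VMFS3 ceiling

for bs in (1, 2, 4, 8):
    print(f"{bs} MB block size -> {max_file_size(bs) / GB:,.0f} GB max file size")
```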
Because a datastore will primarily contain large VMDK files, increasing the block size will not introduce much overhead or consume much more disk space. Also, VMFS3 uses sub-blocks for directories and files smaller than 1 MB. Once all sub-blocks are in use (there are 4096 sub-blocks of 64 KB each), file blocks are used instead. For files of 1 MB or larger, file blocks are always used; the size of a file block is the block size you selected when the datastore was created.
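The sub-block behaviour can be modelled in a few lines. This is a simplified sketch of the allocation rules described above (it ignores the 4096 sub-block limit and metadata), not VMFS internals:

```python
# Simplified model of VMFS3 allocation as described above: files smaller than
# 1 MB are stored in 64 KB sub-blocks; files of 1 MB or larger consume whole
# file blocks of the datastore's block size. Assumed model for illustration.
import math

KB, MB = 1024, 1024**2
SUB_BLOCK = 64 * KB

def allocated_space(file_size, block_size=8 * MB):
    """Approximate on-disk space a file occupies on VMFS3 (assumed model)."""
    if file_size < 1 * MB:
        return math.ceil(file_size / SUB_BLOCK) * SUB_BLOCK
    return math.ceil(file_size / block_size) * block_size

# A 10 KB .vmx file occupies one 64 KB sub-block, regardless of block size:
print(allocated_space(10 * KB))            # 65536
# A 20 GB VMDK is an exact multiple of 8 MB, so it wastes nothing:
print(allocated_space(20 * 1024 * MB) == 20 * 1024 * MB)
```

This is why an 8 MB block size costs so little in practice: small configuration files never touch the large file blocks.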
Until now, I always matched the block size to the size of the datastore. On smaller datastores (less than 500GB) I used a 1 MB block size, because virtual disks on them will never exceed 256GB. On medium-sized datastores (500GB – 1TB), there is a chance you might create larger virtual disks, so I used a 2 MB block size. For larger datastores (1TB – 2TB), I chose a 4 MB block size for the same reason. I have never created virtual disks equal to the maximum size of a datastore (2TB), so I almost never used an 8 MB block size; in those cases, I usually create RDMs.
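That heuristic — pick the smallest block size whose file-size limit still covers the largest virtual disk you expect on the datastore — can be written down directly. The thresholds below come from the table earlier in the article; the function name is my own:

```python
# Sketch of the sizing heuristic described above: choose the smallest VMFS3
# block size whose maximum file size covers the largest expected VMDK.

GB, TB = 1024**3, 1024**4

def pick_block_size(largest_vmdk_bytes):
    """Return the smallest VMFS3 block size (MB) that fits the given VMDK."""
    for block_mb, max_file in ((1, 256 * GB), (2, 512 * GB),
                               (4, 1 * TB), (8, 2 * TB - 512)):
        if largest_vmdk_bytes <= max_file:
            return block_mb
    raise ValueError("VMDK exceeds the VMFS3 limit of 2 TB - 512 B; consider an RDM")

print(pick_block_size(200 * GB))   # 1
print(pick_block_size(800 * GB))   # 4
```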
There are also a lot of questions about VMFS block size and storage performance on VMTN, blogs and other VMware-related media, and every time the outcome is the same: ‘There is no noticeable I/O performance difference when using a larger block size’.
But if there is hardly any overhead and no performance penalty when using a larger block size, and it only limits the maximum file size, why don’t we use 8MB block sizes for all datastores?
Especially when you know that using different block sizes for different datastores can create problems, like:
- vStorage APIs for Array Integration (VAAI) will not work across datastores with different block sizes;
- VMware Consolidated Backup (VCB) using hot-add backup may not work;
- other backup programs that use hot-add have the same limitation; for example, the VDR manual specifies: ‘When choosing a datastore on which to store the files for the backup appliance, choose a datastore with the largest VMFS block size. This is necessary to ensure that the backup appliance can back up virtual machines from all datastores.’;
- there can be problems with RDMs and snapshots (in this case the problem occurs on a single datastore, but it is still related to block size); see http://virtualkenneth.com/2009/11/10/vmfs-and-block-size-is-important-for-virtual-rdms/.
vStorage APIs for Array Integration
I will make it even more interesting for you and add the vStorage APIs for Array Integration (VAAI) into the equation.
With the release of vSphere 4.1, VMware introduced VAAI which enables offloading of storage tasks to the storage array. Using VAAI will reduce the load on the datastores and on the servers. (I intentionally say that VAAI will reduce the load on the datastores and not reduce the load on the storage array because offloading tasks to the storage array will of course increase the load on the array.) Check the VMware HCL for a list of VAAI capable storage arrays.
In the past, one of the best practices was not to overload your datastores: you limited the maximum number of virtual machines per datastore because of the number of SCSI reservations and the associated performance impact. VAAI changes how we do our datastore design. We can now simply base our datastore size on the maximum number of virtual machines per datastore, which is now limited by IOPS requirements. Using VAAI will therefore result in larger datastores, which in turn affects the VMFS block size we use.
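The IOPS-driven sizing can be sketched as a simple budget calculation. All numbers below (IOPS per spindle, per-VM demand) are illustrative assumptions of mine, not VMware figures — substitute your own measurements:

```python
# Rough sketch of the post-VAAI sizing idea above: with SCSI-reservation
# pressure reduced, the VM count per datastore is bounded mainly by IOPS.
# All figures below are illustrative assumptions, not VMware guidance.

def max_vms_per_datastore(datastore_iops, avg_vm_iops):
    """How many average VMs an IOPS budget supports (simple division model)."""
    return datastore_iops // avg_vm_iops

# e.g. a LUN backed by 16 spindles at ~180 IOPS each, VMs averaging 50 IOPS:
print(max_vms_per_datastore(16 * 180, 50))   # 57
```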
Now, how should we choose our VMFS block sizes?
First of all, knowing that using several block sizes can result in problems, I would advise you to choose identical block sizes for all datastores.
Second, because larger block sizes do not impact performance or storage overhead, and because with the introduction of VAAI datastore sizes will grow (up to the maximum of 2TB – 512 bytes), I would advise you to choose a block size which gives you the ability and flexibility to grow with it. Several people have proposed using the maximum block size of 8 MB for this reason (for example: Duncan Epping).
- VMware communities and KB;
- Duncan Epping – Block sizes and growing your VMFS;
- Duncan Epping – vStorage APIs for Array Integration aka VAAI;
- Didier Pironet – Understanding VMFS block size and file size.