If you are also interested in virtualization in general and the “dark horse” that is VVOL in particular, I suggest reading this useful article from our colleagues at ComputerWeekly, translated below. The material covers some features of the technology and discusses the state of support among storage vendors.
By the way, if this is your first time reading about virtual volumes, I recommend starting with an introductory article that outlines the main idea in broad strokes.
With the release of vSphere 6, VMware launched one of the most discussed storage technologies on the market – Virtual Volumes (VVOL). The idea is to manage storage policies at the level of an individual virtual machine rather than the datastore. This is a remarkable improvement in managing external storage and VMs compared to vSphere 5.5, and it will surely encourage many vendors to announce VVOL support in their devices and race to be first to market.
Deployments with VVOL promise to simplify operational tasks, make disk resource allocation more efficient, and enable more flexible management tools.
But the devil is in the details. In this article, we dive into VVOL technology, explain why virtual volumes are needed, and formulate five important questions to answer before implementation.
Regardless of the type of storage system (block or file), vSphere virtual machines are always stored in a logical storage entity called a datastore.
When shared external devices are used as storage, the datastore is almost always built on a single large LUN. Moreover, that large partition is the smallest logical element the storage system can manage. On non-virtualized systems, LUN-level granularity was perfectly acceptable: there was only one server per volume, so service policies could easily be applied at the LUN level.
With the advent of virtualization, the relationship between LUNs and virtual machines changed: now many guests share one LUN / datastore, and each of them is forced to receive the same level of service.
This architecture revealed a number of problems. QoS and other service policies could only be applied at the level of the whole volume, even with block-level tiering, so any migration of “hot” data affected all VMs on that storage. Until recently, VMware users had to rely on tools like Storage DRS to somehow distribute the load across storage systems.
Changing the service level for a VM meant migrating it to another storage system with different capacity and performance characteristics. This is neither the fastest nor the easiest process, and it also requires reserving space in advance on every storage system.
Even arrays with standard QoS capabilities could not take advantage of per-VM prioritization, because that would require placing each virtual machine on its own volume. The “one VM – one LUN” strategy did not pay off, because ESXi is limited to 256 LUNs per host. Not surprisingly, scenarios with individual LUNs did not arouse much enthusiasm among the public.
To solve these problems, VMware changed its approach to working with VM disks, which led to the Virtual Volumes technology.
At first glance, VVOL looks like a simple container for a virtual machine, but the actual implementation is somewhat subtler. Each VVOL stores one component of a virtual machine (configuration, paging file, VMDK), and together they constitute the “virtual machine” object in the storage.
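To make the decomposition concrete, here is a toy sketch in Python. It is purely an illustration of the concept, not VMware's API; every class and field name here is invented for the example.

```python
# Toy model: a VM is no longer a set of files on one shared LUN,
# but a collection of per-component objects (VVols) on the array.
# All names below are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class VVol:
    vvol_type: str   # "config", "swap", or "data" (one per VMDK)
    size_gb: int

@dataclass
class VirtualMachine:
    name: str
    vvols: list = field(default_factory=list)

vm = VirtualMachine(name="web01")
vm.vvols.append(VVol("config", 1))    # .vmx, logs, descriptors
vm.vvols.append(VVol("swap", 8))      # paging file, created at power-on
vm.vvols.append(VVol("data", 100))    # one data VVol per VMDK

# Because the array sees distinct objects, a policy can target,
# say, only the data VVols rather than the whole datastore:
data_vvols = [v for v in vm.vvols if v.vvol_type == "data"]
print(len(vm.vvols), len(data_vvols))  # → 3 1
```

The key point the sketch captures is granularity: the storage system can now apply a service level per component of one VM, which was impossible when the smallest manageable unit was a LUN shared by many VMs.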
VVOL introduces several new concepts that help to understand how the technology works:
- Storage provider (SP). Acts as an interface between the hypervisor and external storage. It works out of band, separately from the data path, and uses the already existing VASA protocol to report all information about the storage and its VVOLs. Virtual volumes require VASA 2.0.
- Storage container (SC). This is simply a storage pool on the external array, which each vendor implements in its own way. In the future, a logical volume may be assembled from several such pools to accommodate VVOLs. In general, the reliability and stability of VVOL storage depend largely on the implementation of the storage container.
- Protocol endpoint (PE). Responsible for presenting a virtual volume to the hypervisor. It is implemented as a regular LUN, although it does not store any data; you can think of it as a kind of gateway for access to the bound VVOLs. If you are familiar with the EMC VMAX family, the PE is similar to the EMC gatekeeper.
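The gateway role of the PE can also be sketched as a toy model (again, pure illustration, not VMware's API; the names and the binding scheme are simplified): the host addresses only the PE LUN, and I/O reaches an individual VVol through a binding that, in a real system, the array sets up on request via VASA.

```python
# Toy sketch of the PE-as-gateway idea: the host sees only the
# protocol endpoint LUN; each VVol is reached through a binding
# the array creates on demand. All names here are invented.

class ProtocolEndpoint:
    """Administrative LUN that proxies I/O to bound VVols."""
    def __init__(self, lun_id):
        self.lun_id = lun_id
        self.bindings = {}          # secondary id -> vvol name

    def bind(self, vvol_name):
        # In this toy model the "array" hands out a small secondary
        # identifier the host will use for subsequent I/O.
        secondary_id = len(self.bindings) + 1
        self.bindings[secondary_id] = vvol_name
        return secondary_id

    def io(self, secondary_id, op):
        vvol = self.bindings[secondary_id]
        return f"{op} -> {vvol} via PE LUN {self.lun_id}"

pe = ProtocolEndpoint(lun_id=256)
sid = pe.bind("web01-data")
print(pe.io(sid, "READ"))  # → READ -> web01-data via PE LUN 256
```

This is also why the 256-LUN-per-host limit stops being a problem: one PE LUN can front many bound VVols, so the host no longer needs a LUN per virtual machine.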