Vendor lock-in on the storage side is a major problem in many environments today, driven by the expense of acquiring a storage area network (SAN) or network-attached storage (NAS) array (often hundreds of thousands to millions of dollars per array), along with the expertise required to operate and tune the array and to complete the necessary provisioning, monitoring, and optimization tasks.
In addition, with the cost per gigabyte dropping rapidly, it’s cheaper to purchase storage just before you need it. But doing so can lead to a “death by a thousand cuts” as you constantly have to go back to management and ask for another few disks (which are relatively inexpensive), a new shelf (somewhat more expensive) or even an entire new array (a very expensive proposition).
Until now, the cost was deemed unavoidable and worth paying to ensure high availability, shared access across hosts, low latency, and so on. Large, complex companies (and core data center functions) will probably still require these features for years to come, but in many other cases they may not be needed.
Virtual SAN (VSAN) is implemented at the kernel level, so it does not suffer from the performance disadvantages of the Virtual Storage Appliance (VSA), which was (and is) implemented as a virtual appliance (VA). While the VSA was designed as a small-to-medium business (SMB) or remote office / branch office (ROBO) solution for cases where a SAN or NAS array was too expensive, VSAN is designed for enterprise use in addition to the VSA use cases. Both the VSA and VSAN have the same basic purpose: take the local storage in individual servers and turn it into shared storage that can be used by vSphere High Availability (HA), vMotion, the Distributed Resource Scheduler (DRS), and so on.
VSAN is enabled at the cluster level, similar to HA and DRS today; in fact, it is just another property of a cluster. It can be enabled in just two clicks, though many advanced options can be set, along with storage policies, to provide the needed availability to VMs at the best cost and performance. The nice thing about this product is that you can scale up by adding storage within an ESXi host (up to 40 disks: five disk groups of one SSD plus seven magnetic disks each) and scale out by simply adding another ESXi host to the cluster (up to the vSphere maximum of 32 nodes).
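For administrators who prefer automation over the two clicks, the following is a minimal sketch of enabling VSAN on an existing cluster through the vSphere API using pyVmomi. The vCenter address, credentials, and cluster name are placeholders, and error handling and SSL certificate setup are omitted; treat it as an illustration of VSAN being just another cluster property, not a production script.

```python
# Illustrative sketch: enable VSAN on an existing cluster via pyVmomi.
# Hostname, credentials, and cluster name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",        # placeholder vCenter
                  user="administrator", pwd="password")
content = si.RetrieveContent()

# Walk the inventory to find the target cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")
view.Destroy()

# VSAN is configured as a property of the cluster: build a cluster
# reconfigure spec with vsanConfig set. autoClaimStorage lets each host
# contribute its eligible local disks automatically.
vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=True))
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```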
VSAN has the following requirements (checked programmatically in the sketch after the list):
- 3–32 hosts per vSphere cluster
- HA must be enabled for the cluster (DRS often will be as well)
- 1 SSD and 1–7 magnetic (spinning) disks per disk group
- 1 GbE networking at minimum, with 10 GbE recommended
- vSphere and vCenter (5.5 or higher)
- VSAN license key
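To make the checklist concrete, here is an illustrative plain-Python sanity check of the requirements above. The inputs are hypothetical literals; a real check would query vCenter for this information rather than take hard-coded values.

```python
# Illustrative sanity check of the VSAN requirements listed above.
def validate_vsan_cluster(hosts, ha_enabled, nic_speed_gbe,
                          vsphere_version, licensed):
    errors = []
    if not 3 <= len(hosts) <= 32:
        errors.append("cluster must have 3-32 hosts")
    if not ha_enabled:
        errors.append("HA must be enabled on the cluster")
    for name, disk_groups in hosts.items():
        for ssds, magnetic in disk_groups:
            if ssds != 1 or not 1 <= magnetic <= 7:
                errors.append(f"{name}: each disk group needs "
                              "1 SSD and 1-7 magnetic disks")
    if nic_speed_gbe < 1:
        errors.append("1 GbE networking minimum (10 GbE recommended)")
    if vsphere_version < (5, 5):
        errors.append("vSphere/vCenter 5.5 or higher required")
    if not licensed:
        errors.append("VSAN license key required")
    return errors

# Hypothetical three-host cluster, one disk group of 1 SSD + 4 HDDs each.
hosts = {f"esxi0{i}": [(1, 4)] for i in range(1, 4)}
print(validate_vsan_cluster(hosts, True, 10, (5, 5), True)
      or "requirements met")
```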
A few quick notes on disks and disk groups before we move on. First, SSD space is used only for caching (both read and write), so any discussion of usable space ignores all of the SSD space in every host. Second, VMware's best practice is that SSDs make up 10 percent of the space in each disk group to ensure there is enough room for caching; VSAN will work with less, but performance may suffer. Third, each host can have zero to five disk groups. A host with zero disk groups can still run VMs like any other host, but its storage requests will go to the other nodes. Finally, just because a VM runs on a given host, there is no guarantee that the storage it needs is local to that same host.
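As a quick worked example of the 10 percent flash guideline (the disk sizes here are assumptions for illustration, not VSAN requirements):

```python
# Sizing sketch for the 10 percent flash rule; disk sizes are assumed.
magnetic_disks = 7        # maximum magnetic disks per disk group
disk_size_tb = 1.2        # assumed size of each magnetic disk
magnetic_capacity_tb = magnetic_disks * disk_size_tb   # 8.4 TB usable
ssd_cache_tb = 0.10 * magnetic_capacity_tb             # 0.84 TB of flash
print(f"Disk group: {magnetic_capacity_tb:.1f} TB magnetic, "
      f"needs ~{ssd_cache_tb:.2f} TB SSD for cache")
```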
While some may question the performance and scalability of VSAN, as well as its CPU cost, VMware's testing has shown nearly two million input/output operations per second (IOPS) in a single cluster (read only; roughly half that in a mixed read/write scenario) with only a 10 percent hit to CPU performance at that level. While 10 percent may sound like a lot, most ESXi servers today run closer to 50 percent CPU utilization, so the extra hit is unlikely to affect VM performance. Each cluster also supports up to 4.4 petabytes of space, allowing for large amounts of data per cluster. Note that this space is given directly to VSAN: usually no RAID is used (the raw disks are handed to VSAN to manage as it sees fit), and if RAID is used, only RAID 0 is supported. In this regard, VSAN acts in many ways like Storage Spaces in Windows Server 2012.
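The 4.4-petabyte figure is easy to sanity-check from the per-host maximums discussed above; assuming 4 TB magnetic disks (an assumption for illustration, not a VSAN requirement), the raw capacity works out to roughly that number:

```python
# Back-of-the-envelope check of the ~4.4 PB cluster maximum.
# SSDs are cache only and contribute no usable space.
hosts = 32                  # vSphere cluster maximum
disk_groups_per_host = 5    # VSAN maximum
magnetic_per_group = 7      # VSAN maximum
disk_tb = 4.0               # assumed magnetic disk size
capacity_pb = (hosts * disk_groups_per_host
               * magnetic_per_group * disk_tb) / 1000
print(f"{capacity_pb:.2f} PB")   # 4.48 PB, in line with the ~4.4 PB figure
```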
This is an excerpt from the Global Knowledge white paper VSAN: Reimagining Storage in vSphere.