StarWind compared to VMware and Hyper-V
With both VMware and Microsoft developing software-based storage solutions, we started to wonder what kind of edge a product like StarWind would still have over these solutions. When we asked StarWind, they quickly replied with a couple of feature comparison documents. These documents point out some features in which StarWind still has an advantage over the products from VMware and Microsoft.
StarWind compared to VMware
In the document for VMware, the comparison is made against VMware VSA and VMware vSAN. I left the VMware VSA comparison out of this article because that product is End-of-Life (EOL).
| | StarWind | VMware vSAN |
| --- | --- | --- |
| Minimum # of nodes | 2 | 3 |
| Maximum # of nodes | Unlimited | 32 |
| Maximum capacity | Unlimited | 148 TB |
| Flash (SSD) | Optional | Mandatory |
| RAID | Optional | 5, 10 |
| Switch (10 Gb) | Optional | Mandatory |
| Deduplication | In-line | N/A |
| Cache | RAM (write-back or write-through), flash cache | Flash |
When I first looked at the comparison table, I didn’t quite get some of the values that were presented, for example the “minimum # of nodes”. I figured you would need at least 2 nodes to create the storage you want to provide, and besides that you would still need hardware to run your vSphere hosts on. That would mean that with vSAN you could use the local disks, but not with StarWind.
The explanation I got from StarWind is that there are two different approaches you could take in combination with vSphere hosts:
- Create separate hardware nodes alongside the virtualization hosts
- Create a VM on each host with the StarWind software installed and provision the local disks of the vSphere host to that VM
The “switch” requirement for VMware vSAN is only partially true, since the documentation states that a 1 Gbps private network is required, but 10 Gbps is highly recommended.
StarWind compared to Hyper-V
The comparison document for StarWind and Hyper-V is divided into five parts, each describing a different Hyper-V scenario.
1. Shared Nothing Live Migration
As the scenario name suggests, you do not have shared storage or clustered Hyper-V hosts. This means that you can perform live migrations, but only for planned downtime, as the data needs to be moved to another host. Depending on the amount of data the migration needs to move, this can take a long time.
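To get a feel for why such a migration can take a while, here is a rough back-of-the-envelope estimate in Python. The VM size, link speed, and efficiency factor are assumed numbers purely for illustration, not measurements:

```python
# Rough estimate of how long a shared nothing live migration takes.
# All figures below are assumptions for illustration, not measurements.

def migration_time_hours(vm_size_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Time to copy a VM's data over the migration network.

    efficiency accounts for protocol overhead and the source disks/CPU
    not keeping the link fully saturated.
    """
    vm_size_gbit = vm_size_gb * 8                # gigabytes -> gigabits
    effective_gbps = link_gbps * efficiency      # usable throughput
    return vm_size_gbit / effective_gbps / 3600  # seconds -> hours

# A 500 GB VM over a 1 Gbps link vs. a 10 Gbps link:
print(f"1 Gbps : {migration_time_hours(500, 1):.1f} h")   # ~1.6 h
print(f"10 Gbps: {migration_time_hours(500, 10):.2f} h")  # ~0.16 h (~10 min)
```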
With StarWind you can use the local storage resources to create a fault-tolerant iSCSI SAN, which also protects against unplanned downtime, since the data is synchronized continuously.
2. Hyper-V Replica
Enabling Hyper-V Replica at the VM level asynchronously replicates VM data every 5 minutes. This is an effective level of protection for disaster recovery and enables manual failover when needed. Besides manual failover, StarWind also provides automatic failover, offering continuous availability, zero data loss, and the best RTO and RPO values. Since StarWind performs synchronous mirroring, there is no difference between the data kept on the two storage servers.
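The practical difference shows up in the recovery point objective (RPO): with 5-minute asynchronous replication you can lose up to 5 minutes of writes, while a synchronous mirror acknowledges a write only after both copies have it. The Python sketch below models that difference; the write rate is an assumed value for illustration only:

```python
# Illustrative RPO comparison: asynchronous replication on a fixed interval
# versus synchronous mirroring. The write rate is an assumed value.

REPLICATION_INTERVAL_S = 5 * 60   # Hyper-V Replica: replicate every 5 minutes
WRITE_RATE_MB_S = 20              # assumed average write rate of the VM

# Worst case for asynchronous replication: the failure happens just before
# the next replication cycle, so everything written since the last cycle is lost.
async_rpo_seconds = REPLICATION_INTERVAL_S
async_max_data_loss_mb = WRITE_RATE_MB_S * async_rpo_seconds

# Synchronous mirroring: a write is acknowledged only after both nodes
# have it, so no acknowledged write is lost on failover.
sync_rpo_seconds = 0
sync_max_data_loss_mb = 0

print(f"Asynchronous (5 min): RPO = {async_rpo_seconds} s, "
      f"up to {async_max_data_loss_mb / 1024:.1f} GB of writes lost")
print(f"Synchronous mirror  : RPO = {sync_rpo_seconds} s, "
      f"{sync_max_data_loss_mb} MB lost")
```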
3. SMB 3.0
SMB 3.0 provides a very simple way to cluster Hyper-V hosts. At least 3 hosts and all the associated network infrastructure are required to build a solution with SMB 3.0 shared storage. In this scenario the storage server is a single point of failure. Additionally, all read and write requests have to go over the network.
With StarWind you only need 2 hosts, and read requests are processed locally on the disks. Only write requests need an acknowledgment sent over the network by the other host. This configuration also eliminates the single point of failure.
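A simple latency model makes that difference concrete: reads never leave the node, while a synchronously mirrored write is acknowledged only after the partner node confirms it over the network. The disk and network figures in this Python sketch are assumptions for illustration only:

```python
# Simple latency model for a two-node synchronous mirror.
# All latency figures are assumed for illustration.

LOCAL_READ_MS = 0.5       # read served from local disk/cache
LOCAL_WRITE_MS = 1.0      # write committed to local disk
REMOTE_WRITE_MS = 1.0     # same write committed on the partner node
NETWORK_RTT_MS = 0.2      # round trip on the synchronization network

def read_latency_ms() -> float:
    # Reads never leave the node: only the local disk is involved.
    return LOCAL_READ_MS

def write_latency_ms() -> float:
    # Both copies are written in parallel, but the acknowledgment can only
    # return after the partner has committed and replied over the network.
    return max(LOCAL_WRITE_MS, NETWORK_RTT_MS + REMOTE_WRITE_MS)

print(f"read : {read_latency_ms():.1f} ms (local only)")
print(f"write: {write_latency_ms():.1f} ms (waits for partner ack)")
```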
4. Hyper-V cluster with SAS JBOD as shared storage
SAS JBOD is a very easy-to-use solution for creating high-performance Hyper-V clusters. However, it requires dedicated hardware, and a single JBOD does not guarantee 100% fault tolerance because not all the components inside it are redundant.
With StarWind, there is no need for extra disks, dedicated controllers, or chassis, because it builds highly available shared storage from the hardware that is already present in the Hyper-V servers. Of course, you can still use a JBOD in combination with StarWind. In that case the JBOD can also make use of SATA and PCIe SSDs in addition to SAS disks.
5. Scale-Out File Server
Scale-Out File Server allows for clustering two or more SMB 3.0 servers (up to four) for fault tolerance, high performance, and load balancing. External shared storage is also necessary to cluster Scale-Out File Servers. Such a complex architecture has a long I/O path (reads first travel over the network and are then addressed to the attached storage: LAN, then DAS), which degrades performance.
SMB 3.0 only scales with multiple clients. Requests from a single client are processed by just one node, so many clients are needed to spread the load and attain better utilization.
StarWind requires only two physical servers to provide fault tolerance and high performance, which reduces hardware expenditure by at least half compared to the Microsoft reference configuration. Reads are performed locally, which shortens the I/O path and reduces latency. Using iSCSI MPIO distributes client requests between the nodes, which automatically load balances the requests and thus increases performance.
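To illustrate how MPIO spreads the load, here is a minimal round-robin path selector in Python. It is only a sketch of the round-robin policy idea; real path selection is handled by the operating system's multipath layer, and the node names are made up:

```python
# Minimal sketch of a round-robin policy, similar in spirit to an MPIO
# "round robin" path selection policy. Node names are made up.
from itertools import cycle

class RoundRobinPaths:
    """Cycle I/O requests across all available iSCSI paths."""

    def __init__(self, paths):
        self._paths = cycle(paths)

    def next_path(self) -> str:
        return next(self._paths)

paths = RoundRobinPaths(["node-a:3260", "node-b:3260"])

# Eight requests end up evenly spread over both nodes.
for request_id in range(8):
    print(f"request {request_id} -> {paths.next_path()}")
```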
Conclusion
As you can see, there are still a lot of scenarios in which StarWind is the better solution. I, for one, sometimes struggle with the clustering options offered by Microsoft, so for me personally StarWind can bring some simplicity to some of the scenarios mentioned above. There is a free version of StarWind for use with 2 nodes; you can download it from the StarWind site after you register. For more information on the differences between the free and paid versions, see this PDF document.
2 Comments
In the first table StarWind has a lot of “Unlimited” values. I always get the chills when I read a vendor stating that; to me it reads as if they have not bothered to test properly.
All solutions have a limit (including StarWind); the question is where that limit is.
With VMware’s numbers you can at least count on testing having gone into them, and that they reflect what was found to be the limit or what was feasible to test. (Disclosure: VMware employee, but not in marketing.)