Sunday, September 7, 2014

About a VMware OpenStack vs. Red Hat OpenStack comparison

I watched one of the sessions from the last VMworld about the VMware Integrated OpenStack solution. I found it very interesting, and a couple of slides really caught my attention. The messaging goes that OpenStack sits atop a set of compute, storage and network resources, but does not impose what those resources are made of. This is very true, and one of the nice things about OpenStack.

In the case of VMware, they presented how they can fill all of those resources in an OpenStack solution: compute with vSphere, storage with VSAN and network with NSX. Further to that, they said vCAC and vCOPS are complementary and you could use them as well (making you wonder why on earth you would be running OpenStack if you have all of those products … but that is another story). Following that, there were a couple of slides presenting a comparison between an OpenStack solution built with VMware products and another built with Red Hat products. The slide quoted a document by Principled Technologies as supporting material. That document can be found here:

I find it very interesting that VMware singled out an OpenStack vendor and chose Red Hat. It is worth reading the document and reflecting on it.

In principle, the goal of the testing was to compare VMware vs. Red Hat for running an OpenStack cloud, giving consideration to the hypervisor and storage parts of the solution (NSX and Neutron were left out of consideration).

The title of the document is "Cost and Performance Comparison for OpenStack Compute and Storage Infrastructure". The testing is done using common tools to measure storage performance, and by running a Cassandra DB on VMs provisioned via OpenStack and measuring its performance with standard benchmarking tools.

The conclusions are very neatly articulated in the document's introduction and can be summarised as: the VMware solution performs better (159% more IOPS) and is less expensive (26% lower cost over three years).

The first point isn't shocking (although I was surprised by the magnitude of the performance advantage, given that I have seen studies showing KVM outperforming ESXi for other DB workloads). But the second point was certainly a surprise.

But as with all studies, what matters is how they reach their conclusions and which items lead to the differences. Let's look at them.

The performance difference can be explained very easily by noticing a few things:

  • the performance was measured only for the storage part of the solution. Not for memory-bound workloads, not for CPU-bound workloads, not for network I/O-bound workloads.
  • the tests run to measure storage performance were biased towards reads (a 70/30 read-to-write ratio in all cases). This may or may not be realistic, depending on the workload, but it is probably reasonable.
  • the VMware solution (using VSAN) leverages SSDs for caching; the Red Hat solution (using Red Hat Storage Server) does not.

There you go: the difference in performance is primarily explained by the use of SSDs for caching inside VSAN. If you were to use an SSD-based storage solution, the performance difference would be completely different, probably negligible and not necessarily to VMware's advantage.
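To see why a read-biased mix amplifies the SSD-caching advantage so much, consider a back-of-the-envelope latency model. The latency figures and the single-queue model below are my own illustrative assumptions, not numbers from the Principled Technologies study:

```python
# Back-of-the-envelope model of a 70/30 random read/write mix, comparing an
# all-HDD backend with one whose reads are served from an SSD cache.
# All latency figures are illustrative assumptions, not measurements.

HDD_LATENCY_S = 0.010   # ~10 ms per random HDD I/O (assumed)
SSD_LATENCY_S = 0.0001  # ~0.1 ms per cached SSD read (assumed)
READ_FRACTION = 0.70    # the 70/30 read-to-write mix used in the tests

def blended_iops(read_latency_s, write_latency_s, read_fraction):
    """IOPS for a mixed workload, from the mix-weighted average latency."""
    avg_latency = (read_fraction * read_latency_s
                   + (1 - read_fraction) * write_latency_s)
    return 1.0 / avg_latency

all_hdd = blended_iops(HDD_LATENCY_S, HDD_LATENCY_S, READ_FRACTION)
ssd_cached = blended_iops(SSD_LATENCY_S, HDD_LATENCY_S, READ_FRACTION)

print(f"all-HDD: {all_hdd:.0f} IOPS, SSD-cached reads: {ssd_cached:.0f} IOPS "
      f"({ssd_cached / all_hdd:.1f}x)")
```

Even with identical write paths on both sides, serving 70% of the operations from flash multiplies the blended IOPS severalfold in this toy model. Put SSDs behind both solutions and that multiplier largely disappears.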

In defence of those conducting the testing, Red Hat currently does not offer a scale-out storage solution that can use SSDs for caching only. You can use GlusterFS with SSDs, but it will be very expensive.

However, if VSAN were removed from the equation and both solutions were compared using common storage from, say, NetApp (using SSD for caching), you would probably get performance equivalent to the VSAN scenario. Arguably, that would be a more open solution, because unlike VSAN, NetApp storage would not be limited to working with vSphere only.

The price difference comes from:

  • using dedicated servers for running Red Hat Storage Server: this effectively more than doubles the cost of the hardware.
  • the cost of Red Hat Storage Server itself (I am unfamiliar with how this is licensed, so I can't comment and take it as accurate, of course).
  • the cost of using a full-blown RHEL for running KVM.

Before going forward, I would like to quote something from the test document, and ask the reader to keep in mind the title of the test document itself:

"While Red Hat does provide Red Hat Enterprise Linux OpenStack Platform, we left OpenStack support out of these calculations as stated above because each OpenStack environment and support engagement is so variable"

This test was commissioned to compare OpenStack solutions, but OpenStack solution pricing was not considered … Confusing, isn't it?

In the test, they chose RHEL Server to run KVM. Given that they are using RHEL strictly for running KVM, they should have chosen RHEV instead, which is also supported on the Dell PowerEdge servers they had at hand. This matters because it is more lightweight, optimised for running KVM with greater VM density and … less expensive:

- Red Hat Enterprise Virtualization, Premium (24x7x365 support):
4 (socket pairs) x $1,499 = $5,996.
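The socket-pair arithmetic above can be sketched as a small helper. The $1,499 figure is the RHEV Premium per-socket-pair price quoted above; the assumption that the study's compute nodes amount to 4 socket pairs (e.g. four 2-socket servers) is mine:

```python
# RHEV Premium is licensed per socket pair (two CPU sockets per license).
RHEV_PREMIUM_PER_SOCKET_PAIR = 1499  # USD, 24x7x365 support, as quoted above

def rhev_license_cost(socket_pairs, price_per_pair=RHEV_PREMIUM_PER_SOCKET_PAIR):
    """Total RHEV licensing cost for a given number of socket pairs."""
    return socket_pairs * price_per_pair

print(rhev_license_cost(4))  # → 5996
```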

This already reduces the cost difference a bit … but actually, to make this an apples-to-apples comparison in the context of running OpenStack, they should have included Enterprise Plus for vSphere, because on RHEV you can actually use Neutron to implement distributed switches and/or to leverage plugins from SDN vendors, and you can't do that on vSphere Enterprise (which uses the standard vSwitch only, not very cloud-friendly I believe). Therefore, the pricing for the VMware software should also increase accordingly.

Just adding those two licensing considerations, which lower the price of the Red Hat solution and increase the VMware one by quite a bit, the prices of the two solutions would be almost equivalent in practice. Then again, if you consider using a storage solution from a vendor like NetApp or EMC instead of the vendor's scale-out option, you can build a Red Hat-based solution with equivalent performance and lower cost.

It then also becomes a question of choosing between a converged solution and separate external shared storage. A VSAN approach would have a density advantage (it uses less rack space), and would perhaps be easier to manage too. There is also an important element to consider: operational cost, retraining of staff, etc. An external storage-based solution offers greater flexibility, because it can be shared for things other than vSphere. Also, given that storage needs grow faster than compute needs in most environments, external storage may be cheaper to run in the long term, although this depends on each environment.

Net net, just like with all vendor-commissioned TCO studies, I recommend that people actually read the studies (as opposed to just retaining the conclusions) and reflect on them, then customise the study methodology for their own environments. Such studies are usually a valuable source of information and a framework for setting up a valid comparison, but you can never assume that they compare apples to apples.


  1. This is not an apples-to-apples comparison; it is comparing a "Converged Architecture" (VSAN) with a "Dedicated Storage Tier" (Red Hat Storage). Red Hat Storage is a distributed file system across dedicated storage servers; VSAN is local disks presented as a virtual SAN.
  2. The VSAN configuration consisted of one SSD and several HDDs. The Red Hat Storage configuration was all HDD, no SSD. As the workload in the test was small enough to fit entirely on the single 1.6TB SSD, this test was an SSD-to-HDD comparison, which is not equivalent.
  3. Ceph is an alternative storage solution for OpenStack. Some notes on Ceph: I do not recommend Ceph be used with converged infrastructures because, especially in the cloud, storage grows much faster than compute. Being forced to scale both simultaneously has cost disadvantages.

    1. Hello Jonathan, thanks for reading, and thanks for your comment. I agree with all your remarks. I think indeed that this wasn't an apples-to-apples comparison. Moreover, the title of the comparison is misleading, as I tried to suggest in my writing. I believe this is really just a comparison of VSAN vs. GlusterFS, and the caveat of SSD vs. HDD still applies to understanding the performance difference.

  2. Nice blog... It is a helpful comparison between the two OpenStack solutions. Looking for an OpenStack alternative with good performance and storage infrastructure.