I recently saw a slide stating that decoupling networking hardware and software is a must for achieving network virtualisation. This would also enable hardware independence, and provide the missing tool for the holy grail: the software defined data center.
I suppose the best thought leadership is all about making people believe that a (potential) means to an end is really the objective itself. It is like saying "I need a coke" when in fact you are just thirsty, and a coke is merely one way to alleviate that thirst. Similarly, while the goal is IT automation, a possible means to that end is a software defined data center, and implementing a software defined data center by virtualising everything on x86 is, in turn, only one option for doing that. In this sense, many are evangelising that you need virtualisation, that because you need virtualisation you need network virtualisation, and that therefore you need to be able to run virtual networks the way you run virtual servers.
I don't argue against the benefits of server virtualisation, or, in certain environments, against the need for network virtualisation either. I just find it interesting how many marketing messages create the perception that virtualisation is a goal in itself. Much in the same way the SDN message has been distorted in the last few years: it is no longer about separating the control and data planes and opening both of them, but about running both in software (even if tightly integrated) … But that is a topic for another post.
Why Server Virtualisation Was a Necessity
I believe server virtualisation solved a problem created by poorly designed operating systems and applications, which could not fully leverage the compute capacity available to them. The x86 architecture was also not good at providing isolation to the layers above it. In the end, you had a physical server running one OS with an application stack on top that was not capable of using the full compute capacity. Servers were under-utilised for that reason.

Hypervisors therefore solved a deficiency of operating systems and applications. Applications were, for the most part, incapable of using multi-core capabilities, and operating systems were unable to provide proper isolation between running applications. This is what the hypervisor solved. And it was probably a good solution (maybe the best solution at the time), because rewriting applications is clearly much harder than instantiating many copies of the same app and load balancing across them … However, had the OS provided proper containment and isolation, and had the CPU provided performant support for it, a hypervisor would have been far less necessary. Even without rewriting applications for better performance, you could still have run multiple instances. In other words, if we had had Zones on Linux 8 years ago, the IT world would perhaps be somewhat different today. (Although in fact we had them … perhaps in the wrong hands, though.)
Anyway, it is clear that today, for instance, running applications in LXC containers is more efficient from a performance standpoint than running them on a hypervisor. It will be interesting to see how that evolves going forward.
We may need network virtualisation, but we do not need a network hypervisor
Similarly, an IP network does not natively provide proper isolation, nor let multiple users with different connectivity and security requirements share the network. IP networks are not natively multi-tenant and have no built-in ability to segregate traffic for different tenants or applications; they were really conceived to be the opposite.

There are solutions such as MPLS VPNs or plain VRFs: in a nutshell, you virtualise the network to provide those functions. You do that at the device level, and you can scale it at the network level (again, MPLS VPN being an example of that, although it uses IP only in the control plane; the data plane uses MPLS). VPLS is another example, albeit for delivering Ethernet-like services.
Arguably, MPLS VPNs and/or VPLS are not the right solution for providing network isolation and multi-tenancy in a high-density data center environment. So there are alternatives that achieve this using various overlay technologies. Some are looking to do it with a so-called network hypervisor, essentially running every network function on x86 as an overlay.

For those supporting this approach, anything that is "hardware"-bound is wrong. Some people would say that VPLS, MPLS VPN, VRFs, etc. are hardware solutions, and that what we need are software solutions.
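To make the overlay idea concrete, here is a minimal Python sketch of the encapsulation header used by VXLAN (RFC 7348), one of the common overlay technologies: each tenant's traffic is tagged with its own 24-bit VNI, so frames from different tenants stay segregated even while crossing the same physical underlay. The function name is mine, purely for illustration.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: flags (1 byte, 0x08 = 'VNI present'), 3 reserved bytes,
    24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    # 'I' packs the VNI shifted left by 8, leaving the trailing
    # reserved byte as zero.
    return struct.pack("!B3xI", 0x08, vni << 8)

# Tenant traffic carrying VNI 5000, isolated from any other VNI.
hdr = vxlan_header(5000)
assert len(hdr) == 8
assert int.from_bytes(hdr[4:7], "big") == 5000
```

Note that nothing in this header dictates where the encapsulation happens: a hypervisor vSwitch can add it in software, and many switch ASICs can add it in hardware.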
I believe this is not true. A VRF on a network router or switch involves software, which programs the underlying hardware to implement different forwarding tables for a particular routing domain and set of interfaces. A virtual router running as a VM and connecting logical switches is pretty much the same thing, except that its forwarding table is implemented by an x86 processor.

I do not like this partial, simplistic vision of hardware vs. software solutions. There are only hardware+software solutions. The difference is whether you use hardware specialised for networking or hardware for general computing. The first performs significantly better (by orders of magnitude), whilst the second provides greater flexibility. The other aspect is provisioning and configuration. Some would argue that if you run network virtualisation in software (again, meaning on x86 on top of a hypervisor) it is easier to configure and provision. But this is a matter of implementation only.
Conceptually, there is no reason why provisioning network virtualisation on specialised hardware would be any harder than doing it on general compute hardware.
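The point that a VRF is really software over some forwarding substrate can be sketched in a few lines of Python: a VRF is conceptually just a separate forwarding table per routing domain, with lookups scoped to that table. The class and names below are illustrative assumptions, not any vendor's API; whether the table ends up in a switch ASIC or in x86 memory is exactly the implementation detail discussed above.

```python
from ipaddress import ip_address, ip_network

class Vrf:
    """One isolated routing domain: its own forwarding table."""

    def __init__(self, name: str):
        self.name = name
        self.routes = {}  # prefix -> next hop

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[ip_network(prefix)] = next_hop

    def lookup(self, dest: str):
        # Longest-prefix match across this VRF's table only.
        addr = ip_address(dest)
        matches = [p for p in self.routes if addr in p]
        if not matches:
            return None
        return self.routes[max(matches, key=lambda p: p.prefixlen)]

# Two tenants can reuse the same 10.0.0.0/24 space without clashing,
# because every lookup is scoped to a single VRF.
red, blue = Vrf("red"), Vrf("blue")
red.add_route("10.0.0.0/24", "192.0.2.1")
blue.add_route("10.0.0.0/24", "198.51.100.1")
assert red.lookup("10.0.0.5") != blue.lookup("10.0.0.5")
```

The isolation comes from keeping the tables separate, not from where they are stored; that is why provisioning a VRF on specialised hardware need not be harder than provisioning its x86 equivalent.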
You will always need a physical network … make the best out of it
Because you always need a physical network in a data center, it is also evident that if that network infrastructure provides the right isolation and multi-tenancy, with simplified provisioning and operation, it is a more efficient way of achieving the goal of automating IT than duplicating it with an overlay on top of the physical infrastructure (much as LXC is more efficient than a hypervisor). This leads to the title of the post.
The goal is not to do virtualisation. Virtualisation is not a goal. The goal is not to do things in software vs. hardware either.
The goal is to enable dynamic connectivity and policy for the applications that run the business an IT organisation supports. And to do so fast, and in an automated way, in order to reduce the risk of human error. Whether you do it on specialised, sophisticated hardware or on general-purpose x86 processors is a matter of implementation, with merits and demerits to both approaches. Efficiency is usually achieved when software sits as close to specialised hardware as possible.