Virtualization has transformed the enterprise. The ability to run multiple virtual servers on a single piece of hardware has delivered huge efficiency gains.
With software called a hypervisor, an IT administrator can make a single CPU look like two, a single network interface look like four, and so on. The hypervisor decouples logical systems from the physical hardware, and this brings other unexpected gifts such as mobility. But the biggest advantage of virtualization is that it eliminates poor utilization through over-provisioning. Over-provisioning means behaving as if you have more hardware than you really do. It is not unlike overbooking in travel. Airlines know that some confirmed passengers will be no-shows, so they sell more tickets than there are seats on the plane. This way the plane is always full, even if some passengers make last-minute travel changes. Most of the time this makes the airline very efficient, which means you get a cheaper ticket. It can also backfire and make the news, but that's the exception.

Just like airplane seats, servers can be overbooked. You start with the assumption that it is very unlikely that all your applications will need full capacity at the same time. For example, the financial systems may be strained during a month-end close, but the web servers are probably slacking off. Virtualization exploits this imbalance to your advantage.
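The arithmetic behind overbooking is worth a quick sketch. With made-up numbers, and the simplifying assumption that workloads peak independently, the chance of every application demanding full capacity at once is vanishingly small:

```python
# Back-of-the-envelope over-provisioning math (illustrative numbers only).
# Assume each of 10 workloads is at peak load 10% of the time, independently.
p_peak = 0.10
n_workloads = 10

# Probability that every workload peaks simultaneously.
p_all_peak = p_peak ** n_workloads
print(f"P(all {n_workloads} workloads peak at once) = {p_all_peak:.0e}")  # 1e-10

# Expected number of workloads busy at any given moment.
expected_busy = p_peak * n_workloads
print(f"Expected workloads busy at any moment: {expected_busy:.1f}")  # 1.0
```

In reality peaks are often correlated (think quarter-end), so capacity planners leave headroom rather than trusting independence, but the basic imbalance is what makes overbooking pay off.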
Despite these benefits, virtualization has not been widely adopted in High Performance Computing (HPC). Adding a hypervisor adds overhead, and for performance-obsessed HPC, that is bad news. My friend Mohan from VMware spoke about this topic at this week's HPC Advisory Council meeting at Stanford. VMware is making advances in bringing the benefits of virtualization to HPC without sacrificing performance.
One way to virtualize I/O is device partitioning, and Mohan talked about advances in this area, including SR-IOV. Single Root I/O Virtualization (SR-IOV) is a PCI-SIG standard for virtualizing the I/O path from the server to a peripheral device. SR-IOV takes a single PCI device and carves it up into multiple logical devices, or functions: one hardware device can expose multiple lightweight Virtual Functions (VFs) for use by different VMs. There are limitations to this approach; for example, some of the mobility features of virtualization are lost (e.g., no vMotion). Still, this is an interesting technology, and Mellanox supports it for its InfiniBand interconnects.
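To make this concrete: on Linux, the kernel exposes SR-IOV controls through sysfs files on the Physical Function's device directory. Here is a minimal sketch, assuming a Linux host with an SR-IOV capable NIC; the device path is hypothetical, while `sriov_totalvfs` and `sriov_numvfs` are the standard kernel interface files:

```python
from pathlib import Path

def enable_vfs(pf_device: Path, num_vfs: int) -> int:
    """Enable SR-IOV Virtual Functions on a PCI Physical Function.

    pf_device is the PF's sysfs directory, e.g. (hypothetically)
    /sys/bus/pci/devices/0000:03:00.0/ for a Mellanox adapter.
    """
    # The device advertises how many VFs it can carve out.
    total = int((pf_device / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")
    # The kernel requires resetting to 0 before changing the VF count.
    (pf_device / "sriov_numvfs").write_text("0")
    (pf_device / "sriov_numvfs").write_text(str(num_vfs))
    return num_vfs

# Hypothetical usage (requires root and real SR-IOV hardware):
#   enable_vfs(Path("/sys/bus/pci/devices/0000:03:00.0"), 4)
```

Each VF then shows up as its own PCI device that a hypervisor can pass straight through to a VM, which is where the bypass-the-hypervisor performance comes from.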
RDMA (Remote Direct Memory Access) is a memory access protocol that lets two computers share the contents of memory directly, bypassing both operating systems and the traditional network stack. Since so much overhead comes from the TCP/IP stack, bypassing it lets RDMA deliver the high-throughput, low-latency networking that HPC workloads need. Mohan showed an ANSYS Fluent benchmark over EDR InfiniBand scaling 20X faster at high core counts compared to conventional Ethernet. This makes RDMA very useful for tightly coupled workloads, including MPI-based applications. RDMA has been available in the public cloud for a couple of years with Microsoft Azure H-series VMs.
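What makes RDMA feel different from socket programming is its one-sided semantics: the writer places data directly into memory the reader has registered, and no receive call runs on the reader's CPU. As a loose single-host analogy only (real RDMA involves a NIC, registered memory regions, and the verbs API, none of which appear here), Python's shared memory shows the shape of it:

```python
# Single-host ANALOGY for RDMA's one-sided writes, not actual RDMA.
# A second process writes directly into a buffer the first process
# "registered"; the reader never issues a recv() or read() call.
from multiprocessing import Process, shared_memory

def remote_writer(name: str) -> None:
    # The "remote" side attaches to the registered region by name and
    # writes payload bytes straight into it.
    region = shared_memory.SharedMemory(name=name)
    region.buf[:5] = b"hello"
    region.close()

if __name__ == "__main__":
    # "Register" a memory region the peer is allowed to write into.
    region = shared_memory.SharedMemory(create=True, size=64)
    writer = Process(target=remote_writer, args=(region.name,))
    writer.start()
    writer.join()
    # The data simply appears in local memory -- no receive path ran.
    print(bytes(region.buf[:5]))  # prints b'hello'
    region.close()
    region.unlink()
```

In real RDMA the "name" is a remote memory key exchanged out of band, and the NIC performs the write across the fabric, which is exactly why the TCP/IP stack and the remote OS stay out of the data path.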
Virtualization delivers outsize gains to enterprises; the proof is its near ubiquity. HPC users can now realize these same gains, thanks to new technologies that are available today.