Combining and separating virtual machine workloads is often frustrating and time-consuming. Here are some tips to help you sort through it and maintain your balance.
It may surprise you to know that creating a new virtual machine (VM) for each and every service is not the right answer to workload balance. The keyword in this workload balancing act is utilization. More efficient utilization is, after all, why you chose virtualization in the first place — right? Don’t spoil the beauty and elegance of virtualization by infecting it with inefficient utilization.
If you’re still confused about how to separate or combine virtual workloads to create an efficient VM host that is also happy and balanced, you’re not alone. This confusion is one reason why virtualization software vendors add dynamic workload and movable workload features to their software. Those features aren’t perfect and that’s where your expertise becomes important in keeping your workloads balanced and your phone from ringing.
One very effective way to manage disparate workloads is to use what’s known as OS-level virtualization. In this type of virtualization, whose isolated environments are variously known as zones, containers, or chroot jails, you divide the operating system into secure segments, each of which runs an application or set of applications. Think of each segment or container as a separate VM. This type of virtualization makes the most efficient use of the VM host’s resources.
Within a VM host, you can run databases, web services, network services, and applications (or combinations of those) in these isolated zones. The only limitation is that your services must all be able to run on Linux, run in a chrooted environment, and use the currently running VM host kernel.
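If you want to experiment with the chroot flavor of this isolation, the following Python sketch shows the basic mechanics. The jail path and the service command are hypothetical placeholders, not anything prescribed in this article; in practice you would point it at a root filesystem you have already populated for the service.

#!/usr/bin/env python3
"""Minimal chroot jail launcher (requires root).

A sketch of chroot-style isolation, assuming a prepared root
filesystem already exists at the hypothetical path /srv/jails/webapp.
The path and command below are illustrative only.
"""
import os
import sys

JAIL_ROOT = "/srv/jails/webapp"                    # hypothetical, pre-populated root fs
COMMAND = ["/usr/sbin/httpd", "-D", "FOREGROUND"]  # illustrative service command

def run_in_jail(root, argv):
    # Confine this process (and its children) to the jail's filesystem.
    os.chroot(root)
    os.chdir("/")             # make sure the old working directory is unreachable
    os.execvp(argv[0], argv)  # replace this process with the jailed service

if __name__ == "__main__":
    if os.geteuid() != 0:
        sys.exit("chroot requires root privileges")
    run_in_jail(JAIL_ROOT, COMMAND)

Note that every jailed service still runs on the host's kernel, which is exactly the limitation described above.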
For workloads or product options that don’t lend themselves to OS-level virtualization, you may need to resort to full virtualization with products such as VMware, Xen, Hyper-V, or xVM. For fully virtualized workloads, there are some basic guidelines you should follow.
First, always purchase hardware that is optimized for virtualization. Choose AMD-V or Intel VT CPUs for all your VM host systems. These CPUs accommodate the special needs of virtual machines; by moving some of the virtualization work from software into hardware, they provide greater efficiency and fewer of the bottlenecks typically associated with VMs.
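On a Linux host you can verify that the CPU actually exposes these extensions before you commit to a hypervisor. The short Python sketch below simply looks for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo; it is a convenience check of my own, not part of any vendor’s tooling.

#!/usr/bin/env python3
"""Check whether the host CPU advertises hardware virtualization.

A sketch for Linux hosts: Intel VT shows up as the 'vmx' flag and
AMD-V as the 'svm' flag in /proc/cpuinfo.
"""

def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT (vmx)"
                if "svm" in flags:
                    return "AMD-V (svm)"
    return None

if __name__ == "__main__":
    support = hw_virt_support()
    print(support or "No hardware virtualization flags found "
                     "(or the feature is disabled in the BIOS)")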
Second, use a Storage Area Network (SAN) for applications, such as databases, that are disk I/O intensive. Don’t use local disks or Network Attached Storage (NAS) for I/O-intensive applications; these storage options introduce too much latency, and performance will suffer.
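If you’re unsure how much latency a given storage option adds, a crude probe like the following can give you a feel for it. This sketch times small synchronous writes (write plus fsync), a rough stand-in for database commit behavior; the mount points listed are hypothetical examples you would replace with your own local, SAN, and NAS paths.

#!/usr/bin/env python3
"""Rough write-latency probe for candidate storage back ends.

A sketch only: it times small synchronous writes, which roughly
approximate database commit latency. The mount points are
hypothetical examples.
"""
import os
import time

CANDIDATES = ["/var/lib/local-disk", "/mnt/san-lun0", "/mnt/nas-share"]
WRITES = 200
BLOCK = b"x" * 4096  # 4 KiB, roughly one database page

def fsync_latency_ms(directory):
    path = os.path.join(directory, "latency-probe.tmp")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(WRITES):
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())   # force the write through to stable storage
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed / WRITES * 1000.0

if __name__ == "__main__":
    for d in CANDIDATES:
        if os.path.isdir(d):
            print(f"{d}: {fsync_latency_ms(d):.2f} ms per synced 4 KiB write")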
Finally, supply adequate RAM to your VMs, but don’t overdo it. Surprisingly, adding more RAM to a VM won’t make it operate faster once you’ve allocated enough to handle its workload. Adding too much RAM to a VM wastes resources and depletes the pool the VM host has available for dynamic resource allocation. You will effectively starve your VM host, and all of your VMs may suffer the consequences.
If you find that all of your VMs are running inexplicably slowly, try decreasing the amount of RAM allocated to them. Don’t choke them, but the amount recommended for a physical machine should be sufficient for most applications.
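One quick way to sanity-check a guest’s allocation is to compare what it was given with what it actually uses. The sketch below, run inside the VM, reads /proc/meminfo and flags a guest that leaves a large share of its RAM untouched; the 50 percent threshold is an arbitrary illustration, not a rule.

#!/usr/bin/env python3
"""Spot-check whether a guest's RAM allocation is oversized.

Run inside the VM. A sketch, not a sizing tool: it compares MemTotal
to MemAvailable (present on reasonably modern kernels) and flags
guests that never touch a large share of their allocation.
"""

def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo_kb()
    total, available = m["MemTotal"], m["MemAvailable"]
    unused_pct = available / total * 100
    print(f"Allocated: {total // 1024} MiB, unused/reclaimable: {unused_pct:.0f}%")
    if unused_pct > 50:   # arbitrary threshold for illustration
        print("Consider returning some of this RAM to the host's pool.")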
In tackling workload and VM host combinations, you may believe that separating workloads and assigning a single workload per VM is the correct approach, but it isn’t; it’s less efficient. For example, if you have a high-CPU workload such as a web server and a high disk I/O workload such as a database, they should be combined into a single VM. The reason for the increased efficiency is that the two workloads require different resources; in other words, their resource demands don’t overlap.
Combine workloads that use different resources. You’ll create more efficient VMs and better utilize CPU, memory, and network bandwidth.
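The pairing logic is simple enough to express in a few lines. The following toy sketch classifies workloads by their dominant resource and co-locates CPU-bound workloads with disk-bound ones; the workload names and utilization numbers are invented purely for illustration.

#!/usr/bin/env python3
"""Pair workloads whose dominant resources don't overlap.

A toy sketch of the idea above: classify each workload by its dominant
resource, then co-locate a CPU-bound workload with a disk-bound one.
The profiles are made up for illustration.
"""

# Rough utilization profiles (0-1) per workload: (cpu, disk_io)
WORKLOADS = {
    "web-frontend": (0.80, 0.10),
    "database":     (0.25, 0.85),
    "batch-report": (0.75, 0.15),
    "file-archive": (0.15, 0.70),
}

def dominant(profile):
    cpu, disk = profile
    return "cpu" if cpu >= disk else "disk"

def pair_non_overlapping(workloads):
    cpu_bound = [w for w, p in workloads.items() if dominant(p) == "cpu"]
    disk_bound = [w for w, p in workloads.items() if dominant(p) == "disk"]
    # Co-locate one CPU-bound with one disk-bound workload per VM.
    return list(zip(cpu_bound, disk_bound))

if __name__ == "__main__":
    for vm_id, pair in enumerate(pair_non_overlapping(WORKLOADS), start=1):
        print(f"VM {vm_id}: {' + '.join(pair)}")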
Breaking down workloads into their resource usage components may help you to better combine and separate those workloads to create a more efficient virtual infrastructure. By making good workload management decisions, your job as administrator becomes easier, your users will be happier, your management will continue to embrace virtualization as a clearly valuable technology, and ultimately you’ll enjoy greater job satisfaction.
Keep a close watch on utilization with capacity and performance tools and make adjustments as necessary to your environment. Finally, don’t be afraid to remove resources that are being wasted and return them to the available pool.
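Even a crude script can tell you when a host is sitting mostly idle. The sketch below samples /proc/stat to estimate CPU busy time and flags an under-utilized Linux host; real capacity planning belongs in proper monitoring tools, so treat this only as an illustration of the idea, with an arbitrary threshold.

#!/usr/bin/env python3
"""Tiny utilization check to spot wasted capacity on a Linux host.

A sketch in the spirit of "watch utilization and reclaim what's
wasted": it samples /proc/stat twice to estimate CPU busy time.
"""
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]     # aggregate "cpu" line
    values = list(map(int, fields))
    idle = values[3] + values[4]              # idle + iowait
    return sum(values), idle

def cpu_busy_percent(interval=1.0):
    total1, idle1 = cpu_times()
    time.sleep(interval)
    total2, idle2 = cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return busy / (total2 - total1) * 100

if __name__ == "__main__":
    busy = cpu_busy_percent()
    print(f"CPU busy: {busy:.0f}%")
    if busy < 20:    # arbitrary threshold for illustration
        print("Host looks under-utilized; consider consolidating workloads.")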
Kenneth Hess is a Linux evangelist and freelance technical writer on a variety of open source topics including Linux, SQL, databases, and web services. Ken can be reached via his website at http://www.kenhess.com. Practical Virtualization Solutions by Kenneth Hess and Amy Newman is available now.
Comments on "Walking the Workload Tightrope Part Three"
I agree with most of the hints… except for combining non-overlapping workloads, which should be considered carefully. Although this technique can optimize overall workload execution in the virtualized infrastructure, it also penalizes other things. In particular, it breaks application isolation, which is bad from the point of view of faults, maintenance, etc.
Let me clarify with an example. Consider the same case you mention: a CPU-intensive application and a disk-intensive application coexisting in the same VM. A single OS hang will break both applications (whereas if they were in separate VMs, it would affect only one of them). In addition, if some maintenance operation is required for either the CPU-intensive or the database application (e.g., an OS reboot because the administrator needs to increase the VM's assigned RAM), that operation penalizes the other application as well.
In summary, consider not only workload efficiency when combining non-overlapping workloads, but also how it impacts application isolation.