Apache Hadoop provides a platform for building distributed systems for massive data storage and analysis using a large cluster of standard x86-based servers. It uses data replication across hosts and racks of hosts to protect against individual disk, host, and even rack failures. A job scheduler can be used to run multiple jobs of different sizes simultaneously, which helps to maintain a high level of resource utilization. Given Hadoop's built-in reliability and workload consolidation features, it might appear there is little need to virtualize it.
However, several use cases make virtualization of this workload compelling:
- Enhanced availability with capabilities like VMware High Availability (HA) and Fault Tolerance (FT). The performance implications of protecting the Hadoop master daemons with FT were examined in a previous paper.
- Easier deployment with vSphere tools or Serengeti, leading to easier and faster datacenter management.
- Sharing resources with other Hadoop clusters or completely different applications, for better datacenter utilization.
In addition, virtualization enables new ways of integrating Hadoop workloads into the datacenter:
- Elasticity enables a cluster to be grown quickly as needs warrant, and to be shrunk just as quickly in order to release resources to other applications.
- Multi-tenancy allows multiple virtual clusters to share a physical cluster while maintaining the highest levels of isolation between them.
- Greater security within each cluster, as well as elasticity (the ability to quickly resize a cluster), requires separating the computational (TaskTracker) and data (DataNode) parts of Hadoop into different machines. However, data locality (and thus performance) requires them to remain on the same physical host, which leads naturally to running them in separate virtual machines on that host.
A detailed discussion of these points is beyond the scope of this paper, but can be found elsewhere. As great as the current and potential future benefits of virtualization are for Hadoop, they are unlikely to be realized if the performance costs are too high. The focus of this paper is to quantify these costs and to develop an understanding of the implications of alternative virtual configurations.
This is done using a set of well-understood, high-throughput benchmarks: the TeraSort suite (TeraGen, TeraSort, and TeraValidate). While these applications are not generally representative of production clusters running many jobs of different sizes and priorities, their demands on the infrastructure (CPU, memory, network, and storage bandwidth) are at the high end of typical production jobs. As such, they are good tools for stressing the virtualization layer.
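For reference, the suite is normally run as three consecutive MapReduce jobs from the standard Hadoop examples jar. The sketch below assumes a Hadoop 1.x installation; the jar name, the data size (roughly 1 TB here), and the HDFS paths are illustrative assumptions rather than the exact settings used in this study.

    # TeraGen: generate 10 billion 100-byte rows (about 1 TB) of input data
    hadoop jar hadoop-examples.jar teragen 10000000000 /terasort/input

    # TeraSort: sort the generated data
    hadoop jar hadoop-examples.jar terasort /terasort/input /terasort/output

    # TeraValidate: verify that the output is globally sorted
    hadoop jar hadoop-examples.jar teravalidate /terasort/output /terasort/report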
The ultimate goal is for a Hadoop administrator to be able to create a cluster specification that enables all the above advantages while achieving the best performance possible.
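As a rough illustration only, such a specification might resemble the sketch below, loosely modeled on Serengeti's JSON cluster definitions. The node group names, roles, instance counts, and placement fields shown are assumptions made for illustration and will differ by Serengeti release and by the hardware available.

    {
      "nodeGroups": [
        {
          "name": "master",
          "roles": ["hadoop_namenode", "hadoop_jobtracker"],
          "instanceNum": 1
        },
        {
          "name": "data",
          "roles": ["hadoop_datanode"],
          "instanceNum": 8
        },
        {
          "name": "compute",
          "roles": ["hadoop_tasktracker"],
          "instanceNum": 8,
          "placementPolicies": {
            "groupAssociations": [
              { "reference": "data", "type": "STRICT" }
            ]
          }
        }
      ]
    }

The intent of the placement association in this sketch is to keep each compute virtual machine on the same physical host as a data virtual machine, preserving data locality while keeping the two roles in separate machines, as described above.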