PigOut: Making multiple Hadoop clusters work together
This paper presents PigOut, a system that enables federated data processing over multiple Hadoop clusters. Using PigOut, a user (such as a data analyst) can write a single script in a high-level language that efficiently uses multiple Hadoop clusters, with no need to manually write multiple scripts and coordinate their execution on different clusters. PigOut accomplishes this by automatically partitioning the single, user-supplied script into multiple scripts that run on different clusters, and by generating workflow descriptions that coordinate execution across clusters.
In doing so, PigOut leverages existing tools built around Hadoop, requiring no extra effort from users or administrators. For example, PigOut uses Pig Latin, a popular query language for Hadoop MapReduce, in a virtually unmodified form. Through an evaluation with PigMix, the standard benchmark for Pig, we demonstrate that PigOut's automatically generated scripts and workflow definitions perform comparably to manual, hand-tuned ones. We also report our experience manually writing multiple scripts for a set of federated clusters, and compare that process with PigOut's automated approach.
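Because PigOut consumes Pig Latin in virtually unmodified form, the user-facing input could be an ordinary Pig script like the sketch below. The dataset names, paths, and schemas here are hypothetical, chosen only to illustrate the kind of single script that PigOut might partition when the two inputs live on different clusters:

```pig
-- Hypothetical single-script query; in a PigOut deployment, 'clicks' and
-- 'users' could reside on two different Hadoop clusters.
clicks = LOAD 'clicks/part-*' USING PigStorage('\t')
         AS (user_id:chararray, url:chararray, ts:long);
users  = LOAD 'users/part-*' USING PigStorage('\t')
         AS (user_id:chararray, region:chararray);

-- A cross-dataset join followed by an aggregation.
joined    = JOIN clicks BY user_id, users BY user_id;
by_region = GROUP joined BY users::region;
counts    = FOREACH by_region GENERATE group AS region, COUNT(joined) AS n;

STORE counts INTO 'click_counts_by_region';
```

Under the approach the abstract describes, PigOut would split such a script into per-cluster subscripts (e.g., one per LOAD site) and emit a workflow definition that stages intermediate results between clusters and orders the subscripts' execution.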
Similar IEEE Project Titles
- Application traffic classification in Hadoop distributed computing environment
- A Distributed NameNode Cluster for a Highly-Available Hadoop Distributed File System
- Applying Eco-Threading Framework to Memory-Intensive Hadoop Applications
- An approach for fast and parallel video processing on Apache Hadoop clusters
- A Hadoop Extension to Process Mail Folders and its Application to a Spam Dataset