Performance Modeling for RDMA-Enhanced Hadoop MapReduce
Hadoop MapReduce is a popular parallel programming paradigm that enables scalable and fault-tolerant solutions to data-intensive applications on modern clusters. However, the framework's performance behavior shows that it cannot take full advantage of high-performance interconnects. Recent studies show that by leveraging such interconnects, together with additional features like in-memory merge, pipelined merge and reduce, and pre-fetching and caching of map outputs, the overall performance of MapReduce jobs can be greatly improved.
Existing performance models cannot predict the performance behavior of RDMA-enhanced MapReduce with these features. In this paper, we propose a detailed mathematical model of RDMA-enhanced MapReduce based on a set of cluster-wide and job-level configuration parameters. We also propose a simplified version of this model for predicting large-scale MapReduce job executions and validate it across various system and workload configurations. Predictions from the proposed model match the experimental results to within 2-11%. To the best of our knowledge, this is the first model that correctly predicts the behavior of RDMA-enhanced Hadoop MapReduce.
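To illustrate the kind of parameter-driven prediction the abstract describes, the following is a minimal sketch of a simplified MapReduce job-time model. The formula, parameter names, and default rates below are illustrative assumptions, not the paper's actual model; it merely shows how map waves, an overlapped (pipelined) shuffle, and reduce waves might combine into a runtime estimate.

```python
# Hypothetical simplified cost model for a MapReduce job. All parameters
# and the cost formula are assumptions for illustration; the paper's
# actual model is more detailed and covers RDMA-specific features.

def predict_job_time(input_size_gb, num_maps, num_reduces,
                     map_rate_gbps=0.1, shuffle_bw_gbps=4.0,
                     reduce_rate_gbps=0.15, map_slots=8, reduce_slots=8):
    """Estimate job runtime (seconds) as map waves + shuffle tail + reduce waves."""
    split_gb = input_size_gb / num_maps
    map_waves = -(-num_maps // map_slots)            # ceiling division
    map_time = map_waves * (split_gb / map_rate_gbps)

    # With a pipelined, RDMA-style shuffle, most data movement overlaps
    # with the map phase; here only the final transfer tail is counted.
    shuffle_time = input_size_gb / shuffle_bw_gbps

    partition_gb = input_size_gb / num_reduces
    reduce_waves = -(-num_reduces // reduce_slots)
    reduce_time = reduce_waves * (partition_gb / reduce_rate_gbps)

    return map_time + shuffle_time + reduce_time
```

A model of this shape makes the dependence on job-level knobs (number of map and reduce tasks, slots per node) explicit, which is what allows prediction across system and workload configurations without rerunning the job.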
Similar IEEE Project Titles
- Leveraging Hadoop framework to develop duplication detector and analysis using MapReduce, Hive and Pig
- Automatic Detection and Rectification of DNS Reflection Amplification Attacks with Hadoop MapReduce and Chukwa
- Mammoth: Gearing Hadoop Towards Memory-Intensive MapReduce Applications
- Towards a cost-efficient MapReduce: Mitigating power peaks for Hadoop clusters
- Vessel route anomaly detection with Hadoop MapReduce