Mammoth: Gearing Hadoop Towards Memory-Intensive MapReduce Applications
The MapReduce platform has been widely used for large-scale data processing and analysis. It works well when the hardware of a cluster is well configured. However, our survey indicates that common hardware configurations in small- and medium-sized enterprises may not be suitable for such tasks. The situation is even more challenging for memory-constrained systems, in which memory is a bottleneck resource compared with CPU power and therefore cannot meet the needs of large-scale data processing. According to our survey, the traditional high-performance computing (HPC) system is an example of such a memory-constrained system. In this paper, we develop Mammoth, a new MapReduce system that aims to improve MapReduce performance through global memory management. In Mammoth, we design a novel rule-based heuristic to prioritize memory allocation and revocation among execution units (mapper, shuffler, reducer, etc.), so as to maximize the holistic benefit of the MapReduce job when scheduling each memory unit.
We have also developed a multi-threaded execution engine, which is based on Hadoop but runs in a single JVM on each node. In this execution engine, we implemented the memory-scheduling algorithm to realize global memory management, on top of which we further developed techniques such as sequential disk accessing, multi-cache, and shuffling from memory, and solved the problem of full garbage collection in the JVM. We conducted extensive experiments comparing Mammoth against the native Hadoop platform. The results show that Mammoth can reduce job execution time by more than 40% in typical cases, without requiring any modification of the Hadoop programs. When a system is short of memory, Mammoth can improve performance by up to 5.19 times, as observed for I/O-intensive applications such as PageRank. Given the growing importance of supporting large-scale data processing and analysis, and the proven success of the MapReduce platform, the Mammoth system has promising potential and impact.
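To illustrate the flavor of rule-based memory prioritization among execution units described in the abstract, the following is a minimal Java sketch. It is not Mammoth's actual algorithm: the class and method names (`MemoryScheduler`, `allocate`, `heldBy`), the specific priority ordering (shuffler over mapper over reducer), and the revoke-from-lowest-priority rule are all illustrative assumptions.

```java
import java.util.EnumMap;
import java.util.Map;

// Illustrative sketch only; Mammoth's real heuristic is more elaborate.
// The priority ordering below (declaration order) is an assumption.
public class MemoryScheduler {
    enum ExecUnit { SHUFFLER, MAPPER, REDUCER } // highest priority first

    private long freeBytes; // global per-node memory pool
    private final Map<ExecUnit, Long> held = new EnumMap<>(ExecUnit.class);

    MemoryScheduler(long poolBytes) { freeBytes = poolBytes; }

    // Grant a request from the global pool. If the pool is short, revoke
    // memory from strictly lower-priority units until the request fits.
    boolean allocate(ExecUnit unit, long bytes) {
        ExecUnit[] order = ExecUnit.values();
        for (int i = order.length - 1; i >= 0 && freeBytes < bytes; i--) {
            ExecUnit victim = order[i];
            if (victim.ordinal() <= unit.ordinal()) break; // never revoke equal/higher priority
            long reclaimed = held.getOrDefault(victim, 0L);
            if (reclaimed > 0) { held.put(victim, 0L); freeBytes += reclaimed; }
        }
        if (freeBytes < bytes) return false; // revocation could not free enough
        freeBytes -= bytes;
        held.merge(unit, bytes, Long::sum);
        return true;
    }

    long heldBy(ExecUnit unit) { return held.getOrDefault(unit, 0L); }

    public static void main(String[] args) {
        MemoryScheduler s = new MemoryScheduler(100);
        System.out.println(s.allocate(ExecUnit.REDUCER, 80));  // granted from free pool
        System.out.println(s.allocate(ExecUnit.SHUFFLER, 50)); // granted after revoking the reducer
        System.out.println(s.heldBy(ExecUnit.REDUCER));        // reducer's memory was revoked
    }
}
```

The key design point this sketch mirrors is that allocation decisions are made against a single global per-node pool rather than per-task slots, so a high-priority phase can reclaim memory that a lower-priority phase is holding.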
Similar IEEE Project Titles
- Towards a cost-efficient MapReduce: Mitigating power peaks for Hadoop clusters
- Vessel route anomaly detection with Hadoop MapReduce
- Optimizing Power and Performance Trade-offs of MapReduce Job Processing with Heterogeneous Multi-core Processors
- DynamicMR: A Dynamic Slot Allocation Optimization Framework for MapReduce Clusters
- Evaluating MapReduce frameworks for iterative Scientific Computing applications