Job scheduling in Hadoop with Shared Input Policy and RAMDISK.
The Hadoop framework is a popular choice in industry and academia for handling Big Data applications. Large input data sets are split into smaller chunks, distributed among the cluster nodes, and processed on the same nodes where they are stored. However, some data-intensive Hadoop applications generate a very large volume of intermediate data in the local file system of each node. The large amount of data spilled to disk, combined with concurrent accesses from different tasks executing on the same node, overloads the input/output system.
We propose to extend Shared Input Policy, a Hadoop job-scheduling policy developed by our research group, by adding a RAMDISK for temporary storage of intermediate data. Shared Input Policy schedules batches of data-intensive jobs that share the same input data set. We add a RAMDISK to improve the performance of Shared Input Policy: its high throughput and low latency allow quick access to intermediate data, relieving the hard disk. Experimental results show that our approach outperforms the Hadoop default policy by 40% to 60% for data-intensive applications.
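As a rough illustration of the idea, a node-local RAMDISK can be provided by a tmpfs mount, with Hadoop's intermediate-data directory pointed at it via the `mapreduce.cluster.local.dir` property (the standard Hadoop 2.x spill directory setting). The mount point, size, and paths below are illustrative assumptions, not values from the paper:

```shell
# Assumed setup (requires root, shown as a comment only):
#   sudo mkdir -p /mnt/ramdisk
#   sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk
#
# Redirect Hadoop's intermediate (spill) data to the RAMDISK by adding
# this property to mapred-site.xml. Property name is the standard
# Hadoop 2.x setting; the path is an illustrative assumption.
cat > mapred-site-snippet.xml <<'EOF'
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/mnt/ramdisk/mapred/local</value>
</property>
EOF

# Confirm the snippet was written as expected.
grep -q '/mnt/ramdisk/mapred/local' mapred-site-snippet.xml && echo "snippet written"
```

Because tmpfs is backed by RAM (and swap under memory pressure), spills to this directory avoid hard-disk contention, which is the effect the proposed extension exploits.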
Similar IEEE Project Titles
- Investigating the inclinations of research and practices in Hadoop: A systematic review
- Performance Evaluation of Read and Write Operations in Hadoop Distributed File System
- MIMP: Deadline and Interference Aware Scheduling of Hadoop Virtual Machines
- WOHA: Deadline-Aware Map-Reduce Workflow Scheduling Framework over Hadoop Clusters
- Improving performance of small-file accessing in Hadoop