
Mapreduce Projects


MapReduce is a simple model for processing enormous amounts of data. The map function categorizes the input data, and the reduce function produces a summary of the categorized data; both functions originate in functional programming. Typically, the map function splits the original input into multiple intermediate key/value pairs, which are then sorted by key. The shuffle phase redistributes this intermediate data so it can be stored efficiently in a distributed file system, where the reduce function aggregates it. MapReduce libraries have been written in many programming languages, and the framework performs distributed, parallel computation: because no map task depends on any other map task, they can all run concurrently. In a MapReduce cluster there is one master and many slaves; the master runs a job tracker and each slave runs a task tracker. The master is responsible for scheduling jobs on the slaves and re-executing failed tasks, while the slaves execute the tasks the master assigns.
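As a minimal illustration of the flow just described, the following plain-Python sketch simulates the map, shuffle/sort, and reduce phases for a word count (it mimics the paradigm only; it is not the Hadoop API):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # Map: split the input into intermediate (key, value) pairs.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle_sort(pairs):
    # Shuffle/sort: order intermediate pairs and group them by key.
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reduce_phase(grouped):
    # Reduce: aggregate the values collected for each key.
    for word, group in grouped:
        yield (word, sum(value for _, value in group))

docs = ["map reduce projects", "map function splits data"]
pairs = [p for d in docs for p in map_phase(d)]
counts = dict(reduce_phase(shuffle_sort(pairs)))
print(counts)
```

In real Hadoop the map tasks run in parallel on different nodes and the framework performs the shuffle across the network; here all three phases run in one process for clarity.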



Hadoop Solutions offers MapReduce projects, MapReduce theses, big data projects, and Hadoop projects for students, including Hadoop project ideas, sample Hadoop projects, Hadoop MapReduce projects, Hadoop research projects, and other Hadoop-based and Hadoop-related project topics. For the latest IEEE MapReduce projects, enquire with us for more details.

  • GPU-in-Hadoop: Enabling MapReduce across distributed heterogeneous platforms

    As high-performance applications grow in size, four major challenges have arisen in the underlying distributed systems: heterogeneity, programmability, failure resilience, and energy efficiency. To tackle all of them without sacrificing performance, traditional approaches to resource utilization, task scheduling, and programming paradigms must be reconsidered. Just as Hadoop has handled data-intensive applications well in clouds, GPUs have demonstrated their effectiveness in accelerating computation-intensive ones. This paper integrates Hadoop with CUDA to exploit both CPU and GPU resources: Hadoop schedules MapReduce's Map and Reduce functions across multiple nodes, while CUDA code accelerates them further on local GPUs, so all available heterogeneous computational power is utilized. MapReduce in Hadoop eases the programming task by hiding communication details; the Hadoop Distributed File System provides data-level fault resilience; and the GPU's energy-efficiency characteristics help reduce the power consumption of the whole system. Four integration approaches, including JCuda, JNI, Hadoop Streaming, and Hadoop Pipes, have been implemented, and experimental results demonstrate their effectiveness.
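Of the integration routes the abstract names, Hadoop Streaming is the easiest to sketch: Hadoop pipes each input record to an external process over stdin/stdout as tab-separated key/value lines, so the mapper can hand each value off to accelerated code. The sketch below simulates that line contract in Python; the `accelerate` callable is a hypothetical stand-in for a CUDA-backed kernel, not code from the paper:

```python
def stream_map(lines, accelerate=lambda x: x * x):
    # Hadoop Streaming contract: one "key<TAB>value" record per line on
    # stdin/stdout. `accelerate` is a placeholder for a GPU-offloaded
    # function; here it is plain squaring on the CPU.
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        yield f"{key}\t{accelerate(int(value))}"

# In a real job Hadoop feeds records on stdin; here we simulate a few.
outputs = list(stream_map(["a\t2", "b\t3"]))
print(outputs)
```

Because the external process only sees text records, the same mapper binary can be swapped between a CPU and a GPU implementation without changing the Hadoop job.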


  • Introducing SSDs to the Hadoop MapReduce Framework

    Solid State Drive (SSD) cost-per-bit continues to decrease. Consequently, system architects increasingly consider replacing Hard Disk Drives (HDDs) with SSDs to accelerate Hadoop MapReduce processing. When attempting this, system architects usually realize that SSD characteristics and today’s Hadoop framework exhibit mismatches that impede indiscriminate SSD integration. Hence, cost-effective SSD utilization has proved challenging within many Hadoop environments. This paper compares SSD performance to HDD performance within a Hadoop MapReduce framework. It identifies extensible best practices that can exploit SSD benefits within Hadoop frameworks when combined with high network bandwidth and increased parallel storage access. Terasort benchmark results demonstrate that SSDs presently deliver significant cost-effectiveness when they store intermediate Hadoop data, leaving HDDs to store Hadoop Distributed File System (HDFS) source data.
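The best practice the benchmark supports can be expressed directly in Hadoop configuration: point the intermediate (shuffle) directories at SSD mount points while leaving HDFS data directories on HDDs. This is a hedged sketch; the mount paths below are illustrative assumptions, and the property names are the standard Hadoop 2.x ones:

```xml
<!-- mapred-site.xml: intermediate map output on SSDs (paths are examples) -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/mnt/ssd0/mapred/local,/mnt/ssd1/mapred/local</value>
</property>

<!-- hdfs-site.xml: HDFS source data stays on HDDs (paths are examples) -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/hdd0/dfs/data,/mnt/hdd1/dfs/data</value>
</property>
```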


  • TS-Hadoop: Handling access skew in MapReduce by using tiered storage infrastructure

    Over the last few years, MapReduce systems have become popular for processing large-scale data sets and are increasingly used in web indexing, data mining, and machine learning. Unlike simple application scenarios such as word count, many MapReduce applications exhibit strongly skewed access patterns in real production environments: data access is non-uniform, and often a small portion of the data is accessed far more frequently than the rest. Handling these hot data efficiently is therefore critical to the overall performance of the MapReduce computation. In this paper, we present TS-Hadoop, a MapReduce system based on Apache Hadoop. The most significant feature of TS-Hadoop is its tiered storage infrastructure: besides HDFS, TS-Hadoop has a shared-disk cluster called HCache, which guarantees that the data in HCache can be processed in a highly parallel way. TS-Hadoop automatically distinguishes hot and cold data based on the current workload and moves them into HCache and HDFS respectively, so the hot data in HCache is processed efficiently. Experiments show that the average execution time of MapReduce jobs in TS-Hadoop is much lower than on a traditional Hadoop platform under access-skew workloads.
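The abstract does not spell out TS-Hadoop's classification policy, but a frequency-based hot/cold split can be sketched as follows; the 10% hot fraction and the function name `tier_blocks` are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import Counter

def tier_blocks(access_log, hot_fraction=0.1):
    # Count accesses per block and mark the most frequently accessed
    # fraction as "hot" (HCache candidates); the rest stay "cold" (HDFS).
    counts = Counter(access_log)
    ranked = [block for block, _ in counts.most_common()]
    cutoff = max(1, int(len(ranked) * hot_fraction))
    return set(ranked[:cutoff]), set(ranked[cutoff:])

# Simulated skewed workload: one block receives most of the accesses.
log = ["b1"] * 50 + ["b2"] * 3 + [f"b{i}" for i in range(3, 11)]
hot, cold = tier_blocks(log)
print(hot, cold)
```

A real system would recompute the split continuously as the workload shifts and migrate blocks between tiers in the background.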


  • Hadoop MapReduce for Tactical Clouds

    We envision a future where real-time computation on the battlefield provides the tactical advantage to an Army over its adversary. The ability to collect and process large amounts of data to provide actionable information to soldiers will greatly enhance their situational awareness. Our vision is based on the observation that the U.S. Military is attempting to equip soldiers with smartphones. While individual phones may not be sufficiently powerful for processing large amounts of data, using the mobile devices carried by a squad or platoon of soldiers as a single distributed computing platform, a Tactical Cloud, would enable large-scale data processing to be conducted on battlefields. In order for this vision to be realized, two issues have to be addressed. The first is the complexity of writing applications for distributed computing environments, and the second is the vulnerability of data on mobile devices. In this paper, we propose combining two existing technologies to address these issues. The first is Hadoop MapReduce, a scalable platform that provides distributed storage and computational capabilities on clusters of commodity hardware, and the second is the Mobile Distributed File System (MDFS), which allows distributed data storage with built-in reliability and security. By making the MDFS file system work with Hadoop on mobile devices, we hope to enable big data applications on tactical clouds.

  • Hadoop MapReduce implementation of a novel scheme for term weighting in text categorization

    Text categorization is the problem of assigning text documents to a fixed number of pre-defined categories. Feature selection and term weighting are two important steps that decide the result of any text categorization problem. In this paper we focus on two things: first, developing effective term weighting by proposing a new term weighting scheme, and second, utilizing the parallel and distributed processing capability of Hadoop MapReduce for training and testing on the dataset. Together these lead to a great performance improvement in text categorization, with a remarkable improvement in accuracy and a significant reduction in computational cost. The use of Hadoop MapReduce also reduces training and testing time significantly.
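The paper's specific weighting scheme is not reproduced here; such schemes typically extend classic TF-IDF, which the sketch below computes, and in a Hadoop job the per-document loop is what the map tasks would parallelize. A minimal illustration under those assumptions:

```python
import math
from collections import Counter

def tf_idf(docs):
    # Classic TF-IDF: term frequency in a document, scaled by the
    # inverse document frequency of the term across the collection.
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:  # in Hadoop, each document would be one map task
        tf = Counter(doc)
        weights.append({term: (count / len(doc)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return weights

docs = [["hadoop", "mapreduce", "hadoop"],
        ["text", "mapreduce"],
        ["text", "weighting"]]
w = tf_idf(docs)
print(w)
```

Terms that occur often in one document but rarely elsewhere (like "hadoop" above) receive the highest weights, which is the property any term weighting scheme for categorization aims to sharpen.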


Mapreduce Projects

  1. In unity there is strength: Showcasing a unified big data platform with MapReduce Over both object and file storage
  2. Energy-aware Scheduling of MapReduce Jobs for Big Data Applications
  3. End-to-end Optimization for Geo-Distributed MapReduce 
  4. Hashdoop: A MapReduce framework for network anomaly detection
  5. FedLoop: Looping on Federated MapReduce
  6. Impact of MapReduce Task Re-execution Policy on Job Completion Reliability and Job Completion Time
  7. MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and its application to SNOMED CT
  8. LIBRA: Lightweight Data Skew Mitigation in MapReduce
  9. A Platform to Deploy Customized Scientific Virtual Infrastructures on the Cloud
  10. Scaling MapReduce Vertically and Horizontally
  11. MRSMRS: Mining repetitive sequences in a MapReduce setting
  12. Scheduling MapReduce tasks on virtual MapReduce clusters from a tenant’s perspective
  13. Incremental FP-Growth mining strategy for dynamic threshold value and database based on MapReduce 
  14. MapReduce-based warehouse systems: A survey
  15. Improving MapReduce Performance Using Smart Speculative Execution Strategy

Work Progress

PHD - 24

M.TECH - 125

B.TECH - 95

BIG DATA - 110


ON-GOING Hadoop Projects





Achievements – Hadoop Solutions



Customer Review

Hadoop Solutions 5-Star Rating: rated 4.9 out of 5, based on 1000+ ratings and 1000+ user reviews.