An experimental approach towards big data for analyzing memory utilization on a Hadoop cluster using HDFS and MapReduce
When the volume of data grows so large that it cannot be handled by a conventional database management system, it is called big data. Big data creates new challenges for data analysts. Data can take three forms: structured, unstructured, and semi-structured. Most big data is unstructured, and unstructured data is difficult to handle.
The Apache Hadoop project provides tools and techniques to handle this huge amount of data: the Hadoop Distributed File System (HDFS) can be used for storage and the MapReduce model for processing. In this paper, we present our experimental work on big data using HDFS and MapReduce. We analyze variables such as the time spent by the map and reduce tasks and the memory used by the mappers and reducers, for both storage and processing of data on a Hadoop cluster.
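To illustrate the processing model being measured, the following is a minimal single-machine sketch of the MapReduce flow (map, shuffle, reduce), using word count as the example job. Timing each phase mirrors the "time spent by the maps and the reduces" variable from the experiments; the function names and the toy dataset are illustrative assumptions, not the paper's actual Hadoop job.

```python
import time
from collections import defaultdict

def map_phase(records):
    """Mapper: emit a (word, 1) pair for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    records = ["big data on hadoop", "hadoop stores big data in hdfs"]

    start = time.perf_counter()
    mapped = list(map_phase(records))      # map phase
    map_time = time.perf_counter() - start

    start = time.perf_counter()
    counts = reduce_phase(shuffle(mapped)) # shuffle + reduce phase
    reduce_time = time.perf_counter() - start

    print(counts)
    print(f"map: {map_time:.6f}s, reduce: {reduce_time:.6f}s")
```

On a real cluster, the map and reduce phases run as distributed tasks over HDFS blocks, and their durations and memory footprints are reported by the framework's job counters rather than measured in user code as above.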
Similar IEEE Project Titles
- Hadoop: Addressing challenges of Big Data
- DataMPI: Extending MPI to Hadoop-Like Big Data Computing
- Research on big data information retrieval based on hadoop architecture
- Effectiveness Assessment of Solid-State Drive Used in Big Data Services
- Parallel Processing of Big Data Using Power Iteration Clustering over MapReduce