Leveraging hadoop framework to develop duplication detector and analysis using Mapreduce, Hive and Pig
The volume of data generated in this age of the Internet of Things continues to grow exponentially. As these digital datasets outgrow datacenter capacity, the focus needs to shift to stored-data reduction methods, particularly for NoSQL databases, since traditional structured storage systems increasingly struggle to provide the storage, throughput and computational power required to capture, store, manage and analyze this deluge of data.
Deduplication systems retain a single copy of redundant data on disk to save disk space, but some copies may need to be kept intentionally, which calls for selective rather than blanket elimination. This paper leverages the Hadoop framework to design and develop a duplication detection system that detects multiple copies of the same data at the file level, before transmission. The resulting datasets are then tuned for better performance and analysed using MapReduce, Hive and Pig.
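The core idea of file-level duplicate detection can be sketched as a map and reduce pair: the map stage emits a content hash for each file, and the reduce stage groups file names by hash so that any group larger than one is a set of duplicates. The following is a minimal sketch of that logic simulated in plain Python, not the paper's actual Hadoop implementation; the file names and contents are illustrative assumptions.

```python
import hashlib

def map_phase(files):
    """Map stage: emit (content_hash, file_name) for each file.
    `files` is a dict of name -> bytes, standing in for HDFS input."""
    for name, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        yield digest, name

def reduce_phase(mapped):
    """Reduce stage: group file names by hash; groups of size > 1
    are duplicate sets that a deduplicator could act on selectively."""
    groups = {}
    for digest, name in mapped:
        groups.setdefault(digest, []).append(name)
    return {h: names for h, names in groups.items() if len(names) > 1}

# Hypothetical example input (not from the paper):
files = {
    "a.txt": b"hello world",
    "b.txt": b"hello world",   # duplicate of a.txt
    "c.txt": b"unique data",
}
duplicates = reduce_phase(map_phase(files))
print(duplicates)  # one group containing a.txt and b.txt
```

In a real Hadoop job the map and reduce functions would run as Mapper and Reducer classes over HDFS blocks, but the grouping-by-hash logic is the same; detecting duplicates before transmission, as the paper proposes, amounts to running the map stage client-side and comparing hashes against those already stored.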
Similar IEEE Project Titles
- Automatic Detection and Rectification of DNS Reflection Amplification Attacks with Hadoop MapReduce and Chukwa
- Mammoth: Gearing Hadoop Towards Memory-Intensive MapReduce Applications
- Towards a cost-efficient MapReduce: Mitigating power peaks for Hadoop clusters
- Vessel route anomaly detection with Hadoop MapReduce
- Optimizing Power and Performance Trade-offs of MapReduce Job Processing with Heterogeneous Multi-core Processors