
Hadoop Project Topics

Hadoop project topics focus on processing large amounts of data using clusters of commodity hardware.

  • Parallel Detrended Fluctuation Analysis for Fast Event Detection on Massive PMU Data
    Phasor measurement units (PMUs) are being rapidly deployed in power grids due to their high sampling rates and synchronized measurements. The devices' high data-reporting rates present major computational challenges: potentially massive volumes of data must be processed, and new issues arise around data storage. Fast algorithms capable of processing massive volumes of data are now required in the field of power systems. This paper presents a novel parallel detrended fluctuation analysis (PDFA) approach for fast event detection on massive volumes of PMU data, taking advantage of a cluster computing platform. The PDFA algorithm is evaluated using data from PMUs installed on the transmission system of Great Britain in terms of speedup, scalability, and accuracy. The speedup of the PDFA in computation is first analyzed through Amdahl's Law; a revision to the law is then proposed, enhancing its ability to analyze the performance gain when parallelizing data-intensive applications in a cluster computing environment. (An illustrative sketch of window-parallel fluctuation computation and the Amdahl's Law estimate follows this item.)

 
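The sketch below only illustrates the two ideas named in the PDFA abstract above: computing a detrended-fluctuation measure over windows in parallel, and estimating speedup with the classic Amdahl's Law. The window count, worker count, and use of Python's multiprocessing are assumptions for illustration; this is not the paper's PDFA implementation or its revised law.

    # Illustrative sketch only (not the paper's PDFA implementation):
    # a window-parallel fluctuation measure plus an Amdahl's Law estimate.
    # Window count, worker count, and the multiprocessing pool are assumed.
    import numpy as np
    from multiprocessing import Pool

    def amdahl_speedup(parallel_fraction, n_nodes):
        # Classic Amdahl's Law: S = 1 / ((1 - p) + p / n)
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_nodes)

    def window_fluctuation(window):
        # Detrend one window with a linear fit and return the RMS residual,
        # the core per-window step of detrended fluctuation analysis.
        x = np.arange(len(window))
        trend = np.polyval(np.polyfit(x, window, 1), x)
        return float(np.sqrt(np.mean((window - trend) ** 2)))

    if __name__ == "__main__":
        signal = np.cumsum(np.random.randn(100_000))  # toy stand-in for a PMU series
        windows = np.array_split(signal, 1_000)       # assumed window count
        with Pool(4) as pool:                         # assumed 4 local workers
            fluctuations = pool.map(window_fluctuation, windows)
        print("mean fluctuation:", sum(fluctuations) / len(fluctuations))
        print("Amdahl speedup, p=0.95, 16 nodes:", amdahl_speedup(0.95, 16))

Because each window is detrended independently, the per-window work maps naturally onto a cluster; only the final aggregation is serial, which is what the Amdahl's Law estimate captures.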

  • An Initial Study of Predictive Machine Learning Analytics on Large Volumes of Historical Data for Power System Applications
    Nowadays, large volumes of industrial data are being actively generated and collected in various power system applications. Industrial analytics in the power system field requires more powerful and intelligent machine learning tools, strategies, and environments to properly analyze historical data and extract predictive knowledge. This paper discusses the situation and limitations of current approaches, analytic models, and tools used to conduct predictive machine learning analytics on very large volumes of data, where processing can exhaust available memory. Two industrial analytics cases in the power systems field are presented. Our results indicated the feasibility of forecasting substation fault events and power load using machine learning algorithms written in the MapReduce paradigm or machine learning tools designed for Big Data.
  • Managing Tiny Tasks for Data-Parallel, Subsampling Workloads
    Subsampling workloads compute statistics from a set of observed samples using a random subset of the sample data (i.e., a subsample). Data-parallel platforms group these samples into tasks; each task subsamples its data in parallel. In this paper, we study subsampling workloads that benefit from tiny tasks, i.e., tasks comprising few samples. Tiny tasks reduce processor cache misses caused by random subsampling, which speeds up per-task running time. However, they can also cause significant scheduling overheads that negate the time saved by reduced cache misses. For example, vanilla Hadoop takes longer to start tiny tasks than to run them. We compared the task scheduling overheads of vanilla Hadoop, lightweight Hadoop setups, and BashReduce. BashReduce, the best platform, outperformed the worst by 3.6X, but scheduling overhead was still 12% of a task's running time. We improved BashReduce's scheduler by allowing it to size tasks according to kneepoints on the miss-rate curve. We tested these changes on high-throughput genotype data and on data obtained from Netflix. Our improved BashReduce outperformed vanilla Hadoop by almost 3X and completed short, interactive jobs almost as efficiently as long jobs. These results held at scale and across diverse, heterogeneous hardware. (A toy sketch of kneepoint-based task sizing follows this item.)

 
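As a companion to the tiny-tasks abstract above, the toy sketch below picks a per-task subsample size just before the knee of a cache miss-rate curve, balancing cache behaviour against scheduling overhead. The candidate sizes, miss-rate values, and threshold heuristic are fabricated for illustration; they are not the paper's scheduler change.

    # Toy sketch only: choose a per-task subsample size from a kneepoint
    # on a cache miss-rate curve. Sizes, miss rates, and the threshold are
    # fabricated; a real scheduler would measure the curve on its hardware.
    def kneepoint(sizes, miss_rates, threshold=0.02):
        # Walk the curve from small to large tasks and return the largest
        # size reachable before the miss rate jumps by more than `threshold`;
        # bigger tasks amortize scheduling overhead, but past the knee the
        # subsample no longer fits in cache and misses climb sharply.
        for i in range(1, len(sizes)):
            if miss_rates[i] - miss_rates[i - 1] > threshold:
                return sizes[i - 1]
        return sizes[-1]

    samples_per_task = [1_000, 5_000, 10_000, 50_000, 100_000]   # assumed candidates
    miss_rate_curve  = [0.010, 0.012, 0.015, 0.180, 0.400]       # assumed measurements
    print("suggested samples per task:", kneepoint(samples_per_task, miss_rate_curve))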

  • GISQF: An Efficient Spatial Query Processing System
    Collecting observations from all international news coverage and using TABARI software to code events, the Global Database of Events, Language, and Tone (GDELT) is the only global, georeferenced political event dataset, with 250+ million observations covering all countries in the world from January 1, 1979 to the present, with daily updates. The purpose of this widely used dataset is to help understand and uncover spatial, temporal, and perceptual trends and behaviors of social and international systems. To query such big geospatial data, traditional RDBMSs can no longer be used, and parallel distributed solutions have become a necessity. The MapReduce paradigm has proved to be a scalable platform for processing and analyzing Big Data in the cloud, and Hadoop, an open-source implementation of MapReduce, has been widely used and accepted in academia and industry. However, Hadoop is not well equipped for spatial data and falls short in terms of running time. Spatial Hadoop is an extension of Hadoop with support for spatial data. In this paper, we present the Geographic Information System Querying Framework (GISQF) for processing massive spatial data. The framework is built on top of the open-source Spatial Hadoop system and exploits two-layer spatial indexing techniques to speed up query processing. We show that this solution outperforms Hadoop query processing by orders of magnitude when applying queries to a 60 GB GDELT dataset, and we show results for three types of queries: longitude-latitude point queries, circle-area queries, and aggregation queries. (A minimal sketch of a circle-area filter in Hadoop Streaming style follows this item.)

 
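To make the circle-area query concrete, the sketch below expresses it as a Hadoop Streaming style mapper over tab-separated GDELT-like rows. The latitude/longitude column positions, the query centre, and the radius are assumptions for illustration; GISQF's actual two-layer spatial index is not modelled here.

    # Illustrative sketch only: a circle-area query written as a Hadoop
    # Streaming style mapper over tab-separated GDELT-like rows. The
    # latitude/longitude column positions, query centre, and radius are
    # assumptions; GISQF's two-layer spatial index is not modelled here.
    import math
    import sys

    CENTRE_LAT, CENTRE_LON, RADIUS_KM = 51.5, -0.12, 100.0  # assumed query

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance in kilometres between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        try:
            lat, lon = float(fields[39]), float(fields[40])  # assumed columns
        except (IndexError, ValueError):
            continue  # skip rows without usable coordinates
        if haversine_km(lat, lon, CENTRE_LAT, CENTRE_LON) <= RADIUS_KM:
            sys.stdout.write(line)  # emit matching event record

A filter like this can run as the mapper of a Hadoop Streaming job (or be tested locally by piping a file through it); the reducer for a pure filter can simply pass records through.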

  • PAGE: A Framework for Easy PArallelization of GEnomic Applications
    With the availability of high-throughput, low-cost sequencing technologies, an increasing amount of genetic data is becoming available to researchers. There is clearly potential for significant new scientific and medical advances through analysis of such data; however, it is imperative to exploit parallelism and achieve effective utilization of computing resources in order to handle massive datasets. Thus, frameworks that help researchers develop parallel applications without dealing with the low-level details of parallel coding are very important for advances in genetic research. In this study, we develop a middleware, PAGE, which supports map-reduce-like processing but differs significantly from a system like Hadoop, so as to be useful and effective for parallelizing the analysis of genomic data. In particular, it can work with map functions written in any language, allowing existing serial tools (even those for which only an executable is available) to be used as map functions. It can therefore greatly simplify parallel application development in scenarios involving complex data formats and/or nuanced serial algorithms, as is often the case for genomic data. It allows parallelization by partitioning by locus or by chromosome, and provides different scheduling schemes and execution models to match the nature of algorithms common in genetic research. We evaluated the middleware using four popular genomic applications (VarScan, Unified Genotyper, Realigner Target Creator, and Indel Realigner) and compared the achieved performance against two popular frameworks, Hadoop and GATK. We show that our middleware outperforms GATK and Hadoop and achieves high parallel efficiency and scalability. (A hedged sketch of partition-by-chromosome dispatch of a serial tool follows this item.)

 
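The sketch below illustrates, under stated assumptions, the partition-by-chromosome idea from the PAGE abstract: each "map" task invokes an unmodified serial executable on one chromosome's slice of the data. The executable name, its arguments, the file layout, and the worker count are hypothetical placeholders, not PAGE's actual API or scheduler.

    # Illustrative sketch only: partition-by-chromosome dispatch of an
    # existing serial executable, in the spirit of a map-reduce-like
    # middleware. "./serial_variant_caller", its arguments, the file
    # layout, and the worker count are hypothetical placeholders.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    CHROMOSOMES = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]

    def run_map_task(chrom):
        # One "map" task: run the unmodified serial tool on a single
        # chromosome's slice of the input and write a per-partition output.
        cmd = ["./serial_variant_caller",            # hypothetical executable
               f"data/{chrom}.pileup", f"out/{chrom}.vcf"]
        return chrom, subprocess.run(cmd).returncode

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=8) as pool:  # assumed 8 workers
            for chrom, rc in pool.map(run_map_task, CHROMOSOMES):
                print(chrom, "exited with", rc)

Because the serial tool is treated as a black box, this style of parallelization works even when only an executable is available, which is the scenario the abstract highlights.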

Similar Latest IEEE Hadoop Project Topics 

Hadoop Solutions offers Hadoop project topics, theses, and full projects at an affordable price with guaranteed output. Enquire with us for more details.


Work Progress

PHD - 24
M.TECH - 125
B.TECH - 95
BIG DATA - 110
HADOOP - 90

ON-GOING Hadoop Projects

HADOOP MAP - 90
HADOOP YARN - 27
HADOOP HEBROS - 25
HADOOP ZOOKEEPER - 18

Achievements – Hadoop Solutions

Customer Review

Hadoop Solutions is rated 4.9 out of 5, based on 1000+ user reviews.