Map/Reduce Design and Implementation of Apriori Algorithm for Handling Voluminous Data-Sets

Keywords: Frequent Itemset, Distributed Computing, Hadoop, Apriori, Distributed Data Mining

Abstract: Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a crucial step in analysing structured data and in finding association relationships between items. It serves as an elementary foundation for supervised learning, which encompasses classifier and feature-extraction methods. Applying this algorithm is crucial to understanding the behaviour of structured data. Most structured data in scientific domains is voluminous, and processing it requires state-of-the-art computing machines; setting up such an infrastructure is expensive. Hence a distributed environment, such as a clustered setup, is employed to tackle such scenarios. The Apache Hadoop distribution is one such cluster framework for distributed environments, and it helps by distributing voluminous data across a number of nodes in the cluster. This paper focuses on the map/reduce design and implementation of the Apriori algorithm for structured data analysis.
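To make the map/reduce formulation of Apriori concrete, the following is a minimal, single-machine sketch of one counting pass: a mapper emits (candidate itemset, 1) pairs per transaction, and a reducer sums the counts and filters by minimum support. The function names, the toy transactions, and the support threshold are illustrative assumptions, not taken from the paper; a real Hadoop job would express the same two phases as Mapper and Reducer classes.

```python
from collections import Counter
from itertools import combinations

def mapper(transaction, candidates):
    """Emit (itemset, 1) for every candidate itemset contained in the transaction."""
    items = set(transaction)
    for c in candidates:
        if c <= items:  # candidate is a subset of the transaction
            yield (c, 1)

def reducer(pairs, min_support):
    """Sum counts per itemset and keep those meeting the minimum support."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return {s: c for s, c in counts.items() if c >= min_support}

def apriori_pass(transactions, candidates, min_support):
    """One map/reduce-style pass: map over transactions, then reduce."""
    pairs = [p for t in transactions for p in mapper(t, candidates)]
    return reducer(pairs, min_support)

# Toy data (illustrative only)
transactions = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter", "bread"}]

# Pass 1: frequent 1-itemsets
c1 = [frozenset([i]) for i in {"bread", "milk", "butter"}]
freq1 = apriori_pass(transactions, c1, min_support=2)

# Pass 2: candidate 2-itemsets are joins of frequent 1-itemsets (Apriori property)
c2 = [a | b for a, b in combinations(freq1, 2)]
freq2 = apriori_pass(transactions, c2, min_support=2)
```

Each pass is independently parallelisable: transactions can be split across nodes for the map phase, and the shuffle groups counts by itemset before reduction, which is the structure Hadoop provides.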