Hadoop Job Scheduling Using Improvised Ant Colony Optimization

G. Joel Sunny Deol et al.

Abstract

Hadoop couples the Hadoop Distributed File System (HDFS) for storage with the MapReduce programming framework for parallel processing of large datasets. Handling such vast and complex data while keeping performance parameters at an acceptable level is a difficult task. Hence, an improvised mechanism is proposed here that enhances Hadoop's job scheduling capabilities and optimizes the allocation and utilization of resources. Significantly, an aggregator node is added to the default HDFS architecture to improve the performance of the Hadoop name node. The modified architecture thus comprises four entities: the name node, the secondary name node, aggregator nodes, and data nodes. Here, the aggregator nodes assign jobs to data nodes, while the name node tracks only the aggregator nodes. Also, an improvised ant colony optimization method, driven by job size and expected execution time, is developed for scheduling jobs. In the end, the results demonstrate notable improvement over native Hadoop and other approaches.
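
To make the scheduling idea concrete, below is a minimal Python sketch of an ant colony optimization scheduler that, as in the abstract, assigns jobs to data nodes using job size and expected execution time. The abstract does not specify the paper's actual improvised ACO variant, so everything here (the aco_schedule function, the parameter values, the makespan objective, and the big-jobs-first dispatching rule) is an illustrative assumption, not the authors' implementation.

    import random

    # Minimal ACO job-scheduling sketch. All names and parameter values
    # are illustrative assumptions; the paper's improvised ACO variant
    # is not specified in the abstract.

    ALPHA, BETA = 1.0, 2.0   # influence of pheromone vs. heuristic
    RHO = 0.1                # pheromone evaporation rate
    N_ANTS, N_ITERS = 20, 100

    def expected_time(job_size, node_speed):
        # Heuristic cost: a job's expected execution time on a node.
        return job_size / node_speed

    def aco_schedule(job_sizes, node_speeds):
        n_jobs, n_nodes = len(job_sizes), len(node_speeds)
        tau = [[1.0] * n_nodes for _ in range(n_jobs)]  # pheromone trails
        best_assign, best_makespan = None, float("inf")
        for _ in range(N_ITERS):
            for _ in range(N_ANTS):
                load = [0.0] * n_nodes
                assign = []
                # Dispatch larger jobs first (an assumed use of "job size").
                for j in sorted(range(n_jobs), key=lambda j: -job_sizes[j]):
                    # Favor nodes where the job would finish soonest,
                    # weighted by the learned pheromone trail.
                    weights = [
                        (tau[j][m] ** ALPHA)
                        * (1.0 / (load[m]
                                  + expected_time(job_sizes[j],
                                                  node_speeds[m]))) ** BETA
                        for m in range(n_nodes)
                    ]
                    m = random.choices(range(n_nodes), weights=weights)[0]
                    load[m] += expected_time(job_sizes[j], node_speeds[m])
                    assign.append((j, m))
                makespan = max(load)
                if makespan < best_makespan:
                    best_assign, best_makespan = assign, makespan
            # Evaporate all trails, then reinforce the best schedule found.
            for j in range(n_jobs):
                for m in range(n_nodes):
                    tau[j][m] *= 1.0 - RHO
            for j, m in best_assign:
                tau[j][m] += 1.0 / best_makespan
        return best_assign, best_makespan

    if __name__ == "__main__":
        jobs = [40, 25, 60, 10, 35]   # job sizes, e.g. MB of input
        nodes = [1.0, 1.5, 0.8]       # relative data-node speeds
        assignment, makespan = aco_schedule(jobs, nodes)
        print("assignment:", assignment, "makespan:", round(makespan, 2))

In the architecture the abstract describes, logic of this kind would presumably run on the aggregator nodes, which assign jobs to data nodes while the name node tracks only the aggregators; reinforcing only the best schedule per iteration is one common ACO design choice among several.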

Article Details

How to Cite
Deol, G. J. S., et al. (2021). Hadoop Job Scheduling Using Improvised Ant Colony Optimization. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(2), 3417–3424. Retrieved from https://turcomat.org/index.php/turkbilmat/article/view/2403