Performance Optimization for Short Job in Hadoop
Parmeshwari Sabnis¹, Prof. Chaitali Laulkar¹
Citation: Parmeshwari Sabnis, Prof. Chaitali Laulkar, "Performance Optimization for Short Job in Hadoop", International Journal of Research Studies in Computer Science and Engineering, 2015, 2(2): 13-17.
Hadoop MapReduce, a parallel computing framework, is widely used to solve large data-intensive problems. To process large-scale datasets, Hadoop prioritizes high data throughput over job execution latency. As a result, using Hadoop MapReduce to execute short jobs that require quick responses leads to a performance limitation, since short MapReduce jobs are expected to have short execution and quick response times. We optimized standard Hadoop for better performance on such jobs. By optimizing the job initialization and termination stages, changing task assignment from a heartbeat-based pull model to a push model, and providing an instant-message communication mechanism in place of heartbeats, an increase in execution speed is achieved.
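The task-assignment change described above can be pictured with a small, self-contained sketch. The code below is not Hadoop's implementation; it is a hypothetical Java illustration (the Task, Worker, and PushScheduler names are assumptions introduced here) of the difference between waiting for a periodic heartbeat to pull work and having a scheduler push tasks to per-worker queues the moment they exist, which is what removes heartbeat-interval latency for short jobs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Minimal sketch of push-model task assignment.
 * All classes here are hypothetical illustrations, not Hadoop APIs.
 */
public class PushSchedulingSketch {

    /** Trivial stand-in for a map or reduce task. */
    static class Task {
        final int id;
        Task(int id) { this.id = id; }
    }

    /** Worker blocks on its own queue; a pushed task wakes it immediately,
     *  with no heartbeat-interval polling delay. */
    static class Worker implements Runnable {
        final BlockingQueue<Task> inbox = new LinkedBlockingQueue<>();
        final String name;
        Worker(String name) { this.name = name; }

        @Override
        public void run() {
            try {
                while (true) {
                    Task t = inbox.take();   // wakes as soon as a task is pushed
                    if (t.id < 0) break;     // poison pill terminates the worker
                    System.out.println(name + " running task " + t.id);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    /** Push model: the scheduler assigns tasks as soon as they are created,
     *  instead of waiting for the next worker heartbeat to pull them. */
    static class PushScheduler {
        private final List<Worker> workers = new ArrayList<>();
        private int next = 0;

        void register(Worker w) { workers.add(w); }

        void push(Task t) {
            Worker w = workers.get(next);            // simple round-robin choice
            next = (next + 1) % workers.size();
            w.inbox.offer(t);                        // push: no polling delay
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PushScheduler scheduler = new PushScheduler();
        List<Worker> workers = new ArrayList<>();
        List<Thread> threads = new ArrayList<>();

        for (int i = 0; i < 2; i++) {
            Worker w = new Worker("worker-" + i);
            workers.add(w);
            scheduler.register(w);
            Thread th = new Thread(w);
            threads.add(th);
            th.start();
        }

        // Tasks are pushed the moment the (hypothetical) job is initialized.
        for (int id = 0; id < 6; id++) {
            scheduler.push(new Task(id));
        }

        // Shut the workers down with poison pills and wait for them to finish.
        for (Worker w : workers) {
            w.inbox.offer(new Task(-1));
        }
        for (Thread th : threads) {
            th.join();
        }
    }
}
```

Under this reading, the latency saving comes from replacing a periodic pull (a worker asks for work every heartbeat interval) with an event-driven push, so a short job is never idle waiting for the next heartbeat before its tasks start.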