Big Data Hadoop & Spark

Hive Parallel Jobs

Hive can sometimes take a long time to complete a job.

A job may consist of many different stages, and by default Hive executes these stages one at a time.
These stages may include a map stage, a reduce stage, a sampling stage, a merge stage, a limit stage, or other tasks Hive needs to perform.

A particular job may consist of stages that are not dependent on each other and could be executed in parallel, possibly allowing the overall job to complete more quickly.
Hive converts a query into one or more stages and, to save time, can execute independent stages in parallel.
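To see the stage breakdown Hive produces for a query, you can run it through EXPLAIN. This is a sketch; table1 and table2 below are placeholder table names, not tables from this exercise:

```sql
-- Show the stage plan Hive generates for a simple join.
-- table1/table2 are hypothetical tables used only for illustration.
EXPLAIN
SELECT t1.a
FROM table1 t1
JOIN table2 t2 ON (t1.a = t2.a);
```

The output lists each stage (map-reduce jobs, fetch tasks, and so on) along with its dependencies, which shows which stages could run in parallel.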
For the basics of Hive and running multiple instances in Hive, follow the blogs linked below.
Beginners guide to HIVE
Metastore integration in HIVE
NOTE: We performed this exercise on a single-node cluster, where every query shares a single machine's resources (so multiple jobs take longer to complete). A fully distributed Hadoop cluster is the best platform to see the actual time difference in job execution.
Below is the result for a sample query fired inside the Hive shell when parallel processing is turned off.
Note that only one job at a time is assigned to the mappers.
The query fired is:
table1 JOIN table2 ON (table1.a = table2.a)
JOIN table3 ON (table3.a = table1.a)
JOIN table4 ON (table4.b = table3.b);

If the query is optimized so that more stages run simultaneously, the job may complete much faster.
However, a job running more stages in parallel also increases its cluster utilization.
NOTE: The developer must take care that a single job does not occupy the cluster's entire capacity.
A configuration file named hive-site.xml should be created in the hive/conf/ directory, where we can override the properties that control parallel execution.
Refer to the screenshot for the default property.

We can enable parallel execution of job stages by setting hive.exec.parallel to true.
<description>Whether to execute jobs in parallel</description>
The number of jobs that may run in parallel can also be controlled with the following property.
<description>How many jobs at most can be executed in parallel</description>
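Putting the two properties together, the hive-site.xml override might look like the following sketch (the value 8 for hive.exec.parallel.thread.number is Hive's default; adjust it to your cluster's capacity):

```xml
<!-- hive-site.xml: enable parallel execution of independent job stages -->
<property>
  <name>hive.exec.parallel</name>
  <value>true</value>
  <description>Whether to execute jobs in parallel</description>
</property>
<property>
  <name>hive.exec.parallel.thread.number</name>
  <value>8</value>
  <description>How many jobs at most can be executed in parallel</description>
</property>
```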

Once the property is set, save the file and restart the Hive shell in a new terminal.
Fire the optimized query inside the shell to see multiple jobs launching.
Refer to the result below (highlighted in red):
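Alternatively, instead of editing hive-site.xml, the same properties can be overridden for just the current session from inside the Hive shell:

```sql
-- Session-level override; applies only to the current Hive shell session.
SET hive.exec.parallel=true;
SET hive.exec.parallel.thread.number=8;
```

This is convenient for experimenting, since it avoids a restart and does not affect other users of the cluster.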
(SELECT table1.a FROM table1 JOIN table2 ON table1.a = table2.a) r1
(SELECT table3.a FROM table3 JOIN table4 ON table3.b = table4.b) r2
ON (r1.a = r2.a);

You can run this on sample data in your own Hadoop cluster and see for yourself the difference between serialized and parallel execution of jobs in Hive.
For more technical blogs, keep visiting.



An alumnus of the NIE-Institute Of Technology, Mysore, Prateek is an ardent Data Science enthusiast. He has been working at Acadgild as a Data Engineer for the past 3 years. He is a subject-matter expert in the field of Big Data, the Hadoop ecosystem, and Spark.
