Big Data Hadoop & Spark

Dynamic Creation of Table in Hive

Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data, and makes querying and analyzing easy.
Before going deep into this topic, readers are recommended to go through these blogs:
Basics of Hive
Integrating Hive with MySQL
Getting ready
The prerequisites for following along with this blog are:
>>A Hadoop single-node cluster up and running
>>MySQL configured inside the Hadoop ecosystem
>>Hive 0.7.0 or higher installed and running
We will first create a table acad with the schema below, and then dynamically create a second table in which the column sessionid is replaced by a new column, weblength.
create table acad(webpage string, sessionid int, sessionin string, sessionout string) row format delimited fields terminated by ',' lines terminated by '\n';

Checking the sample test data, we see the following:

Load some sample data with the command syntax below; in this case we use the dataset shown in the screenshot above.
load data local inpath '<filename>' into table <tablename>;
load data local inpath '/home/hadoop/Desktop/test2' into table acad;
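For readers who cannot see the screenshot, a test2 file matching the acad schema (webpage, sessionid, sessionin, sessionout, comma-separated) might look like the sketch below. The rows are purely illustrative, not the actual dataset:

```shell
# Illustrative rows only -- the real dataset is the one in the screenshot above.
# Each line matches the acad schema: webpage,sessionid,sessionin,sessionout
cat > /tmp/test2 <<'EOF'
index.html,101,10:10:01,10:14:35
products.html,101,10:14:36,10:16:10
about.html,102,10:20:00,10:21:45
EOF
wc -l < /tmp/test2   # 3 rows ready to load into acad
```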

Enter the following command in the Hive shell:
describe acad;
You will see the four columns of acad and their types in the response:
webpage     string
sessionid   int
sessionin   string
sessionout  string
Using Hive to dynamically create tables
This blog shows a technique for creating a table inline, at the time the query is executed. Creating every table definition up front is impractical and does not scale for large ETL jobs; defining tables dynamically is especially useful for complex analytics with multiple staging points.
As discussed above, we will dynamically create a column weblength, replacing the column sessionid from the table acad.
The new table contains three fields from acad, plus a new derived field called weblength (int).
Carry out the following steps to create an inline table definition using an alias:

  1. Open a text editor of your choice.
  2. Add the following inline creation syntax:

create table <tablename> as select <columnname1>, <columnname2>, …, <columnname> as <new_column_name> from acad;
create table acad_with_length as select webpage, sessionin, sessionout, length(webpage) as weblength from acad;

  3. Save the script as filename.hql (acad_createtable_as.hql) in the active directory.
  4. Run the script from the operating system shell by supplying the -f option to the Hive client, as follows: hive -f acad_createtable_as.hql

  5. To verify that the table was created successfully, issue the following command to the Hive client directly, using the -e option: hive -e "describe acad_with_length"
  6. You should see a table with three string fields and a fourth int field holding the webpage length:


Explanation of the script:
CREATE TABLE acad_with_length AS
The above statement defines a new table named acad_with_length:
SELECT webpage, sessionin, sessionout, length(webpage) as weblength FROM acad;
We then define the body of this table as an alias to the result set of a nested SELECT statement. In this case, our SELECT statement simply grabs the webpage, sessionin, and sessionout fields from each entry in the acad table. The field names are copied over as the field names of our new table acad_with_length. We also define an additional field, aliased as weblength, calculated for each selected record: it stores an int value representing the number of characters in the record's webpage field.
In one simple statement, we created a table with a subset of fields from our starting table, as well as a new derived field.
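The CREATE TABLE ... AS SELECT (CTAS) pattern used here is standard SQL, so its behavior can be sketched outside Hive as well. The following minimal Python sketch uses SQLite, whose length() function behaves like Hive's for ASCII strings, with the same table and column names as above; the inserted rows are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Source table with the same schema as the Hive table acad
conn.execute(
    "create table acad(webpage text, sessionid int, sessionin text, sessionout text)"
)
conn.executemany(
    "insert into acad values (?, ?, ?, ?)",
    [("index.html", 101, "10:10:01", "10:14:35"),     # illustrative rows
     ("products.html", 102, "10:20:00", "10:21:45")],
)

# CTAS: copy three columns and derive weblength, dropping sessionid
conn.execute(
    "create table acad_with_length as "
    "select webpage, sessionin, sessionout, length(webpage) as weblength from acad"
)

for row in conn.execute("select webpage, weblength from acad_with_length"):
    print(row)   # ('index.html', 10) then ('products.html', 13)
```

As in Hive, the new table's schema is inferred entirely from the SELECT list, so no explicit column definitions are needed.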
Keep visiting our site for more updates on Big Data and other technologies.
