
Sentiment Analysis on Tweets with Apache Hive Using AFINN Dictionary

In this post, we will discuss how to perform Sentiment Analysis on tweets from Twitter using Hive. In our previous post, we discussed how to perform Sentiment Analysis on tweets using Pig.

We have collected the tweets from Twitter using Flume; you can refer to this post to learn how to collect tweets from Twitter in real time.

As the tweets coming in from Twitter are in JSON format, we need to load them into Hive using a JSON input format. We will use the Cloudera Hive JSON SerDe for this purpose.

You can download the Cloudera JSON SerDe from the link below:

hive-serdes-1.0-SNAPSHOT.jar

After downloading the Cloudera JSON SerDe, copy the jar file into the lib directory of your Hive installation. We then need to register the jar with Hive, as shown below:

Syntax:

ADD JAR <path of the jar file>;
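
For example, if the jar was copied into Hive's lib directory under /usr/local/hive, the command would look like this (the path below is only illustrative; adjust it to your installation):

ADD JAR /usr/local/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;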

After successfully adding the jar file, we need to create a Hive table to store the Twitter data.

For performing Sentiment Analysis, we need only the tweet_id and tweet_text, so we will create a Hive table that extracts the id and text fields from each tweet using the Cloudera JSON SerDe.

Below is one of the tweets we have collected:

{"filter_level":"low","retweeted":false,"in_reply_to_screen_name":"FilmFan","truncated":false,"lang":"en","in_reply_to_status_id_str":null,"id":689085590822891521,"in_reply_to_user_id_str":"6048122","timestamp_ms":"1453125782100","in_reply_to_status_id":null,"created_at":"Mon Jan 18 14:03:02 +0000 2016","favorite_count":0,"place":null,"coordinates":null,"text":"@filmfan hey its time for you guys follow @acadgild To #AchieveMore and participate in contest Win Rs.500 worth vouchers","contributors":null,"geo":null,"entities":{"symbols":[],"urls":[],"hashtags":[{"text":"AchieveMore","indices":[56,68]}],"user_mentions":[{"id":6048122,"name":"Tanya","indices":[0,8],"screen_name":"FilmFan","id_str":"6048122"},{"id":2649945906,"name":"ACADGILD","indices":[42,51],"screen_name":"acadgild","id_str":"2649945906"}]},"is_quote_status":false,"source":"<a href=\"https://about.twitter.com/products/tweetdeck\" rel=\"nofollow\">TweetDeck<\/a>","favorited":false,"in_reply_to_user_id":6048122,"retweet_count":0,"id_str":"689085590822891521","user":{"location":"India ","default_profile":false,"profile_background_tile":false,"statuses_count":86548,"lang":"en","profile_link_color":"94D487","profile_banner_url":"https://pbs.twimg.com/profile_banners/197865769/1436198000","id":197865769,"following":null,"protected":false,"favourites_count":1002,"profile_text_color":"000000","verified":false,"description":"Proud Indian, Digital Marketing Consultant,Traveler, Foodie, Adventurer, Data Architect, Movie Lover, Namo Fan","contributors_enabled":false,"profile_sidebar_border_color":"000000","name":"Bahubali","profile_background_color":"000000","created_at":"Sat Oct 02 17:41:02 +0000 2010","default_profile_image":false,"followers_count":4467,"profile_image_url_https":"https://pbs.twimg.com/profile_images/664486535040000000/GOjDUiuK_normal.jpg","geo_enabled":true,"profile_background_image_url":"http://abs.twimg.com/images/themes/theme1/bg.png","profile_background_image_url_https":"https://abs.twimg.com/images/themes/theme1/bg.png","follow_request_sent":null,"url":null,"utc_offset":19800,"time_zone":"Chennai","notifications":null,"profile_use_background_image":false,"friends_count":810,"profile_sidebar_fill_color":"000000","screen_name":"Ashok_Uppuluri","id_str":"197865769","profile_image_url":"http://pbs.twimg.com/profile_images/664486535040000000/GOjDUiuK_normal.jpg","listed_count":50,"is_translator":false}}

The tweet is in nested JSON format. From this tweet, we will extract the id field (our tweet_id) and the text field (our tweet_text).

Our tweets are stored in the ‘/user/flume/tweets/‘ directory of HDFS.

Now, let’s create an external table in Hive in the same directory where our tweets are present, i.e., ‘/user/flume/tweets/’, so that the tweets in this location are automatically available in the Hive table.

The command for creating a Hive table to store id and text of the tweets is as follows:

create external table load_tweets(id BIGINT, text STRING) ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe' LOCATION '/user/flume/tweets';

We can check the schema of the table using the below command:

describe load_tweets;
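
If everything is set up correctly, describe should report the two columns and their types, roughly as follows:

id      bigint
text    string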

We can see that the created Hive table has two columns: id and text.

We can view the tweet_id and tweet_text which are present in the table by using the below command:

select * from load_tweets;
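
For the sample tweet shown earlier, the result would include a row like the following (your ids and text will differ):

689085590822891521      @filmfan hey its time for you guys follow @acadgild To #AchieveMore and participate in contest Win Rs.500 worth vouchers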

We can see that the tweet_id and tweet_text have been loaded successfully into the table.

Next, we will split the text into words using the split() UDF available in Hive. The split() function returns an array of values, so we will create another Hive table to store the tweet_id along with the array of words.

create table split_words as select id as id,split(text,' ') as words from load_tweets;

We can see the schema of the table by using the ‘describe’ command.
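
The words column should now appear as an array type, roughly:

id      bigint
words   array<string>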

Now, we can view the contents of the table by using the below command:

select * from split_words;
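
For the sample tweet, the stored row would look roughly like this (truncated for readability):

689085590822891521      ["@filmfan","hey","its","time","for","you","guys","follow","@acadgild",...]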

Next, let’s split each word inside the array into a new row. For this, we need to use a UDTF (User Defined Table Generating Function). Hive has a built-in UDTF called explode, which extracts each element from an array and creates a new row for each element.

Now, let’s create another table which can store id and word.

create table tweet_word as select id as id,word from split_words LATERAL VIEW explode(words) w as word;

Note: Syntax for LATERAL VIEW explode UDTF is as follows:

lateralView: LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*
fromClause: FROM baseTable (lateralView)*

The explode UDTF has a limitation: it cannot be used together with other columns in the same select statement. So we use LATERAL VIEW in conjunction with explode, which joins each exploded row back to the row it came from, allowing other columns (here, the id) to be selected alongside the exploded words.

We can see the schema of the table by using the ‘describe’ command.

In the new table, each word is stored as a plain string rather than as part of an array. We can see the contents of the table by using the following command:

select * from tweet_word;
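
For the sample tweet, the first few rows would look like this (illustrative):

689085590822891521      @filmfan
689085590822891521      hey
689085590822891521      its
689085590822891521      time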

We can see that the array of words has been split so that each word occupies its own row.

Let’s use a dictionary called AFINN to calculate the sentiments. AFINN is a dictionary of around 2,500 words, each rated from -5 (very negative) to +5 (very positive) depending on its meaning.

We will create a table to load the contents of AFINN dictionary. You can download the dictionary from the below link:

AFINN dictionary

create table dictionary(word string,rating int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

Now, let’s load the AFINN dictionary into the table by using the following command:

LOAD DATA INPATH '/AFINN.txt' INTO TABLE dictionary;

Here, the AFINN.txt file is in the root directory of HDFS. Note that LOAD DATA INPATH moves the file from its source location into the table’s directory in the Hive warehouse.

We can view the contents of the dictionary table by using this command:


select * from dictionary;
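
The first few rows mirror the tab-separated dictionary file, for example (sample AFINN entries, shown for illustration):

abandon -2
abandoned       -2
abandons        -2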

Now, we will join the tweet_word table with the dictionary table so that each word is paired with its rating.

create table word_join as select tweet_word.id, tweet_word.word, dictionary.rating from tweet_word LEFT OUTER JOIN dictionary ON (tweet_word.word = dictionary.word);
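
One practical caveat: AFINN entries are all lowercase, while tweet words keep their original casing and punctuation, so an exact string match will miss many words (a few readers in the comments below report NULL ratings for every word because of this). A variant that at least normalizes case, using Hive's built-in lower() function, would be:

create table word_join as select tweet_word.id, tweet_word.word, dictionary.rating from tweet_word LEFT OUTER JOIN dictionary ON (lower(tweet_word.word) = dictionary.word);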

We can see the schema of the table by describing it.

We can see that the rating column has been added alongside the id and the word. Whenever a word of the tweet matches a word in the dictionary, that word receives the dictionary rating; otherwise, the rating is NULL.

Let’s view the contents of the table by using the below command:

select * from word_join;

Now we will group by the tweet_id so that all the words of one tweet are brought together, and then take the average of the ratings of those words so that the average rating of each tweet can be found.

select id, AVG(rating) as rating from word_join GROUP BY word_join.id ORDER BY rating DESC;
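
Note: on some Hive versions, mixing the bare column id in the select list with the qualified word_join.id in the GROUP BY clause fails with "Expression not in GROUP BY key id" (one commenter below hits exactly this). Writing the column the same way in both places avoids the error:

select word_join.id, AVG(word_join.rating) as rating from word_join GROUP BY word_join.id ORDER BY rating DESC;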

This command calculates the average rating of each tweet from the ratings of its individual words and arranges the tweets in descending order of rating.

The result lists each tweet_id along with its average rating.

We hope this post was helpful in calculating the sentiments of tweets using Hive. Keep visiting our site www.acadgild.com for more updates on Big Data and other technologies.


18 Comments

  1. Hello,
    Thank you for this article.
    I followed the instruction. The table is created and I could see its description with the appropriate command but the table empty.
    I don’t have any result when I query “select * from load_tweets”
    I have the appropriate files created by flume but nothing in the table.
    Can you help me resolving this problem please ?
    Thank you so much

    1. Hi Raidh,
      It seems hive-serdes-1.0-SNAPSHOT.jar has not been added properly to the classpath. Please check it once, and ensure that you are giving the correct schema while creating the table; as the data is in JSON format, you need to give the column names exactly as they appear in the data.

  2. hi , i am also working on sentimental analysis of twitter logs using hadoop,flume,hive is there is any mapreduce code s necessary for analysis???

  3. Hi Satyam,
    Your Blog is simply superb,
    Same as fallows i am trying this twitter hive analysis in cloudera,
    Flume engine is working fine and required twitter data also captured in HDFS
    after that required Hive serde jar also added in Hive lib
    and created a external table as u mentioned and it created.
    But i am not able to see the data with select * from load_tweets;
    I am getting the below error
    Fetching results ran into the following error(s):
    Bad status for request TFetchResultsReq(fetchType=0, operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=0, operationId=THandleIdentifier(secret=’\xcd |?8(E\xdc\xb3\x10\xaa-\x19\x13* ‘, guid=’\xaf?,\xa4Q\xb0A7\x94\xaa\x00 S\x0b\x88\xad’)), orientation=4, maxRows=100): TFetchResultsResp(status=TStatus(errorCode=0, errorMessage=”java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character (‘O’ (code 79)): expected a valid value (number, String, array, object, ‘true’, ‘false’ or ‘null’)\n at [Source: [email protected]; line: 1, column: 2]”, sqlState=None, infoMessages=[“*org.apache.hive.service.cli.HiveSQLException:java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character (‘O’ (code 79)): expected a valid value (number, String, array, object, ‘true’, ‘false’ or ‘null’)\n at [Source: [email protected]; line: 1, column: 2]:25:24”, ‘org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:366’, ‘org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:275’, ‘org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:752’, ‘sun.reflect.GeneratedMethodAccessor18:invoke::-1’, ‘sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43’, ‘java.lang.reflect.Method:invoke:Method.java:606’, ‘org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78’, ‘org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36’, ‘org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63’, ‘java.security.AccessController:doPrivileged:AccessController.java:-2’, ‘javax.security.auth.Subject:doAs:Subject.java:415’, ‘org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1693’, ‘org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59’, ‘com.sun.proxy.$Proxy25:fetchResults::-1’, ‘org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:438’, ‘org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:692’, ‘org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1553’, ‘org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1538’, ‘org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39’, ‘org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39’, ‘org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56’, ‘org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285’, ‘java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145’, ‘java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615’, ‘java.lang.Thread:run:Thread.java:745’, “*java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character (‘O’ (code 79)): expected a valid value (number, String, array, object, ‘true’, ‘false’ or ‘null’)\n at [Source: [email protected]; line: 1, column: 2]:29:4”, ‘org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:507’, ‘org.apache.hadoop.hive.ql.exec.FetchOperator:pushRow:FetchOperator.java:414’, ‘org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:138’, 
‘org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:1790’, ‘org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:361’, “*org.apache.hadoop.hive.serde2.SerDeException:org.codehaus.jackson.JsonParseException: Unexpected character (‘O’ (code 79)): expected a valid value (number, String, array, object, ‘true’, ‘false’ or ‘null’)\n at [Source: [email protected]; line: 1, column: 2]:30:1”, ‘com.cloudera.hive.serde.JSONSerDe:deserialize:JSONSerDe.java:128’, ‘org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:488’, “*org.codehaus.jackson.JsonParseException:Unexpected character (‘O’ (code 79)): expected a valid value (number, String, array, object, ‘true’, ‘false’ or ‘null’)\n at [Source: [email protected]; line: 1, column: 2]:38:8”, ‘org.codehaus.jackson.JsonParser:_constructError:JsonParser.java:1291’, ‘org.codehaus.jackson.impl.JsonParserMinimalBase:_reportError:JsonParserMinimalBase.java:385’, ‘org.codehaus.jackson.impl.JsonParserMinimalBase:_reportUnexpectedChar:JsonParserMinimalBase.java:306’, ‘org.codehaus.jackson.impl.ReaderBasedParser:_handleUnexpectedValue:ReaderBasedParser.java:630’, ‘org.codehaus.jackson.impl.ReaderBasedParser:nextToken:ReaderBasedParser.java:364’, ‘org.codehaus.jackson.map.ObjectMapper:_initForReading:ObjectMapper.java:2439’, ‘org.codehaus.jackson.map.ObjectMapper:_readMapAndClose:ObjectMapper.java:2396’, ‘org.codehaus.jackson.map.ObjectMapper:readValue:ObjectMapper.java:1602’, ‘com.cloudera.hive.serde.JSONSerDe:deserialize:JSONSerDe.java:126’], statusCode=3), results=None, hasMoreRows=None)
    Please help me to sort this issue;
    Thanks,
    Syam.

  4. Great post it helps a lot for us . i had one doubt how we can avoid fault tweets like if single user creating multiple account and twitting +vely

  5. hi team,
    as per the suggestion of Support Team I am posting my query in comment box of this blog.
    I have encountered an error after following the steps of this blog.
    Till step –
    hive > select * from load_tweets; every thing was fine.
    but after :-
    hive > create table split_words as select id as id,split(text,' ') as words from load_tweets;
    i am facing error.
    hive> create table split_words as select id as id,split(text,' ') as words from load_tweets;
    Total jobs = 3
    Launching Job 1 out of 3
    Number of reduce tasks is set to 0 since there’s no reduce operator
    Starting Job = job_1482551799650_0002, Tracking URL = http://localhost:8088/proxy/application_1482551799650_0002/
    Kill Command = /usr/lib/hadoop-2.2.0/bin/hadoop job -kill job_1482551799650_0002
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
    2016-12-24 14:49:25,356 Stage-1 map = 0%, reduce = 0%
    2016-12-24 14:49:45,659 Stage-1 map = 100%, reduce = 0%
    Ended Job = job_1482551799650_0002 with errors
    Error during job, obtaining debugging information…
    ———————————————————————————————————
    details are mentioned here in :-
    https://drive.google.com/open?id=0B2nmxAJLHEE8bWhWMUV3anZEM2M
    ——————————————————————————————————–
    Please help me out if you people are comfortable to resolve it.

    1. Hi Anand,
      It seems your hive shell is unable to find the following class: com.cloudera.hive.serde.JSONSerDe. Please download this jar file from this link and add it into your hive shell by using the command ADD JAR 'here you need to give the path of the downloaded jar file'. Please try again with the above suggestions and let us know if you are still facing any issues.

      1. I did the same but its giving same error ….
        hive> create table split_words as select id as id,split(text,' ') as words from load_tweets;
        WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
        Query ID = hduser_20171027184239_2d41c837-13a5-4c4f-aef6-42c3b1cf1179
        Total jobs = 3
        Launching Job 1 out of 3
        Number of reduce tasks is set to 0 since there’s no reduce operator
        Starting Job = job_1509097883368_0008, Tracking URL = http://akshay-HP-Pavilion-15-Notebook-PC:8088/proxy/application_1509097883368_0008/
        Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1509097883368_0008
        Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
        2017-10-27 18:42:50,858 Stage-1 map = 0%, reduce = 0%
        2017-10-27 18:43:16,344 Stage-1 map = 100%, reduce = 0%
        Ended Job = job_1509097883368_0008 with errors
        Error during job, obtaining debugging information…
        Examining task ID: task_1509097883368_0008_m_000000 (and more) from job job_1509097883368_0008
        Task with the most failures(4):
        —–
        Task ID:
        task_1509097883368_0008_m_000000
        URL:
        http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1509097883368_0008&tipid=task_1509097883368_0008_m_000000
        —–
        Diagnostic Messages for this Task:
        Error: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:169)
        Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        … 9 more
        Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        … 14 more
        Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        … 17 more
        Caused by: java.lang.RuntimeException: Map operator initialization failed
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:137)
        … 22 more
        Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.data.JsonSerDe not found
        at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:328)
        at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:420)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:106)
        … 22 more
        Caused by: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.data.JsonSerDe not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2122)
        at org.apache.hadoop.hive.ql.plan.PartitionDesc.getDeserializer(PartitionDesc.java:177)
        at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:295)
        … 24 more
        FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
        MapReduce Jobs Launched:
        Stage-Stage-1: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
        Total MapReduce CPU Time Spent: 0 mse

  6. An error occurring when I am apply the last command.
    I have mentioned the command and error.
    Please help….
    hive> select id,AVG(rating) as rating from word_join GROUP BY word_join.id order by rating DESC;
    FAILED: Error in semantic analysis: Line 1:7 Expression not in GROUP BY key id

  7. hi
    great post
    i have encountered an error while i entered select * from load_tweets;
    it says
    Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character ('O' (code 79)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
    at [Source: java.io.StringReader@…; line: 1, column: 2]
    pls can u help me out!!
    its urgent

  8. Hi
    This is really a great post,
    but I also encounter an error, I have successfully created a hive table but when I use to run command “select * from load_tweets;” then it is showing single row only , but there are lots of data
    hive> select * from load_tweets;
    OK
    855367491400478720 RT @SalsaBGB: Trevor Noah has proof that Donald Trump is always high https://t.co/igAUkLm6Wa via @HuffPostComedy
    Time taken: 5.423 seconds, Fetched: 1 row(s)
    Need Urgent help , please

  9. Hi Kiran,
    This is the best article i have found. Great work.
    I am able to create tweet_word and dictionary tables properly. But when i try join them, i am getting null rating for every id.
    Is this because of json data that i am getting or are they not able to join properly?

  10. I am using cloudera and i am getting this error while loading data into my hive table . please give me any clue how i resolve this issue.

    hive> load data inpath '/flume/twitter' into table load_tweets;
    Loading data to table twitter.load_tweets
    Table twitter.load_tweets stats: [numFiles=398, numRows=0, totalSize=4755646863, rawDataSize=0]
    OK
    Time taken: 2.257 seconds
    hive> select * from load_tweets;
    OK
    Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character ('O' (code 79)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
    at [Source: java.io.StringReader@…; line: 1, column: 2]
    Time taken: 0.201 seconds
