
MapReduce Use Case: YouTube Data Analysis


This blog explains how to perform YouTube data analysis with Hadoop MapReduce.
The YouTube data is publicly available, and the data set is described below under the heading Data Set Description.
Using that data set we will perform some analysis and draw out insights such as the top 10 rated videos on YouTube and who uploaded the largest number of videos.
By reading this blog you will understand how to handle data sets that lack a proper structure and how to sort the output of the reducer.

DATA SET DESCRIPTION

Column 1: Video id, 11 characters.

Column 2: Uploader of the video.

Column 3: Interval, in days, between the establishment of YouTube and the date the video was uploaded.

Column 4: Category of the video.

Column 5: Length of the video.

Column 6: Number of views for the video.

Column 7: Rating of the video.

Column 8: Number of ratings given for the video.

Column 9: Number of comments on the video.

Column 10: Ids of videos related to the uploaded video.
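To make the column layout concrete, the sketch below splits one hypothetical tab-separated record into its fields. The record is made up for illustration and is not taken from the actual data set:

```java
public class RecordLayoutDemo {
    public static void main(String[] args) {
        // Hypothetical record; the field values are invented for illustration
        String line = "qOFBMgIf1yM\tCBS\t663\tNews & Politics\t224\t426475\t3.79\t3546\t2741\tGJdU9oNpQFE";
        String[] cols = line.split("\t");          // one array element per column
        System.out.println("video id : " + cols[0]); // column 1 (11 characters)
        System.out.println("uploader : " + cols[1]); // column 2
        System.out.println("category : " + cols[3]); // column 4
        System.out.println("rating   : " + cols[6]); // column 7
    }
}
```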
You can download the data set from the below link.


DATA SET LINK

YouTube Data Set

PROBLEM STATEMENT 1

Here we will find the top 5 categories with the maximum number of videos uploaded.

SOURCE CODE

From the mapper, we want to emit the video category as the key and the constant IntWritable value 1 as the value. These pairs pass through the shuffle and sort phase and are then sent to the reducer phase, where the values are aggregated.

MAPPER CODE

public class Top5_categories {
   public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
      private Text category = new Text();
      private final static IntWritable one = new IntWritable(1);
      public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
         String line = value.toString();
         String str[] = line.split("\t");
         if (str.length > 5) {
            category.set(str[3]);
            // emit (category, 1) only for rows with enough columns
            context.write(category, one);
         }
      }
   }

Explanation of the above Mapper code:
In line 1 we declare a class named Top5_categories.
In line 2 we extend the default Mapper class with the arguments KeyIn as LongWritable, ValueIn as Text, KeyOut as Text, and ValueOut as IntWritable.

In line 3 we declare a private Text variable 'category' which will store the category of the YouTube video.

In line 4 we declare a private final static IntWritable variable 'one' which holds the constant value 1 for every record. MapReduce deals with key and value pairs; here the key is the category and the value is this constant 1.

In line 5 we override the map method, which runs once for every line of input.

In line 7 we store the line in a String variable 'line'.

In line 8 we split the line on the tab ("\t") delimiter and store the values in a String array, so that every column of the row lands in its own array element.

In line 9 we check whether the string array has length greater than 5, which means the row has at least 6 columns; only then does execution enter the if block, which avoids an ArrayIndexOutOfBoundsException on malformed rows.

In line 10 we store the category, which is in the 4th column.

In line 12 we write the key and value to the context, which is the output of the map method.
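The mapper logic can be tried outside Hadoop. The plain-Java sketch below (no Hadoop classes, made-up input lines) applies the same split-and-guard steps and prints what the mapper would emit; note that the deliberately truncated second line is silently skipped by the length check:

```java
public class MapLogicDemo {
    public static void main(String[] args) {
        // Hypothetical input lines; the second one is deliberately truncated
        String[] lines = {
            "vid00000001\tuserA\t100\tMusic\t210\t5000\t4.5\t12\t3\trelated1",
            "vid00000002\tuserB"   // too few columns, so the guard skips it
        };
        for (String line : lines) {
            String[] str = line.split("\t");
            if (str.length > 5) {                   // same guard as the mapper
                System.out.println(str[3] + "\t1"); // emit (category, 1)
            }
        }
    }
}
```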

REDUCER CODE

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
   public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
           int sum = 0;
           for (IntWritable val : values) {
               sum += val.get();
           }
           context.write(key, new IntWritable(sum));
   }
}

Explanation of the above Reducer code:

Line 1 extends the default Reducer class with the arguments KeyIn as Text and ValueIn as IntWritable, which are the same as the outputs of the mapper class, and KeyOut as Text and ValueOut as IntWritable, which will be the final outputs of our MapReduce program.

In line 2 we override the reduce method, which runs once for every key.

In line 3 we declare an integer variable 'sum' which will store the sum of all the values for the key.

In line 4 a for-each loop runs once for every value inside the Iterable values coming from the shuffle and sort phase after the mapper phase.

In line 5 we accumulate the sum of the values.

Line 7 writes the respective key and the obtained sum as the value to the context.
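After shuffle and sort, the reducer sees each category key together with the list of 1s emitted for it. This plain-Java sketch (no Hadoop classes, hypothetical grouped values) mirrors the summing loop above:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceLogicDemo {
    public static void main(String[] args) {
        // Hypothetical values grouped under one key, e.g. ("Music", [1, 1, 1, 1])
        List<Integer> values = Arrays.asList(1, 1, 1, 1);
        int sum = 0;
        for (int val : values) {   // same accumulation as the reducer
            sum += val;
        }
        System.out.println("Music\t" + sum);   // the (key, count) pair the reducer writes
    }
}
```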

CONF CODE

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

These two configuration lines are included in the main (driver) class to specify the output key type and the output value type of the Mapper.
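The post shows only these two configuration lines; the rest of the driver lives in the linked source. For orientation, a typical driver for this job might look like the sketch below. This is an assumption based on the standard org.apache.hadoop.mapreduce.Job API, not the exact code from the GitHub link; the class and path names are the ones used elsewhere in this post:

```java
// Sketch of a driver method (assumed, based on the standard Hadoop 2.x Job API)
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "Top5Categories");
    job.setJarByClass(Top5_categories.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setMapOutputKeyClass(Text.class);        // the two lines from CONF CODE above
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /youtubedata.txt
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /top5_out
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
```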
You can download the whole source code from the below link.

SOURCE CODE LINK

GitHub link for Problem statement 1

HOW TO EXECUTE

hadoop jar top5.jar /youtubedata.txt /top5_out

Here 'hadoop' specifies that we are running a Hadoop command, 'jar' specifies the type of application we are running, and top5.jar is the jar file we created from the above source code.

The path of the input file, /youtubedata.txt, is in the root directory of HDFS, and /top5_out is the location given for storing the output.

How to view output

hadoop fs -cat /top5_out/part-r-00000 | sort -n -k2 -r | head -n5
Here 'hadoop' specifies that we are running a Hadoop command, 'fs' specifies that we are performing an operation on the Hadoop Distributed File System, '-cat' is used to view the contents of a file, and /top5_out/part-r-00000 is the file where the output is stored.
The part file containing the actual output is created by default by the TextOutputFormat class of Hadoop.
Here sort -n -k2 -r | head -n5 brings you the top 5 categories with the maximum number of videos uploaded.
Instead of writing a secondary sort after the reducer, we can simply use this command to get the required output.
sort sorts the data, -n means sorting numerically, -k2 means by the second column, -r means in reverse (descending) order, and head -n5 takes the first 5 lines after sorting. Note that because some category names contain spaces (for example "People & Blogs"), it is safer to sort on the tab-delimited second field with sort -t$'\t' -k2 -nr, as pointed out in the comments below.
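The shell pipeline is simply a descending numeric sort on the second column followed by taking the first five rows. The same selection can be sketched in plain Java over the reducer output (the counts are the ones from the sample run shown in this post, plus one hypothetical extra row to show the cut-off):

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

public class Top5Demo {
    public static void main(String[] args) {
        // Reducer output as (category, video count) pairs
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("Education", 65);
        counts.put("Entertainment", 911);
        counts.put("Sports", 253);
        counts.put("Music", 870);
        counts.put("Comedy", 420);
        counts.put("Travel", 40);   // hypothetical extra row, dropped by the top-5 cut
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder())) // like sort -n -k2 -r
              .limit(5)                                                                        // like head -n5
              .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
    }
}
```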

Output

[[email protected] -]$ hadoop jar top5.jar /youtubedata.txt /top5_out
15/10/22 11:06:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/22 11:06:48 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/10/22 11:06:49 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/10/22 11:06:50 INFO input.FileInputFormat: Total input paths to process : 1
15/10/22 11:06:50 INFO mapreduce.JobSubmitter: number of splits:1
15/10/22 11:06:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1445504384269_0002
15/10/22 11:06:51 INFO impl.YarnClientImpl: Submitted application application_1445504384269_0002
15/10/22 11:06:52 INFO mapreduce.Job: The url to track the job: http://localhost.localdomain:8088/proxy/application_1445504384269_0002/
15/10/22 11:06:52 INFO mapreduce.Job: Running job: job_1445504384269_0002
15/10/22 11:07:05 INFO mapreduce.Job: Job job_1445504384269_0002 running in uber mode : false
15/10/22 11:07:05 INFO mapreduce.Job: map 0% reduce 0%
15/10/22 11:07:15 INFO mapreduce.Job: map 100% reduce 0%
15/10/22 11:07:27 INFO mapreduce.Job: map 100% reduce 100%
[[email protected] -]$ hadoop fs -cat /top5_out/part-r-00000 | sort -n -k2 -r | head -n5
15/10/22 13:22:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Entertainment 911
Music 870
Comedy 420
Sports 253
Education 65


PROBLEM STATEMENT 2

In this problem statement, we will find the top 10 rated videos on YouTube.

SOURCE CODE

From the mapper, we want to emit the video id as the key and the rating as the value. These pairs pass through the shuffle and sort phase and are then sent to the reducer phase, where the values are aggregated by averaging.

MAPPER CODE

1.  public class Video_rating {
2.     public static class Map extends Mapper<LongWritable, Text, Text,
3.        FloatWritable> {
4.        private Text video_name = new Text();
5.        private FloatWritable rating = new FloatWritable();
6.        public void map(LongWritable key, Text value, Context context)
7.              throws IOException, InterruptedException {
8.           String line = value.toString();
9.           String str[] = line.split("\t");
10.          if (str.length > 6) {
11.             video_name.set(str[0]);
12.             if (str[6].matches("\\d+.+")) {
13.                float f = Float.parseFloat(str[6]);
14.                rating.set(f);
15.                context.write(video_name, rating);
16.             }
17.          }
18.       }
19.    }
20. }

Explanation of the above Mapper code
In line 1 we declare a class named Video_rating.
In lines 2-3 we extend the default Mapper class with the arguments KeyIn as LongWritable, ValueIn as Text, KeyOut as Text, and ValueOut as FloatWritable.
In line 4 we declare a private Text variable 'video_name' which will store the video id, an opaque 11-character string.
In line 5 we declare a private FloatWritable variable 'rating' which will store the rating of the video. MapReduce deals with key and value pairs; here the key is the video id and the value is its rating.
In line 6 we override the map method, which runs once for every line.
In line 8 we store the line in a String variable 'line'.
In line 9 we split the line on the tab ("\t") delimiter and store the values in a String array, so that every column of the row lands in its own array element.
In line 10 we check whether the string array has length greater than 6, which means the row has at least 7 columns; only then does execution enter the if block, which avoids an ArrayIndexOutOfBoundsException when accessing the rating column.
In line 11 we store the video id, which is in the 1st column.
In line 12 we use a regular expression, via Java's matches method, to check whether the data at that index is numeric; only then do we proceed to treat it as a floating-point value.
In line 13 we convert that numeric string into a float with Float.parseFloat.
In line 14 we store the rating of the video in the 'rating' variable.
In line 15 we write the key and value to the context, which is the output of the map method.
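The rating check can be exercised on its own. This small sketch (plain Java, sample strings made up) shows what the pattern "\d+.+" accepts; note that it requires at least one character after the digits, so a bare integer rating such as "5" would be skipped:

```java
public class RatingParseDemo {
    public static void main(String[] args) {
        String[] samples = { "4.99", "3.5", "5", "not-a-number" };
        for (String s : samples) {
            // Same pattern as the mapper: one or more digits followed by at least one more character
            if (s.matches("\\d+.+")) {
                System.out.println(s + " -> " + Float.parseFloat(s));
            } else {
                System.out.println(s + " -> skipped");
            }
        }
    }
}
```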

REDUCER CODE

public static class Reduce extends Reducer<Text, FloatWritable, Text, FloatWritable> {
       public void reduce(Text key, Iterable<FloatWritable> values, Context context)
         throws IOException, InterruptedException {
           float sum = 0;
           int l = 0;
           for (FloatWritable val : values) {
               l += 1;
               sum += val.get();
           }
           sum = sum / l;
           context.write(key, new FloatWritable(sum));
       }
}

Explanation of the above Reducer code:

Line 1 extends the default Reducer class with the arguments KeyIn as Text and ValueIn as FloatWritable, which are the same as the outputs of the mapper class, and KeyOut as Text and ValueOut as FloatWritable, which will be the final outputs of our MapReduce program.

In line 2 we override the reduce method, which runs once for every key.

In line 4 we declare a float variable 'sum' which will store the sum of all the ratings for the key.

In line 5 we declare another variable 'l' which is incremented once for every value of the key, giving the count of ratings.

In line 6 a for-each loop runs once for every value inside the Iterable values coming from the shuffle and sort phase after the mapper phase.

In line 8 we accumulate the sum of the values.

In line 10 we divide the obtained sum by the count to get the average rating, and line 11 writes the respective key and that average as the value to the context.
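The averaging logic can be checked outside Hadoop with plain Java. The hypothetical grouped values below stand in for the ratings the reducer receives for one video id:

```java
import java.util.Arrays;
import java.util.List;

public class AverageDemo {
    public static void main(String[] args) {
        // Hypothetical ratings grouped under one video id after shuffle and sort
        List<Float> values = Arrays.asList(4.5f, 5.0f, 4.0f);
        float sum = 0;
        int l = 0;
        for (float val : values) {   // same loop as the reducer
            l += 1;
            sum += val;
        }
        sum = sum / l;               // average rating for this video
        System.out.println("videoId\t" + sum);
    }
}
```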

CONF CODE

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(FloatWritable.class);

These two configuration lines are included in the main (driver) class to specify the output key type and the output value type of the Mapper.
You can download the whole source code from the below link

SOURCE CODE LINK

GitHub link for Problem statement 2

HOW TO EXECUTE

hadoop jar video_rating.jar /youtubedata.txt /videorating_out

The explanation for the above command is the same as the one given in problem statement 1.

How to view output

hadoop fs -cat /videorating_out/part-r-00000 | sort -n -k2 -r | head -n10
The explanation for the above command is the same as the one given in problem statement 1.

Output:

[[email protected] -]$ hadoop jar video_rating.jar /youtubedata.txt /videorating_out
15/10/22 12:52:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/22 12:52:59 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/10/22 12:53:00 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/10/22 12:53:01 INFO input.FileInputFormat: Total input paths to process : 1
15/10/22 12:53:02 INFO mapreduce.JobSubmitter: number of splits:1
15/10/22 12:53:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1445504384269_0006
15/10/22 12:53:03 INFO impl.YarnClientImpl: Submitted application application_1445504384269_0006
15/10/22 12:53:03 INFO mapreduce.Job: The url to track the job: http://localhost.localdomain:8088/proxy/application_1445504384269_0006/
15/10/22 12:53:03 INFO mapreduce.Job: Running job: job_1445504384269_0006
15/10/22 12:53:21 INFO mapreduce.Job: Job job_1445504384269_0006 running in uber mode : false
15/10/22 12:53:21 INFO mapreduce.Job: map 0% reduce 0%
15/10/22 12:53:36 INFO mapreduce.Job: map 100% reduce 0%
15/10/22 12:53:46 INFO mapreduce.Job: map 100% reduce 100%
15/10/22 12:53:46 INFO mapreduce.Job: Job job_1445504384269_0006 completed successfully
[[email protected] -]$ hadoop fs -cat /videorating_out/part-r-00000 | sort -n -k2 -r | head -n10
15/10/22 12:54:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
r30-2Q3V1jc 4.99
KOweSiiviVO 4.99
jIuCA4RRtXE 4.99
h_8gsd8IT7Y 4.99
cYbVkXai6Ec 4.99
aoDBacpCX34 4.99
3v1oRJYR6A 4.99
xe-f-zg_KIU 4.98
U4yJB1ynN-Y 4.98
sWIOyZnnChk 4.98

We hope this blog helps you get a grip on MapReduce programming. Refer to the blog below to understand the analysis done on the Titanic data set:
Titanic Data Analysis. Keep visiting our website AcadGild for more updates on Big Data and other technologies.


46 Comments

  1. Pingback: 34 External Machine Learning Resources and Related Articles — Dr. Jonathan Jenkins, DBA, CSSBB, MSQA
  2. hi,
    i run the command hadoop jar top5.jar /youtubedata.txt /top5_out
    but this error is showing-
    Exception in thread “main” java.lang.ClassNotFoundException: /youtubedata/txt
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:278)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    can you please suggest a solution.

    1. This error might be because of the incorrect path of jar file,please check whether you have given correct path for Jar file.
      Also ensure that you include main class while exporting the jar file and all your daemons are running in hadoop cluster.

    2. HI Dheeraj ,
      Even I am also facing this issue and I have given correct path for jar can you please help me how you had resolved it ?

      1. Hi Priyanshu,
        This specific error, Exception in thread "main" java.lang.ClassNotFoundException: /youtubedata/txt, is not a problem with the code; it relates to the command used to execute the Hadoop program. The command for executing Hadoop programs is: hadoop jar Path_to_jar_file Input_file_path Output_file_path. If the main class inside your jar file is within a package, you need to provide the fully qualified name of the main class, i.e., package_name.main_class_name, after the jar file path. In this error, the terminal is looking for the main class in the place of the input file path, which is why it throws an exception saying it could not find any class in youtubedata.txt. Please check the command carefully and execute it again. You can also follow our ebook on developing and executing your first MapReduce program from this link. You can also try running the program locally on your Linux machine so that you need not type any commands to execute the program; please refer to the Running MapReduce program in local mode blog from this link.

    3. i am sorry for the late comment, if you have the problem like this you have to find tools.jar from java and put the path of it in .bashrc and env.sh, because it you need the hadoop to run the jar and then you have to run it with this syntax top5.jar Top5_categories /youtubedata.txt /top5_out

  3. it throws exception when i run the progrmme..
    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    SLF4J: Failed to load class “org.slf4j.impl.StaticLoggerBinder”.
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    Exception in thread “main” java.lang.ArrayIndexOutOfBoundsException: 0
    at Video_rating.main(Video_rating.java:59)

  4. it throws exception when i run the program
    Exception in thread “main” java.lang.ArrayIndexOutOfBoundsException: 0
    at Top5_categories1.main(Top5_categories1.java:48)

  5. To sort the Reducer output, the command will be hadoop dfs -cat /user/output/YoutubeData/part-r-00000 | sort -t$’\t’ -k2 -nr | head -5 . This command will take tab as delemeter as Reducer writes its output as tab delemeted by default.

  6. Nice post.
    But I spotted a few errors,
    I think in problem 2 explanation line 9 and line 10 are not coded as per explanation. They are not in sync.
    I was just wondering if you did send out Rating as key and movie id as value wouldn’t they be sorted by shuffle/sort before they are sent to reducer and you interchange key-value pair in reducer so that you don’t have to get sorted results through a command.

    1. Hi Karthik,
      Thanks for the update.
      By default the output of a map reduce program will get sorted in ascending order but according to the problem statement we need to pick out the top 10 rated videos. So to sort it in descending order we have done it using the command.

  7. Pingback: Comprehensive list of data science resources [updated May 6, 2016] – WebProfIT Consulting
  8. MapReduce is a method for distributing a task across multiple nodes, Each node processes data stored on that node Where possible. Consists of two phases : 1. Map, 2. Reduce
    Features of MapReduce
    1. Automatic parallelization and distribution
    2. Fault-tolerance
    3. Status and monitoring tools
    4. A clean abstraction for programmers – MapReduce programs are usually written in Java
    5. MapReduce abstracts all the ‘housekeeping’ away from the developer – Developer can concentrate simply on writing the Map and Reduce functions
    Sir my concept of MapReduce is clear but in programming i face problem, how to overcome this, Any Book Suggestion ?

    1. You can refer to Hadoop: The Definitive Guide by Tom white for understanding how to write basic programs in MapReduce and you can refer to Hadoop Real-World Solutions Cookbook by Srinath perera for some more advanced MapReduce concepts. You can keep following our website AcadGild we will be writing many use cases for beginners on MapReduce and many other Big data technologies.

  9. Hi,
    In the above code we get how many videos were uploaded for each category, but not the top 5 categories with the maximum number of videos.

    1. Hi Manideep,
      Yes, you are right! The above MapReduce code only counts how many videos were uploaded for each category; on that output we run a shell command to take out the top 5 categories with the maximum number of videos. As this blog is aimed at beginners, we have followed this procedure. You can also pick the top 5 categories using top-k design patterns in MapReduce. You can refer to this link to learn how to apply top-k design patterns in MapReduce to pick out the top k records.

  10. hi,
    I had the written individual mapper class,reduce class and runner class in eclipse. Mapper class and Reducer class doesnt have any error . But the runner class is showing two error…
    conf.setMapperClass(Map.class); [it says :”The method setMapperClass(Class) in the type JobConf is not applicable for the arguments (Class)”]
    conf.setReducerClass(Reduce.class); [it says :”The method setReducerClass(Class) in the type JobConf is not applicable for the arguments (Class)”]

  11. Hi Kiran,
    I ran this program in pig and got a different result for 1st problem. I have verified the result manually as well.
    Here is the out I got:
    (Entertainment,908)
    (Music,862)
    (Comedy,414)
    (People & Blogs,398)
    (News & Politics,333)
    In my pig script,
    First I loaded all the data, then filtered category & video id for null, and then simply grouped them by category.
    Why my output is different than yours?
    Thanks

    1. Hi Abdul,
      The columns in the dataset are not proper so we have given the following condition if(str.length > 5) in the program so because of this condition few records will be skipped. If you remove that condition then you will get ArrayIndexOutOfBoundsException. So this is the reason for the difference in result.

  12. Hello mr krishna, i use centos 7 minimal and i somewhat cannot find the main class, with the error of
    [[email protected] ~]$ hadoop jar top5.jar main /youtubedata.txt /youtube
    Exception in thread “main” java.lang.ClassNotFoundException: main
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    i already read the book:develop and execute your first mapreduce from the first comment and i cannot use the eclipse because i use the minimalist. can you give me an answer on what is the main.class for this program?

    1. Hi vitalis,
      This particular error means the main class was not found. In the hadoop jar command, you only have to provide the driver class name when you did not specify the main class while creating the jar file. If you did specify the main class when creating the jar, you can run the application directly as hadoop jar top5.jar /youtubedata.txt /youtube, mentioning just the jar file path, the input dataset path, and the output file path. If the main class was not specified while creating the jar file, the command looks like hadoop jar top5.jar package_if_any.main_class_name /youtubedata.txt /youtube. The main class name used in this use case is Top5_categories, so the command will be hadoop jar top5.jar Top5_categories /youtubedata.txt /youtube.

      1. ok i got it, it seems that my hadoop doesn’t have the tools.jar in it so i put the path on the .bashrc and env.sh and run the jar with hadoop jar top5.jar Top5_categories /youtubedata.txt /youtube. i used centos 7 minimalist as the os, i got this answer from my lecturer though lol, but i appreciate your reply and keep the good work

  13. Hi,
    I am running the Example 1 in the Hortonworks Sandbox so i am always getting a error : java.lang.ClassNotFoundException: Class Top5_categories.java not found. I have set the path of jar as : /user/hue/hadoop.jar the jar contains the class Top5_categories.java .
    Can you help me to solve this error .

    1. Hi Pratishtha,
      Please check whether the code is inside any package, if it is inside any package then you need to include the package name also as package_name.Top5_categories . Also try building the jar file again from eclipse, this time please make sure that you are specifying the main class:: you will get this in the step Select the class of application entry point of jar creation.

  14. Hi,
    Thanks for your quick reply i tried this but nothing works.
    The process which i created jar is:
    Right click the project in eclipse -> select the export option ->then under java selected jar file -> selected the src package and then selected the location of storage and then finished.
    Request you to explain if i am wrong and explain me again but can u plz include the images of steps to be followed

    1. Hi Pratishtha,
      It seems you have missed out the step Select the class of application entry point in this step you need to select the main class of your application. After the selecting, click on Finish to complete the jar file creation. Now you can run the jar using the command hadoop jar jar_file_path input_file_path output_file_path.

  15. MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster. MapReduce is a framework for processing parallelizable problems across huge datasets using a large number of computers (nodes), collectively referred to as a cluster or a grid. Processing can occur on data stored either in a filesystem (unstructured) or in a database (structured).

  16. Hello,
    After seeing various blogs, I recently started learning Hadoop. I have a small problem: I am having trouble converting your source code for problem 1 into a proper jar file. Can you explain it? I downloaded the code into a .java file called Top5_categories.java as per the class name. Then I compiled it with javac and jarred all the resultant classes. Do I need to put a manifest.txt? Do I need to add the java file also?
    P.S.I am also beginner in java(I am studying in school 9th standard and they dont teach us java , It is result of my own curiosity ). I hope not offend anyone.
    Thanks in advance

  17. Pingback: Comprehensive list of data science resources | causality.io | causality.io
  18. To me the output from the command line seems incorrect, as it is not treating the whole string with spaces as a category, and hence you are getting the wrong output.
    below is the expected output.
    Entertainment 911
    Music 870
    Comedy 420
    People & Blogs 399
    News & Politics 343

  19. Hi,
    I am working on a project on youtube data analysis for my school. I need 10GB of dataset for my project. Can you please provide me the dataset which will be really helpful.Please do let me know
    Thanks
    Veda
