Data Science and Artificial Intelligence

Implementation Of Neural Network

I assume you have spent some time reading the last three articles in the Data Science series. If not, I recommend that you give these earlier blog articles a read:

Applications of Deep Learning Technology in Today's World

Introduction to Artificial Neural Network

Devil Is in the Detail: Data Science vs Artificial Intelligence vs Machine Learning vs Deep Learning

Reading about something in theory, however, is one thing; implementing it in practice is an entirely different matter. As they say, “In theory everything is possible; in practice it is not.” Our aim is to master both, and the only way we can do this is by practicing. Starting from this one, all our future articles will be a combination of theory and implementation using Python.

If you are not familiar with the Python language, do spend some time learning the basic syntax. Believe me, as daunting as it sounds now, you will enjoy it once you start. And even if you do not enjoy it, well: “It’s always better to sweat on the practice field than bleed in the war.”

So let’s get started. In the first article (Link 1 to the 1st Blog) we saw the origin of Neural Networks and gained some idea of how they work. To revisit it quickly:

Artificial Neural Networks are composed of multiple nodes (like biological neurons). The nodes are connected by links, and each link is associated with a weight. The nodes can communicate with each other. They take input data, perform some kind of operation and finally give a combined output. The output at each node is called its activation value. Learning takes place by altering the weight values. Here is what it looks like:

Neural Network Implementation (Without TensorFlow)

The most popular Machine Learning library for Python is scikit-learn. In this section we will implement a Neural Network using the Python programming language and scikit-learn.

(The code for the below example can also be downloaded from here – https://github.com/palashgoyal1/AG_DeepLearning/tree/master/Neural_Networks)
Step 1:

Load your data. I am using a sample dataset with 14 columns (including the label column) and 178 rows. To view how your data looks, use the “.head()” function.
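As a minimal sketch of this step (the file name “wine.csv” is an assumption; point it at wherever your data lives):

```python
import pandas as pd

# Load a dataset with 13 feature columns plus one label column.
df = pd.read_csv("wine.csv")

print(df.shape)   # expect (178, 14) for the dataset described above
print(df.head())  # inspect the first five rows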

Step 2:

Now define which columns you are using as input and which columns as output.
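Something along these lines, assuming the label lives in a column named “Class” (a hypothetical name; check your own file):

```python
# Everything except the label column is treated as input.
X = df.drop("Class", axis=1)
y = df["Class"]
```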

Step 3:

Split your dataset into training and testing parts. I have used 10% of the data (test_size=0.1) for testing purposes.
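scikit-learn’s train_test_split makes this a one-liner; test_size=0.1 holds back 10% of the rows:

```python
from sklearn.model_selection import train_test_split

# random_state fixes the shuffle so results are reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=42)
```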

Step 4:

Define your model. I am using a Multilayer Perceptron with 3 hidden layers and the same number of neurons in each layer. There are plenty of parameters one can set inside the model, but for easy understanding let’s limit ourselves to a simple Neural Network. ‘max_iter’ caps the number of training iterations; ‘hidden_layer_sizes’ gives the number of neurons in each hidden layer, and the number of such entries determines the number of hidden layers.
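A sketch of such a model; the layer width of 13 neurons is an assumption chosen to match the number of input features:

```python
from sklearn.neural_network import MLPClassifier

# Three hidden layers of 13 neurons each; max_iter caps training iterations.
mlp = MLPClassifier(hidden_layer_sizes=(13, 13, 13), max_iter=500)
```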

Step 5:

Fit the model to your training data.
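Using the training split from Step 3:

```python
mlp.fit(X_train, y_train)
```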

Step 6:

Predict using the test dataset. The result is shown below.
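In code:

```python
predictions = mlp.predict(X_test)
print(predictions)
```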

Step 7:

To view/analyze how well your model has performed, you can use indicators such as Precision and Recall. If you are not familiar with them, please read about them in the official scikit-learn documentation here: http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html

Let’s see how the model is performing on the test dataset.
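A quick way to do this is with scikit-learn’s built-in metrics helpers:

```python
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))  # precision, recall, f1
```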

Clearly, as we observe from the precision and recall, this is not a very effective model. One of the major reasons is that we have not normalized the dataset. Another problem could be that the train/test split is not very effective. You could also attribute the model’s low performance to the network structure and the small data size. Tweaking a few of these parameters and playing around with the modeling results would be a good exercise, and you might end up with better results.

You can also see in the output of Step 5 the default values of the other parameters in the model. It’s always a good idea to play around with them and discover what effect they have on the model. So this was a simple implementation of a Neural Network using scikit-learn.

In the next section we will cover the implementation of a basic Neural Network using the Python package TensorFlow.

TensorFlow

The next step is to implement the Neural Network using TensorFlow. To get a better idea of what TensorFlow is, you can refer to the informative white paper on its inner workings. I would highly recommend reading through it (http://download.tensorflow.org/paper/whitepaper2015.pdf).

In a nutshell, TensorFlow is an open-source software library for Machine Learning across a range of tasks. It uses a computational graph which is composed of a series of TensorFlow operations arranged into a graph of nodes. Each node takes zero or more tensors as input and produces a tensor as an output. One type of node is a constant. Like all TensorFlow constants, it takes no inputs, and it outputs a value that it stores internally.
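Here is a tiny sketch of that idea, written against the TensorFlow 1.x graph API that was current when the linked white paper was published:

```python
import tensorflow as tf

# Two constant nodes and an operation node that adds them.
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0, dtype=tf.float32)
node3 = tf.add(node1, node2)

# Nothing is computed until the graph is run inside a session.
with tf.Session() as sess:
    print(sess.run(node3))  # 7.0
```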

Implementation of a Neural Network using TensorFlow:

Step 1:

The first thing we want to do when working with any model is to import all the necessary modules. Along with Pandas and NumPy, we are going to import TensorFlow this time. We will also set the seed so that the same results can be reproduced later.
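For instance (the seed values are arbitrary; TensorFlow 1.x API assumed):

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Fix the seeds so the run can be reproduced later.
np.random.seed(42)
tf.set_random_seed(42)
```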

Step 2:

Now that we have imported all the necessary modules and set the seed, we are going to load the dataset and convert our target variable into one-hot encoded form.
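A sketch, reusing the assumed “wine.csv” file and “Class” column from the scikit-learn example:

```python
df = pd.read_csv("wine.csv")

# One-hot encode the label: class k becomes a vector with a 1 at
# position k and 0 elsewhere (e.g. class 2 of 3 -> [0, 1, 0]).
labels_onehot = pd.get_dummies(df["Class"]).values
```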

Step 3:

Now that we have the dataset, we will divide it into input and target form, represented by X and Y.
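Continuing the sketch:

```python
# Inputs are every column except the label; targets are the one-hot labels.
X = df.drop("Class", axis=1).values.astype(np.float32)
Y = labels_onehot.astype(np.float32)
```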

We are going to shuffle our data before learning. The reason is that the current dataset poses a problem: the data frame is sorted by class. Suppose we train on 90% of the data. We would then be training on data for Class 1, Class 2 and a few rows of Class 3, but testing on a dataset that belongs entirely to Class 3.

Now that we have shuffled the data, we will split the dataset into train and test sets, for both the inputs and the labels.
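For example (the 90/10 split mirrors the scikit-learn example above):

```python
# Shuffle rows so the classes are mixed, then hold out 10% for testing.
indices = np.random.permutation(len(X))
X, Y = X[indices], Y[indices]

split = int(0.9 * len(X))
train_x, test_x = X[:split], X[split:]
train_y, test_y = Y[:split], Y[split:]
```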

It’s time to define our model now. ‘epochs’ represents the number of times we are going to pass over our dataset. ‘interval’ represents the number of epochs after which we print our results. We have used a standard learning rate of 0.002; you can play around with it if required and compare the results across multiple values.
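A minimal single-hidden-layer sketch in the TensorFlow 1.x graph style; the epoch count, hidden-layer width and weight initialization are assumptions, while the 0.002 learning rate comes from the text:

```python
epochs = 1000        # number of passes over the training data (assumed)
interval = 100       # print progress every `interval` epochs
learning_rate = 0.002

n_features = train_x.shape[1]   # 13 input features
n_classes = train_y.shape[1]    # 3 output classes
n_hidden = 10                   # hidden-layer width (assumed)

x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.float32, [None, n_classes])

# One hidden layer with a sigmoid activation, then a linear output layer.
w1 = tf.Variable(tf.random_normal([n_features, n_hidden]))
b1 = tf.Variable(tf.zeros([n_hidden]))
hidden = tf.nn.sigmoid(tf.matmul(x, w1) + b1)

w2 = tf.Variable(tf.random_normal([n_hidden, n_classes]))
b2 = tf.Variable(tf.zeros([n_classes]))
logits = tf.matmul(hidden, w2) + b2

# Softmax cross-entropy loss, minimized with plain gradient descent.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
```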

Finally, we will train the model and test it on the dataset reserved for testing purposes.
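Putting it together in a session (a sketch that continues the code above):

```python
correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        sess.run(train_op, feed_dict={x: train_x, y: train_y})
        if (epoch + 1) % interval == 0:
            l = sess.run(loss, feed_dict={x: train_x, y: train_y})
            print("epoch %d, loss %.4f" % (epoch + 1, l))
    # Evaluate on the held-out test split.
    print("test accuracy:",
          sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
```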

Now we have successfully created our very first Neural Network model in TensorFlow.

The result we received does not seem quite satisfactory, again for the same reasons: we have neither normalized our dataset nor tuned the parameters. The main idea here is just to get familiar with the Neural Network model using TensorFlow. We will work with better, more fine-tuned TensorFlow models in the next parts of the series.

The above steps have covered the implementation of Neural Networks with commonly available Python packages as well as with TensorFlow. While it does not make a significant difference whether we use TensorFlow at this point, as we move ahead you will realize that it is almost necessary as far as bringing a model to production is concerned.

In the case of Deep Neural Networks, for instance, there are two problems we encounter:

a. They require a lot of computation.
b. They require large training datasets to train the model well.

It also takes a lot of trial and error to get the best training results across multiple combinations of network designs and algorithms. Sometimes it can take days or even weeks for a powerful GPU server to train a Deep Network on a dataset of millions of images. Given these difficulties, it becomes almost necessary to use TensorFlow, for the very fact that it provides astonishing flexibility, portability and reusability.

We will read more on this when we deal with Convolutional Neural Networks. The next article in the series is “Introduction to Convolutional Neural Networks”.
