How Does K-Nearest Neighbor Work in a Machine Learning Classification Problem?

  • Rohit Dwivedi
  • Apr 23, 2020
  • Machine Learning

In a machine learning task, we usually face two kinds of problems to be solved: it is either a ‘Classification’ problem or a ‘Regression’ problem. 

 

What is Classification? 

 

Classification is the process of sorting a given set of data points into distinct classes. It can be applied to both structured and unstructured data. The classes are often referred to as labels or targets. For example, classifying different fruits into their respective types is a classification task. 

 

What is Regression?

 

Regression is a problem in which the target holds continuous (real) values, such as predicting a person's salary or age. 

 

What is K-Nearest Neighbor?

 

K-Nearest Neighbor, also known as KNN, is a supervised learning algorithm that can be used for both regression and classification problems, but it is most widely used for classification in machine learning. KNN works on the principle that data points lying near each other tend to belong to the same class; in other words, similar things are close to each other.

 

Let us understand the concept by taking an example: 

 

(Figure: two classes, black and green, and a data point that is to be classified)


The image shows the two classes and one blue data point that is to be classified into one of the two classes.

(Figure: the blue data point classified into the black class)


The graph above shows the different data points: the black ones, the green ones, and a blue data point that has to be classified into one of these two classes. 

 

The graphs above show the same two classes, black and green, and a blue data point that the algorithm must classify as either black or green. But how does the KNN algorithm compute this?

 

The KNN algorithm first chooses a number k, the number of nearest neighbors to consider for the data point that is to be classified. If the value of k is 5, it looks for the 5 nearest neighbors of that data point. In this example, k = 4, so KNN finds the 4 nearest neighbors. Because the blue data point is close to these neighbors, which all belong to the black class, it is assigned to the black class. The green class is not chosen because its data points are nowhere near the blue data point. 

 

The K-Nearest Neighbor classifier is one of the introductory supervised classifiers that every data science learner should be aware of. The algorithm was first used for a pattern classification task by Fix & Hodges in 1951, and the name reflects how it works: a point is classified by looking at its k nearest neighbors. KNN is aimed at pattern recognition tasks. 

 

The simplest version of the K-nearest neighbor classifier predicts the target label by finding the class of the nearest neighbor. The closeness of the point to be classified to each training point is calculated using Euclidean distance. 
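
As a minimal sketch (the function and variable names below are illustrative, not from the original article), Euclidean distance between two feature vectors can be computed like this:

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

# Example: distance between two 2-D points
print(euclidean_distance([1.0, 2.0], [4.0, 6.0]))  # 5.0
```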

 

What is the significance of k?

 

  • Value of k – a bigger value of k increases confidence in the prediction, because more neighbors vote on the result. 

  • Decisions may be skewed if k has a very large value.

 

How to decide the value of k?


(Figure: wrong classification when k has a very large value)

(Figure: two classes, orange and blue, and a point X that is to be classified into one of them)


Consider the case of two classes, one blue and the other orange. If we assign k a value between 1 and 4, X is classified correctly as the blue class. But if we assign k a value that is too large, X is misclassified as the orange class. 

 

How to choose K?

 

  • Deciding the k can be the most critical part of K-nearest Neighbors. 

  • If the value of k is small, noise has a larger influence on the result, and the risk of overfitting the model is very high.

  • A very large value of K defeats the basic principle behind KNN, because distant points start influencing the prediction.

  • You can find the optimal value of K using cross-validation. 

 

KNN algorithm pseudo code implementation

 

  1. Load the desired data. 

  2. Choose the value of k.

  3. For the point whose class is to be predicted, iterate from 1 to the total number of training points we have.

  4. The next step is to calculate the distance between the data point whose class is to be predicted and all the training data points. Euclidean distance can be used here.

  5. Arrange the distances in non-decreasing order. 

  6. Given the chosen positive value of k, filter the k lowest values from the sorted list.

  7. We now have the top k distances, i.e. the k nearest neighbors.

  8. Let ka represent the number of points among the k neighbors that belong to the ath class.

  9. If ka > kb for every other class b, put x in class a (majority vote), as shown in the sketch below.
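
The steps above can be turned into a small, self-contained sketch. This is an illustrative implementation, not the exact code used later in the article; the function and variable names are mine.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Predict the class of point x using the k nearest training points."""
    X_train = np.asarray(X_train, dtype=float)
    x = np.asarray(x, dtype=float)

    # Step 4: Euclidean distance from x to every training point
    distances = np.sqrt(np.sum((X_train - x) ** 2, axis=1))

    # Steps 5-7: sort the distances and keep the k smallest
    nearest_idx = np.argsort(distances)[:k]

    # Steps 8-9: count labels among the k neighbors and take the majority class
    votes = Counter(y_train[i] for i in nearest_idx)
    return votes.most_common(1)[0][0]

# Toy example: two classes in 2-D
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = ['black', 'black', 'black', 'green', 'green', 'green']
print(knn_predict(X, y, [2, 2], k=3))  # 'black'
```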

 

Let's take a dataset and use the KNN algorithm to get more hands-on experience with using KNN for classification. We will use the Iris dataset from the UCI Machine Learning Repository. 

 

(Image: loading the dataset and checking for missing values and information about the data)

 

Initially, I imported the dataset. Then I did a bit of EDA, such as checking for missing values and inspecting information about the dataset. No missing values were found. All the feature columns were non-null float64 types, and the single class column was a non-null object, which is our target column. 
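
A sketch of that step is shown below. I am assuming the dataset is available as a local CSV file named iris.csv with the usual UCI column names; the original notebook may load it differently.

```python
import pandas as pd

# Assumed column names for the UCI Iris data file
cols = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class']
df = pd.read_csv('iris.csv', names=cols)

df.info()                 # four non-null float64 feature columns, one non-null object target
print(df.isnull().sum())  # confirm there are no missing values
```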

 

(Image: encoding the class column and defining the target and independent variables)

 

As you can see, the class column is categorical, so it needs to be label encoded.

 

So, I used LabelEncoder for this. LabelEncoder assigns an integer label to each category in a categorical column; in this case it assigned the following values to the classes: 

 

Iris-setosa - 0, Iris-versicolor - 1 , Iris-virginica - 2.

 

After label encoding, I assigned the independent features to X and the target to Y. Then I split the data in a 70:30 ratio, using 70% to train the model and the remaining 30% to test it, with train_test_split.
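
A hedged sketch of the encoding and split, continuing from the hypothetical df above (the random_state is my own choice, not from the article):

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

# Encode the class column: Iris-setosa -> 0, Iris-versicolor -> 1, Iris-virginica -> 2
le = LabelEncoder()
df['class'] = le.fit_transform(df['class'])

X = df.drop('class', axis=1)  # independent features
Y = df['class']               # target

# 70:30 split: 70% for training, 30% for testing
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
```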

 

After splitting the data, I imported KNeighborsClassifier from sklearn. 

 

I created an object NN of KNeighborsClassifier to feed the data to the algorithm, and passed the training data using NN.fit(X_train, y_train). 
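
For example (the object name NN follows the article's wording; leaving n_neighbors at its default of 5 is my assumption, since the optimal k is tuned later):

```python
from sklearn.neighbors import KNeighborsClassifier

NN = KNeighborsClassifier()   # default n_neighbors=5; the optimal k is found later via cross-validation
NN.fit(X_train, y_train)
```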

 

(Image: training and test accuracy)

 

Once the model was trained, I predicted the classes for X_test, the remaining 30% of the data, using NN.predict(X_test) and stored the result in y_pred. Then I imported accuracy_score from sklearn.metrics to check the accuracy of the model on the test data.

 

The accuracy score the model gave was around 91%. To evaluate the model further, I used confusion_matrix, which is used to assess the performance of a classifier. 
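
A sketch of that evaluation step, continuing from the fitted NN object above:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = NN.predict(X_test)

print('Test accuracy :', accuracy_score(y_test, y_pred))
print('Train accuracy:', accuracy_score(y_train, NN.predict(X_train)))
print(confusion_matrix(y_test, y_pred))  # per-class breakdown of correct and incorrect predictions
```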

 

The test accuracy was found to be 91% and training accuracy was found to be 98%.

 

(Image: using cross-validation to find the optimal k)

 

I used cross-validation to find the optimal value of k, which turned out to be 11. 
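
One common way to do this (a sketch, not necessarily the exact code from the notebook) is to score a range of k values with cross_val_score and pick the best:

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

k_range = range(1, 31)
cv_scores = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    # 10-fold cross-validation accuracy for this value of k
    scores = cross_val_score(knn, X_train, y_train, cv=10, scoring='accuracy')
    cv_scores.append(scores.mean())

best_k = k_range[cv_scores.index(max(cv_scores))]
print('Optimal k:', best_k)   # the article reports k = 11 for this dataset
```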

 

Pros of KNN

 

  • A simple algorithm that is easy to understand.

  • Used for nonlinear data. 

  • A versatile algorithm that can be used for both classification and regression.

  • Gives high accuracy, although there are better-performing supervised algorithms.

 

Cons of KNN

 

  • Requires a large amount of storage.

  • Prediction is slow.

  • Stores all the training data.

 

To know more about how to pre-process the data you can visit here to check different techniques that are used. Also, you can find the jupyter notebook file of the above classification problem here. 

 

Conclusion

 

In this blog, I have tried to explain the K-Nearest Neighbor algorithm, which is widely used for classification. I discussed the basic approach behind KNN, how it works, the distance metric used to measure the similarity of data points, how to find the optimal value of k, and pseudo-code for KNN. I also discussed the advantages and disadvantages of using KNN. Finally, I used the KNN algorithm to classify flowers in the Iris dataset. 
