
AUC-ROC Curve Tutorial: Working and Applications

  • Utsav Mishra
  • Jun 16, 2021

“Facts are stubborn things, but statistics are pliable.”- Mark Twain

 

Introduction 

 

Statistics is the systematic study of data, which mostly comes in numerical form. It comprises many different theories and operations, and the field is full of graphs and curves that can be used to process data and present it in its final form.

 

If we have to define statistics, we will simply say that Statistics is a discipline of applied mathematics that deals with gathering, describing, analyzing, and inferring conclusions from numerical data. 

 

The mathematical theory of statistics draws substantially on differential and integral calculus, linear algebra, and probability theory. Statisticians are particularly interested in making trustworthy inferences about large groups and general phenomena from the observable features of small samples that represent only a tiny share of the group, or from a small number of instances of a general occurrence.

 

Statistics consists of different types of curves and graphs. One such curve is the AUC-ROC curve that we will mainly discuss in this blog.

 

(Related blog: Types of data in Statistics)

 

 

What is the AUC-ROC curve?

 

The Area Under the Curve (AUC) - ROC curve (Receiver Operating Characteristic curve) is a performance measure for classification problems at various threshold settings. ROC is a probability curve, while AUC indicates the degree or measure of separability: how well the model can discriminate between classes. The higher the AUC, the better the model is at predicting 0s as 0s and 1s as 1s.


 

A ROC curve is a graph that depicts a classification model's performance over all possible thresholds (a threshold is a particular value beyond which you say a point belongs to a particular class). 

 

The curve shows the relationship between two parameters-

 

  • True positive rate (TPR)

  • False positive rate (FPR)

 

For example, here is an AUC-ROC curve with TPR plotted on the y-axis and FPR on the x-axis.


[Image: AUC-ROC curve, image source]


(Must check: ANOVA Test)

 

Related Terminologies  

 

  1. True Positive: actually positive and predicted positive.

  2. True Negative: actually negative and predicted negative.

  3. False Positive (Type I Error): actually negative but predicted positive.

  4. False Negative (Type II Error): actually positive but predicted negative.

 

In simple terms, a False Positive is a false alarm, while a False Negative is a miss.

 

Let's have a look at what TPR and FPR are.

TPR = TP / (TP + FN)

FPR = FP / (TN + FP)

 

TPR (also called Recall or Sensitivity) is the proportion of positive instances that are correctly identified, whereas FPR is the proportion of negative instances that are mistakenly classified as positive.
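As a minimal sketch, with made-up confusion-matrix counts rather than real model output, these two rates can be computed directly from the four cells of a confusion matrix:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fn = 40, 10   # actual positives: predicted positive / predicted negative
fp, tn = 5, 45    # actual negatives: predicted positive / predicted negative

tpr = tp / (tp + fn)  # TPR = recall = sensitivity
fpr = fp / (tn + fp)  # FPR = false alarm rate

print(tpr, fpr)  # 0.8 0.1
```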

 

(Related blog: What is confusion matrix?)

 

As previously stated, ROC is the plot of TPR and FPR across all possible thresholds, whereas AUC is the whole area underneath this ROC curve.
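This threshold sweep can be sketched in a few lines of Python, using made-up labels and scores (not from any particular model): each candidate threshold yields one (FPR, TPR) point, and the ROC curve connects those points.

```python
# Sketch: trace ROC points by sweeping a threshold over predicted scores.
# Labels and scores below are made-up illustrative values.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.95, 0.90, 0.85, 0.81, 0.78, 0.70]

def roc_points(labels, scores):
    pos = sum(labels)              # number of actual positives
    neg = len(labels) - pos        # number of actual negatives
    points = []
    # One candidate threshold per distinct score, plus +inf for the (0, 0) corner
    for t in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        points.append((fp / neg, tp / pos))  # one (FPR, TPR) point per threshold
    return sorted(points)

print(roc_points(labels, scores))  # runs from (0.0, 0.0) up to (1.0, 1.0)
```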

 

As stated above, the ROC curve is a measure of probability, so let us look beyond its geometric aspects and focus on its probabilistic interpretation.


 

Probabilistic interpretation of AUC-ROC curve

 

To understand this, let us first consider what the curve actually does: it measures how well a model is able to distinguish between different classes.

 

An AUC of 0.75 means that if we take two data points from two different classes, there is a 75% chance that the model will correctly segregate or rank order them, i.e. the positive class has a greater prediction probability than the negative class. (A greater prediction probability indicates that the point is more likely to belong to the positive class).

 

Let us take an example.

 

The table below shows six points with their classes and predicted probabilities.

 

index    class    probability

P1       1        0.95
P2       1        0.90
P3       0        0.85
P4       0        0.81
P5       1        0.78
P6       0        0.70

 

We have 6 points, where P1, P2, and P5 belong to class 1 and P3, P4, and P6 belong to class 0, with the corresponding predicted probabilities in the probability column. As posed earlier, the question is: what is the probability that the model rank-orders two points belonging to different classes correctly?

 

We'll take all possible pairs in which one point belongs to class 1 and the other to class 0, giving a total of 9 such pairs; all 9 are listed below.

 

PAIR       Is CORRECT?

(P1,P3)    yes
(P1,P4)    yes
(P1,P6)    yes
(P2,P3)    yes
(P2,P4)    yes
(P2,P6)    yes
(P3,P5)    no
(P4,P5)    no
(P5,P6)    yes

The "Is CORRECT?" column indicates whether the pair is correctly rank-ordered based on the predicted probabilities, i.e. whether the class 1 point has a higher probability than the class 0 point. In 7 of the 9 possible pairs, class 1 is ranked higher than class 0, so there is a roughly 78 percent (7/9) chance that the model will distinguish them correctly.
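This pairwise count is easy to verify in code. The sketch below copies the probabilities and classes from the table above and counts the correctly ordered (positive, negative) pairs:

```python
# AUC as the fraction of (positive, negative) pairs ranked correctly.
# Probabilities and classes taken from the worked example above.
probs = {"P1": 0.95, "P2": 0.90, "P3": 0.85, "P4": 0.81, "P5": 0.78, "P6": 0.70}
classes = {"P1": 1, "P2": 1, "P3": 0, "P4": 0, "P5": 1, "P6": 0}

positives = [p for p in probs if classes[p] == 1]
negatives = [p for p in probs if classes[p] == 0]

# A pair is correct when the class 1 point gets the higher probability
correct = sum(1 for p in positives for n in negatives if probs[p] > probs[n])
total = len(positives) * len(negatives)
print(correct, total)  # 7 of 9 pairs correct, so AUC is about 0.78
```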

 

Now that we know both the geometric and probabilistic approach of the AUC-ROC curve, let us try to know where they are used.

 

This was an example of binary classification. But how does this curve work for multi-class classification?

 

Using the One-vs-All technique, we can draw N AUC-ROC curves for N classes in a multi-class model. For example, if you have three classes called X, Y, and Z, you'll have one ROC curve for X against Y and Z, another for Y against X and Z, and a third for Z against X and Y.
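A rough one-vs-all sketch, using hypothetical labels and per-class scores for three classes X, Y, and Z (all values made up for illustration): each class's scores are treated as a binary problem against the rest, and a pairwise-ranking AUC is computed per class.

```python
# One-vs-all AUC sketch for three hypothetical classes X, Y, Z.
labels = ["X", "Y", "Z", "X", "Z", "Y"]
scores = {                                   # made-up per-class scores;
    "X": [0.8, 0.1, 0.2, 0.7, 0.1, 0.3],     # each row of values aligns
    "Y": [0.1, 0.7, 0.2, 0.1, 0.2, 0.6],     # with the labels list
    "Z": [0.1, 0.2, 0.6, 0.2, 0.7, 0.1],
}

def pairwise_auc(y, s):
    # Fraction of (positive, negative) pairs ranked correctly; ties count half
    pos = [si for yi, si in zip(y, s) if yi == 1]
    neg = [si for yi, si in zip(y, s) if yi == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Binarize each class against the rest and score it as its own binary problem
aucs = {c: pairwise_auc([1 if l == c else 0 for l in labels], scores[c])
        for c in ["X", "Y", "Z"]}
print(aucs)
```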


 

Where are ROC Curves used?

 

  • ROC curves are used when the dataset does not have a severe class imbalance, i.e. the classes are of comparable size.

  • They work best when there are roughly equal numbers of observations for each class; with balanced classes, the curve gives a clearer and more interpretable picture of performance.

 

(Must check: Conditional Probability)

 

 

Applications of AUC-ROC curve

 

AUC-ROC curves are an important part of statistics. In most cases, ROC curves can be used to determine a threshold value; the threshold chosen will also depend on how the classifier is to be used. 

 

Thus, if the above curve were for a cancer-prediction application, you'd want to catch as many positives as possible (i.e., have a high TPR), so you'd pick a low threshold value like 0.16 even though the FPR would be rather high. 
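One common way to pick such a threshold is to maximize Youden's J statistic, TPR - FPR. The sketch below uses made-up (FPR, TPR, threshold) triples, not points from the figure above:

```python
# Sketch: choose a threshold from (FPR, TPR, threshold) triples on a ROC curve.
# The triples below are made-up illustrative values.
roc = [
    (0.00, 0.40, 0.90),
    (0.10, 0.70, 0.55),
    (0.25, 0.90, 0.30),
    (0.45, 0.97, 0.16),
    (1.00, 1.00, 0.00),
]

# Youden's J = TPR - FPR; the maximizing point balances hits against false alarms
best_fpr, best_tpr, best_thr = max(roc, key=lambda p: p[1] - p[0])
print(best_thr)
```

For a screening task where misses are costly, one might instead require a minimum TPR and accept the resulting FPR, which is why the cancer example above prefers a low threshold.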

They are widely used in machine learning with Python, generally for different types of classification tasks.

 

Some important applications where the AUC-ROC curve is used are listed here.

 

  1. Classification of 3D models

 

The curve is used to classify 3D models and separate them from non-3D ones. With the help of Python and machine learning, given a threshold, the classifier marks which models in the given classes are non-3D and separates out the 3D ones.

 

(Also read: 3D printing Technology)

 

  2. In Hospitals

 

This curve is used in hospitals for different purposes, one of which is the detection of cancer. Using the false positive and false negative rates, a model predicts with a given threshold whether a person has cancer or not; its accuracy depends on the threshold value chosen.

 

  3. In Binary Classification

 

ROC curves are commonly used in binary classification to examine a classifier's output. To extend the ROC curve and ROC area to multi-label classification, the output must be binarized. Each label can have its own ROC curve, or each cell of the label indicator matrix can be treated as a binary prediction, yielding a single micro-averaged ROC curve.
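A minimal sketch of the micro-averaging idea, with a made-up label indicator matrix and score matrix: every (sample, label) cell becomes one binary prediction, and a single AUC is computed over the flattened cells.

```python
# Micro-averaging sketch: flatten the label-indicator and score matrices.
# Both matrices below are made-up illustrative values (rows = samples).
y_true = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
y_score = [[0.9, 0.2, 0.1], [0.1, 0.8, 0.3], [0.2, 0.1, 0.7], [0.6, 0.7, 0.2]]

flat_true = [c for row in y_true for c in row]
flat_score = [c for row in y_score for c in row]

# Pairwise-ranking AUC over the flattened cells (ties count half)
pos = [s for y, s in zip(flat_true, flat_score) if y == 1]
neg = [s for y, s in zip(flat_true, flat_score) if y == 0]
wins = sum(1 for p in pos for n in neg if p > n)
ties = sum(1 for p in pos for n in neg if p == n)
micro_auc = (wins + 0.5 * ties) / (len(pos) * len(neg))
print(micro_auc)
```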

 

(Also read: What is Group theory?)

 

 

Conclusion 

 

The evaluation of multiple models against each other is a critical stage in any machine learning process. Selecting the wrong assessment measure, or failing to understand what your measure really implies, can wreak havoc on your entire system. 

 

(Recommended blog: Statistical Data Analysis)

 

The AUC-ROC curve is one such important part of both the statistical and machine learning worlds. We hope that, with a better understanding of it, we will see these curves applied more and more in the near future.
