As technology advances and connected devices proliferate, huge volumes of data come into play, and their storage and privacy become serious concerns. Attackers build algorithms to steal confidential information from these massive datasets, so the data must be handled carefully, which is also a time-consuming task.
Moreover, not all of the data is needed for inference; reducing a dataset's dimensionality makes it easier to govern, which can indirectly aid data security and privacy.
At its core, this blog dwells on dimensionality reduction techniques: it covers the concept of Linear Discriminant Analysis (LDA), the differences between LDA and PCA, and related applications.
In 1936, Ronald A. Fisher formulated the linear discriminant for the first time and demonstrated its practical use as a classifier. It was originally described for a two-class problem and later generalized as 'multi-class Linear Discriminant Analysis' or 'Multiple Discriminant Analysis' by C. R. Rao in 1948.
Linear Discriminant Analysis is one of the most commonly used dimensionality reduction techniques in supervised learning. Essentially, it is a preprocessing step for pattern classification and machine learning applications.
It projects the dataset onto a lower-dimensional space with well-separated classes of features, which reduces overfitting and computational cost.
Aiming to classify objects into one of two or more groups based on a set of parameters that describe them, LDA comes with specific functions and applications, which we will cover in detail in the coming sections.
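As a minimal sketch of LDA used as a preprocessing step, the snippet below projects scikit-learn's bundled Iris dataset (an illustrative choice, not from the text) from four features down to two discriminant axes:

```python
# LDA as a supervised dimensionality-reduction step (illustrative sketch).
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)            # 150 samples, 4 features, 3 classes
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)          # supervised: the labels y are required

print(X.shape)          # (150, 4)
print(X_reduced.shape)  # (150, 2) -- at most (n_classes - 1) components
```

Note that LDA can produce at most one fewer component than the number of classes, which is why `n_components=2` is the maximum here.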
(Suggested blog: Machine Learning Algorithms)
Under Linear Discriminant Analysis, we are essentially asking:
Which set of parameters best describes an object's membership in a group?
What is the best classification model for separating those groups?
It is widely used for modeling differences between groups, i.e., separating variables into two or more classes. Suppose we have two classes that we need to classify efficiently.
Classification of various objects before and after implementing LDA
Classes can have multiple features. Using a single feature to classify them may produce overlapping classes, so we increase the number of features in order to avoid the overlap and, in return, obtain a proper classification.
(Must Read: Top Machine Learning Tools)
Consider another simple example of dimensionality reduction and feature extraction: you want to check the quality of a soap based on information about it, including features such as its weight and volume, people's preference scores, odor, color, contrast, and so on.
A small scenario to understand the problem more clearly:
Object to be tested: soap.
Quality of the product: class category 'good' or 'bad' (the dependent variable, a categorical variable measured on a nominal scale).
Features describing the product: the various parameters that describe the soap (independent variables, measured on nominal, ordinal, or interval scales).
Pictorial view of an object, class category, and features extraction
Once the target (dependent) variable is decided, the related information can be drawn out of the existing dataset to check how effective each feature is for the target variable.
The data's dimensionality is thereby reduced, and only the important related features remain in the new dataset.
(Related reading: Clustering methods and application)
Quadratic Discriminant Analysis (QDA): each class uses its own estimate of variance (or covariance, when there are multiple input variables).
Flexible Discriminant Analysis (FDA): non-linear combinations of the inputs are used, such as splines.
Regularized Discriminant Analysis (RDA): regularization is added to the estimate of the variance (or covariance), moderating the influence of individual variables on LDA. (Source)
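To make the QDA variant concrete, here is a hedged sketch comparing LDA and QDA on scikit-learn's bundled Wine dataset (an illustrative choice, not from the text); QDA fits a separate covariance matrix per class, so its decision boundaries are quadratic rather than linear:

```python
# LDA vs. the QDA variant: one shared covariance vs. one per class (sketch).
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

X, y = load_wine(return_X_y=True)            # 178 samples, 13 features, 3 classes
lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

print(lda.score(X, y), qda.score(X, y))      # training accuracy of each model
```

For regularization in the RDA spirit, scikit-learn's `LinearDiscriminantAnalysis` also accepts a `shrinkage` parameter with the `'lsqr'` or `'eigen'` solvers.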
Moreover, the limitations of logistic regression can create demand for linear discriminant analysis.
Logistic regression is an important linear classification algorithm, but it has some limitations that motivate an alternative linear classifier:
Two-class problems: logistic regression is designed for two-class (binary) classification. It can be extended to multi-class classification, but is rarely used for that purpose.
Unstable with well-separated classes: logistic regression becomes unstable when the classes are well separated.
Unstable with few examples: logistic regression is unstable when there are only a few examples from which to estimate its parameters.
Linear Discriminant Analysis addresses all of the above points and serves as a linear method for multi-class classification problems.
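A minimal sketch of that multi-class ability, on synthetic data (an illustrative setup, not from the text): LDA models three classes directly, with no one-vs-rest wrapper of the kind a two-class logistic regression would need.

```python
# LDA applied directly to a three-class problem (illustrative sketch).
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Three well-separated Gaussian clusters in five dimensions.
X, y = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X, y)

print(clf.score(X, y))                  # accuracy on the training data
print(sorted(set(clf.predict(X))))      # all three class labels are modeled
```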
Every feature (be it a variable, dimension, or attribute) in the dataset has a Gaussian distribution, i.e., each feature has a bell-shaped curve.
Each feature has the same variance: its values vary around the mean by the same amount on average.
Each feature is assumed to be sampled randomly.
There is no multicollinearity among the independent features; as the correlation between independent features increases, the predictive power decreases.
While projecting the features from a higher-dimensional space onto a lower-dimensional space, LDA achieves this via a three-step process:
First step: compute the separability between the various classes, i.e., the distance between the means of the different classes, known as the between-class variance.
Second step: compute the distance between the mean and the samples of each class, known as the within-class variance.
Third step: construct the lower-dimensional space that maximizes the between-class variance and minimizes the within-class variance.
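The three steps above can be sketched from scratch with NumPy; the scatter-matrix names and the toy two-class data below are illustrative, not from the text:

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Project X onto the directions that best separate the classes in y."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]

    # Step 1: between-class scatter S_B (distances between class means).
    S_B = np.zeros((n_features, n_features))
    # Step 2: within-class scatter S_W (spread of samples around their class mean).
    S_W = np.zeros((n_features, n_features))
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - overall_mean).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)
        S_W += (Xc - mean_c).T @ (Xc - mean_c)

    # Step 3: directions maximizing between-class over within-class variance,
    # i.e. the top eigenvectors of inv(S_W) @ S_B.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

# Toy usage: two classes in 2-D, projected onto a single discriminant axis.
X = np.array([[1.0, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])
W = lda_projection(X, y, 1)
print((X @ W).shape)  # (6, 1)
```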
This choice of the lower-dimensional projection P, the one that maximizes the ratio of between-class variance to within-class variance, is known as Fisher's criterion.
There are various techniques for classifying data and reducing its dimensionality, among which Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are the most commonly used.
Even when the within-class frequencies are unequal, Linear Discriminant Analysis handles the data easily, and its performance can be checked on randomly distributed test data.
The method maximizes the ratio of between-class variance to within-class variance for any dataset, thereby maximizing separability.
LDA has been successfully used in many applications; wherever a problem can be transformed into a classification problem, the technique can be implemented.
For example, LDA can be used as a classification step in speech recognition, microarray data classification, face recognition, image retrieval, bioinformatics, biometrics, chemistry, and more.
(Also check: Support Vector Machine (SVM) in Machine Learning)
From the discussion above, the LDA approach is, in general, very similar to Principal Component Analysis: both are linear transformation techniques for dimensionality reduction. They differ in several ways, however:
The first difference between LDA and PCA is that PCA is used more for feature extraction, whereas LDA is used for data classification.
Under PCA, the shape and location of the original dataset change when it is transformed into another space, whereas
under LDA the transformation does not change the shape and location; LDA only provides greater class separability.
Flow chart showing the difference between LDA and PCA
PCA can be described as an unsupervised algorithm, since it ignores the class labels and focuses on finding the directions (principal components) that maximize the variance in the dataset.
In contrast, LDA is a supervised algorithm: it computes the directions (linear discriminants) that maximize the separation between multiple classes.
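A side-by-side sketch of this difference, again using the Iris dataset as an illustrative example: PCA never sees the labels `y`, while LDA requires them.

```python
# PCA (unsupervised) vs. LDA (supervised) on the same data (sketch).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
X_pca = PCA(n_components=2).fit_transform(X)                        # no labels used
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # labels required

print(X_pca.shape, X_lda.shape)  # both (150, 2), but the axes mean different things:
                                 # PCA axes maximize variance, LDA axes maximize class separation
```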
In this contribution, we have introduced the Linear Discriminant Analysis technique used for dimensionality reduction in multivariate datasets.
Recent technologies have led to the prevalence of datasets with large dimensions, huge orders, and intricate structures.
(Must read: 7 Type of Regression Techniques)
Such datasets stimulate the generalization of LDA into deeper research and development. In a nutshell, LDA provides schemes for feature extraction and dimensionality reduction.