Machine learning is one of the most talked-about fields in technology today. It is the process of giving machines the ability to learn from previous experience without active human intervention. Many computer science students and other technology enthusiasts are considering machine learning as a career, either in research or at a reputed company.
Many large, reputed companies have adopted machine learning for a wide range of purposes, and they hire only the best. So, during interviews, they often ask logical and difficult questions to test a candidate's knowledge and experience.
(Similar read: 20 Python interview questions)
In this article, we are going to discuss 20 common machine learning questions that interviewers might ask, along with their answers.
ML models are generally classified into three types:
Supervised Learning: The system learns by analysing labelled data. Before making judgements on new data, the model is trained on an existing dataset; a target variable is present. Techniques such as linear regression, polynomial regression and quadratic regression are used when the target variable is continuous, whereas logistic regression, Naive Bayes, KNN, SVM, decision trees, gradient boosting, AdaBoost, bagging, random forests, etc. are used when the target variable is categorical.
Unsupervised learning: In this type of learning, the target variable is absent. The computer is trained on raw data with no supervision. By forming clusters, it automatically infers patterns and correlations in the data. The model learns from observations and the structures it derives from the data. Some of the topics under unsupervised learning are Principal Component Analysis, Factor Analysis, Singular Value Decomposition, etc.
Reinforcement Learning: The model is taught by trial and error. This type of learning includes an agent interacting with the environment to generate actions and then discovering mistakes or rewards from those actions.
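As a tiny illustration of supervised learning with a continuous target, here is a minimal least-squares linear regression in plain Python; the data points are invented for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept in one dimension."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Labelled training data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)   # -> 2.0 1.0
```

Because the target here is continuous, the model recovers the line that generated the labels; a categorical target would call for a classifier such as logistic regression instead.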
(Must read: Top 8 Machine learning Models)
Data mining is the process of extracting knowledge or patterns from data, which is often unstructured. Machine learning, by contrast, is about studying, designing and developing algorithms that give computers the ability to learn from experience without being explicitly programmed. Machine learning algorithms are often used within data mining.
Machine learning was introduced to make human life easier. Through machine learning, we can teach machines to learn from the data provided by humans so that they can solve problems on their own.
In his 1950 paper, Alan Turing discussed how human behaviour could be imitated by a machine and introduced a concept known as "The Imitation Game". (For more information: A. M. Turing (1950) Computing Machinery and Intelligence)
Overfitting happens in machine learning when a statistical model captures random error or noise rather than the underlying relationship. It is commonly seen when a model is overly complicated, with too many parameters relative to the amount of training data. An overfitted model performs poorly on new data.
Overfitting occurs because the criterion used to train the model is not the same as the criterion used to assess it.
(Read also: Underfitting and Overfitting in Machine Learning)
Overfitting can be avoided by using more data; it tends to occur when the machine learns from a small dataset. When working with small datasets, one should use cross validation, which splits the dataset into two parts: a training set and a testing set. The training set is used to fit the model to the data points, while the testing set is used only to evaluate it.
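The splitting idea behind k-fold cross validation can be sketched in plain Python; the fold count and the toy data below are placeholders chosen for illustration:

```python
def k_fold_splits(data, k=5):
    """Yield (train, test) partitions: each fold is the test set exactly once."""
    fold_size = len(data) // k
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        yield train, test

points = list(range(10))
for train, test in k_fold_splits(points, k=5):
    # every point appears in exactly one test fold, the rest go to training
    assert set(train) | set(test) == set(points)
```

In practice, a model is fitted on each training partition and scored on the matching test partition, and the k scores are averaged to estimate generalization.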
(Must read: 5 Machine Learning Techniques to Solve Overfitting)
The Naive Bayes algorithm is called ‘Naive’ as it makes assumptions by applying Bayes’ theorem that all attributes are independent of each other. It is a supervised learning algorithm.
As described on InterviewBit:
Given a class variable y and a dependent feature vector x1 through xn, Bayes' theorem states the following relationship:
P(y | x1, ..., xn) = P(y) P(x1, ..., xn | y) / P(x1, ..., xn)
Using the naive conditional independence assumption that each xi is independent of the others given the class:
P(xi | y, x1, ..., xi-1, xi+1, ..., xn) = P(xi | y)
this relationship simplifies, for every i, to:
P(y | x1, ..., xn) = P(y) ∏(i=1..n) P(xi | y) / P(x1, ..., xn)
Since P(x1, ..., xn) is constant given the input, we can use the following classification rule:
P(y | x1, ..., xn) ∝ P(y) ∏(i=1..n) P(xi | y)
ŷ = arg max over y of P(y) ∏(i=1..n) P(xi | y)
Maximum A Posteriori (MAP) estimation may be used to estimate P(y) and P(xi | y), where the former is the relative frequency of class y in the training set.
Because the Naive Bayes classifier converges faster than discriminative models such as logistic regression, less training data is required. Its primary limitation is that it cannot learn how features interact with one another.
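The classification rule above can be sketched directly in plain Python for categorical features. This is a minimal illustration (no smoothing, invented toy weather data), not a production implementation:

```python
from collections import Counter, defaultdict

def train_naive_bayes(X, y):
    """Estimate P(class) and P(feature value | class) by relative frequency."""
    priors = Counter(y)
    likelihoods = defaultdict(Counter)   # (feature index, class) -> value counts
    for features, label in zip(X, y):
        for i, value in enumerate(features):
            likelihoods[(i, label)][value] += 1
    return priors, likelihoods

def predict(priors, likelihoods, features):
    """Pick the class maximizing P(y) * product of P(x_i | y)."""
    best_class, best_score = None, -1.0
    total = sum(priors.values())
    for label, count in priors.items():
        score = count / total                        # the prior P(y)
        for i, value in enumerate(features):
            score *= likelihoods[(i, label)][value] / count   # P(x_i | y)
        if score > best_score:
            best_class, best_score = label, score
    return best_class

# Toy data: (outlook, wind) -> play?
X = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
y = ["yes", "no", "yes", "no"]
priors, likelihoods = train_naive_bayes(X, y)
print(predict(priors, likelihoods, ("sunny", "weak")))   # -> yes
```

Note that each feature contributes an independent factor P(xi | y), which is exactly the "naive" assumption; the model never looks at combinations of features.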
5 popular Machine Learning Algorithms are:
Decision Trees
Neural Networks (back propagation)
Probabilistic networks
Nearest Neighbor
Support vector machines
A Support Vector Machine (SVM) is an efficient and adaptable supervised machine learning model that can perform linear or nonlinear classification, regression, and even outlier detection.
For example, consider some data points belonging to two classes. The goal is to separate the two classes on the basis of a set of examples. In SVM, each data point is treated as a p-dimensional vector, and we try to separate the classes with a (p-1)-dimensional hyperplane.
The functions of supervised learning are:
Classification
Speech recognition
Regression
Predict time series
Annotate strings
The functions of unsupervised learning are:
Find clusters of the data
Find low-dimensional representations of the data
Find interesting directions in data
Find interesting coordinates and correlations
Find novel observations/ database cleaning
(Related blog: Introduction to Machine Learning: Supervised, Unsupervised and Reinforcement Learning)
Machine learning deals with algorithms that study and learn from data and experience, and apply that learning to decision making. Deep learning, in turn, gives machines the ability to learn by processing data through layered networks and identifying patterns in it. Machine learning algorithms generally require structured data, whereas deep learning networks rely on layers of artificial neural networks.
(Recommended read: Machine Learning vs Deep Learning)
Machine learning is about designing and developing algorithms that learn from experience based on empirical data, while artificial intelligence also covers topics such as knowledge representation, natural language processing and planning. Machine learning is a subfield of artificial intelligence.
To search for important variables, one first needs to identify correlated variables. Important variables can then be selected on the basis of p-values from linear regression, or by forward, backward and stepwise selection. Other approaches include Lasso regression, random forest variable importance and plotting variable charts.
We can use the bagging method to handle models with high variance. The bagging algorithm creates subgroups of the data by sampling at random with replacement. Each random sample is then used to train a model, and the predictions of all the models are aggregated by voting.
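The bootstrap-and-vote procedure can be sketched with a deliberately weak base learner, a one-dimensional "decision stump"; the data and the model count are toy values chosen for illustration:

```python
import random

def train_stump(sample):
    """Fit a 1-D threshold classifier: predict 1 if x >= t, else 0."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in sample:                    # candidate thresholds: the x values seen
        acc = sum((x >= t) == bool(label) for x, label in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagging_predict(data, x, n_models=25, seed=0):
    """Train each stump on a bootstrap sample, then combine by majority vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in range(len(data))]  # sample with replacement
        votes += int(x >= train_stump(sample))
    return int(votes > n_models / 2)

# Toy labelled data: small x -> class 0, large x -> class 1
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
```

Because each stump sees a slightly different bootstrap sample, the individual thresholds vary; the majority vote averages out that variance, which is the point of bagging.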
(Must learn: Predictive Analytics and statistical data models)
Bias in ML refers to systematic inconsistency in the data or model that can arise for several reasons, which may not be mutually exclusive. For example, a digital behemoth like Amazon, in order to accelerate its recruiting process, built an engine that would take in, say, 100 applications, surface the best five candidates, and hire from those; the engine turned out to be biased because it learned from historical hiring data.
Genetic programming is one of the techniques used in machine learning. The approach generates a collection of candidate solutions, tests them, and keeps the best ones, in a process modelled on natural selection.
Inductive Logic Programming (ILP) is a machine learning discipline that uses logical programming to express prior information and instances.
Ensemble learning is a method for combining numerous machine learning models in order to generate more powerful models.
Sequential ensemble methods and Parallel ensemble methods are the two paradigms of ensemble methods.
(Recommended read: What is AdaBoost Algorithm in Ensemble Learning?)
Random forest is a flexible machine learning approach that can handle both regression and classification tasks. The algorithm works by merging a number of individual tree models. Random forest constructs each tree from a random sample of the training data, considering a random subset of the feature columns at each split.
The steps to create a tree using a random forest are:
Draw a random sample of the chosen size from the training data.
Begin with just one node.
From the start node, execute the following algorithm:
Stop if the number of observations is fewer than the minimum node size.
Choose a subset of variables at random.
Determine which variable does the "best" job of splitting the data.
Split the observations into two nodes.
Repeat these steps on each of the two new nodes.
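The steps above can be sketched as a recursive tree builder plus a bootstrap loop. This is a minimal illustration in plain Python with invented toy data; the node-size and variables-per-split settings are assumptions, and real implementations add many refinements:

```python
import random

MIN_NODE_SIZE = 2   # assumed stopping threshold
N_TRY = 1           # assumed number of variables sampled at each node

def gini(rows):
    """Gini impurity of the labels in rows (each row is (features, label))."""
    labels = [label for _, label in rows]
    return 1.0 - sum((labels.count(c) / len(labels)) ** 2 for c in set(labels))

def build_tree(rows, rng):
    labels = [label for _, label in rows]
    # Stop if the node is too small or already pure: return a majority-label leaf.
    if len(rows) < MIN_NODE_SIZE or gini(rows) == 0.0:
        return max(set(labels), key=labels.count)
    candidates = rng.sample(range(len(rows[0][0])), N_TRY)   # random variable subset
    best = None
    for f in candidates:
        for t in {features[f] for features, _ in rows}:      # candidate split points
            left = [r for r in rows if r[0][f] < t]
            right = [r for r in rows if r[0][f] >= t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:              # lowest weighted impurity
                best = (score, f, t, left, right)
    if best is None:                                         # no usable split found
        return max(set(labels), key=labels.count)
    _, f, t, left, right = best
    return (f, t, build_tree(left, rng), build_tree(right, rng))

def predict_tree(node, features):
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if features[f] < t else right
    return node

def random_forest(rows, n_trees=5, seed=0):
    """One bootstrap sample of the training rows per tree."""
    rng = random.Random(seed)
    return [build_tree([rng.choice(rows) for _ in rows], rng) for _ in range(n_trees)]

def predict_forest(trees, features):
    votes = [predict_tree(t, features) for t in trees]
    return max(set(votes), key=votes.count)                  # majority vote

# Toy data where both features separate the classes perfectly
rows = [((0, 0), "a"), ((0, 0), "a"), ((1, 1), "b"), ((1, 1), "b")]
```

Each tree is trained on its own bootstrap sample and sees only a random subset of variables at each node; the forest's prediction is the majority vote over the trees.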
To select the K for K-means clustering, one can use either statistical testing methods or direct methods. Statistical testing methods include the gap statistic, while direct methods include the elbow and silhouette methods. The silhouette is the most commonly used when determining the best value of k.
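The elbow method can be illustrated with a bare-bones 1-D k-means in plain Python: run k-means for several values of k and watch where the within-cluster sum of squares (inertia) stops dropping sharply. The toy data below has two obvious clusters, so the "elbow" appears at k = 2:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain 1-D k-means: assign points to the nearest centre, recompute centres."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Empty clusters keep their old centre.
        centers = [sum(c) / len(c) if c else centers[j] for j, c in enumerate(clusters)]
    return centers

def inertia(points, centers):
    """Total within-cluster sum of squared distances."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

points = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
for k in (1, 2, 3):
    print(k, round(inertia(points, kmeans(points, k)), 2))
```

Inertia falls dramatically from k = 1 to k = 2 and barely improves afterwards; that bend in the curve is the elbow, and it suggests k = 2 for this data.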
A recommendation engine is a system that predicts users' interests and recommends goods that are very likely to be of interest to them.
Data for recommender systems comes from explicit user ratings after watching a movie or listening to music, from implicit data such as search engine queries and purchase histories, or from other information about the users and items themselves.
To crack an interview for any machine learning job, one must have deep knowledge of the subject along with hands-on practical experience.
In this article we have discussed 20 common questions that an interviewer might ask. But one must prepare beyond these questions and have a strong grasp of all the details in order to land the job.