
Introduction to Bayesian Statistics

  • Riya Kumari
  • Dec 11, 2020
  • Updated on: Mar 18, 2021



Bayesian statistics is a mathematical approach to calculating probabilities in which inferences are subjective and are updated as new data is added.


This contrasts with classical or frequentist statistics, where probability is computed from the frequency of a specific random event over a long run of repeated trials, and where inferences are meant to be objective. Statistical inference is the process of drawing conclusions about large datasets by analyzing a small sample of the data. For this, data experts


  • First analyze the sample data and draw a conclusion; this is known as the prior inference.
  • Then examine another sample of data and revise the conclusion; this revised information is known as the posterior inference.
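The two steps above can be sketched with a minimal Beta-Binomial update for a coin's heads probability (the sample counts below are invented purely for illustration):

```python
# Prior -> posterior updating for a coin's heads probability.
# All sample counts are made up for illustration.

def update_beta(alpha, beta, heads, tails):
    """Conjugate update: Beta(alpha, beta) prior plus Binomial data."""
    return alpha + heads, beta + tails

# Start from a uniform prior belief: Beta(1, 1).
alpha, beta = 1, 1

# First sample (6 heads, 4 tails) gives the "prior inference" for the next step.
alpha, beta = update_beta(alpha, beta, heads=6, tails=4)

# Second sample (3 heads, 7 tails) revises it into the "posterior inference".
alpha, beta = update_beta(alpha, beta, heads=3, tails=7)

posterior_mean = alpha / (alpha + beta)
print(alpha, beta, round(posterior_mean, 3))  # -> 10 12 0.455
```

Each round of data simply folds into the running belief, which is the iterative character the article describes.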


The term “Bayesian statistics” is named after the 18th-century mathematician Thomas Bayes, who was keenly interested in probability as a measure of one’s belief in a certain hypothesis.


Despite its 18th-century roots, it became more popular in the mid-20th century across a number of applications: animal breeding in the 1950s, educational measurement in the 1960s and 1970s, spatial statistics in the 1980s, and political science and marketing in the 1990s.


More specifically, the iterative nature of Bayesian statistics is what makes it so useful: it allows data experts to make more precise predictions. Today, Bayesian statistics plays a significant role in the effective execution of machine learning algorithms, as it gives data experts the flexibility to work with big data.


Bayesian models are employed in many industries, including financial forecasting, time series analysis, weather forecasting, medical research, and information technology.



What is statistics?


In simple terms, statistics is raw data organized into numerical summaries or tables. It makes a body of information much easier to reason about. It is the branch of science that examines data and then uses it to tackle various kinds of problems associated with that data.


Many people consider statistics to be a mathematical science. It helps in reading and comprehending information in a very straightforward way. Its branches and applications include applied statistics, theoretical statistics, mathematical statistics, machine learning and data mining, and statistics applied to mathematics.


A few places where you can find reliable statistics are Statista, Nation Master, Dyytum, Google Public Data, Gallup, Gapminder, Freebase, and several more.


(Must read: Importance of Statistics for Data Science)



Frequentist Statistics


Frequentist statistics is a technique used to check whether a given event (or hypothesis) will happen or not. It calculates the probability of an event occurring in the long run of an experiment, meaning the experiment is repeated many times under unchanged conditions.


A predetermined sample size is used for the experiment, and the experiment is, in theory, conducted many times over.


For example,


  • An experiment can be designed so that it stops as soon as it has been repeated 1000 times.
  • Or so that it stops once a minimum of 300 heads has been seen in a coin toss.


However, it is impossible to conduct an experiment an infinite number of times, so in practice the experiment is run with a fixed stopping condition. These limitations of frequentist statistics lead to the need for Bayesian statistics.
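The frequentist idea of probability as long-run frequency can be simulated directly. Here is a small sketch (the coin bias, run count, and stopping rule are taken from the examples above or invented for illustration):

```python
# Frequentist probability as long-run frequency: repeat a fixed-size
# coin-toss experiment many times and count how often it produces at
# least 300 heads out of 1000 tosses. Numbers are illustrative.
import random

random.seed(0)

def run_experiment(tosses=1000):
    """Count heads in a fixed number of fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(tosses))

runs = 200
at_least_300 = sum(run_experiment() >= 300 for _ in range(runs))

# With a fair coin, 1000 tosses essentially always yield >= 300 heads,
# so the long-run frequency of this event is (practically) 1.
print(at_least_300 / runs)
```

The key frequentist move is the fixed design: the number of tosses and the stopping condition are decided before any data is seen.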


Bayesian Statistics


Bayesian statistics is a particular approach to applying probability to statistical problems. It provides us with mathematical tools to update our beliefs about random events in light of new data or evidence about those events. Specifically,


Bayesian inference interprets probability as a measure of the plausibility or confidence that an individual may have about the occurrence of a particular event. It gives us mathematical tools to rationally update our subjective beliefs in light of new data or evidence.


We may have a prior belief about an event, but our beliefs are likely to change when new evidence comes to light. Moreover,


  • It is used in most scientific fields to determine the results of an experiment, whether that be particle physics or drug effectiveness.

  • It is also used in machine learning and artificial intelligence, for example to predict which article or Netflix show you would want to watch next.

  • Bayesian statistics gives us a solid mathematical method for combining our prior beliefs with evidence to produce new posterior beliefs. This stands in contrast to another type of statistical inference, known as classical or frequentist statistics, which assumes that probabilities are the frequencies of particular random events occurring in a long run of repeated trials.

  • Frequentist statistics assumes that probabilities are the long-run frequencies of random events in repeated trials.


When doing statistical inference, that is, inferring statistical conclusions from probabilistic systems, the two methodologies - frequentist and Bayesian - follow different philosophies.

Frequentist statistics tries to eliminate uncertainty by providing estimates. Bayesian statistics tries to preserve and refine uncertainty by adjusting individual beliefs in light of new evidence.
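This contrast can be sketched numerically: a frequentist point estimate versus a Bayesian posterior that retains uncertainty. The data and the prior below are invented for illustration:

```python
# Frequentist point estimate vs Bayesian posterior for a coin's heads
# probability. Sample counts and the prior are invented for illustration.
heads, tails = 8, 2

# Frequentist: a single point estimate (the sample frequency).
freq_estimate = heads / (heads + tails)

# Bayesian: combine a Beta(2, 2) prior belief with the same data to get
# a full posterior distribution, here Beta(10, 4), summarized by its
# mean and variance (the variance quantifies the remaining uncertainty).
a, b = 2 + heads, 2 + tails
post_mean = a / (a + b)
post_var = (a * b) / ((a + b) ** 2 * (a + b + 1))

print(freq_estimate)          # -> 0.8
print(round(post_mean, 3))    # -> 0.714
print(round(post_var, 4))     # -> 0.0136
```

The frequentist answer is one number; the Bayesian answer is a whole distribution whose spread expresses how unsure we still are.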


(Most related: Statistical data analysis)


Uses of Bayesian statistics


While most machine learning algorithms attempt to predict outcomes from enormous datasets, the Bayesian approach is useful for several classes of problems that aren't easily solved with other probability models. Specifically:


  • Datasets with only a few data points for reference,

  • Models with strong prior intuitions from previous observations,

  • Data with high levels of uncertainty, or when it is necessary to quantify the level of uncertainty across a whole model or to compare different models.


It also helps when a model yields a null result, yet it is necessary to say something about the probability of the alternative hypothesis.
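The small-data case above is where the prior earns its keep. A minimal sketch (all counts and the prior pseudo-counts are invented for illustration):

```python
# With only 3 observations, a frequentist estimate overfits the sample,
# while a strong prior from past observations keeps the Bayesian
# estimate sensible. All numbers are invented for illustration.
successes, failures = 3, 0            # tiny dataset: 3 successes, 0 failures

# Frequentist maximum-likelihood estimate: certain success, which is rash.
mle = successes / (successes + failures)

# Strong prior belief of roughly a 50% rate, encoded as Beta(20, 20).
a, b = 20 + successes, 20 + failures
bayes_mean = a / (a + b)

print(mle)                    # -> 1.0
print(round(bayes_mean, 3))   # -> 0.535
```

Three lucky observations barely move the strong prior, which is exactly the behavior you want when data is scarce and intuition from past observations is solid.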


Bayesian Statistics in Machine Learning


Bayesian methods help various machine learning algorithms extract crucial information from small datasets and handle missing data. They play a significant role in a vast range of areas, from game development to drug discovery.


Bayesian methods enable the estimation of uncertainty in predictions, which proves crucial for fields like medicine. They also help save time and money by allowing deep learning models to be compressed a hundredfold and by automatically tuning hyperparameters.
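One common way to express that uncertainty is a credible interval drawn from the posterior, rather than a single predicted number. A hedged sketch (the posterior parameters here are invented):

```python
# Quantifying prediction uncertainty: sample from a Beta(10, 4)
# posterior and report a central 95% credible interval instead of a
# single point estimate. The posterior parameters are illustrative.
import random

random.seed(1)
a, b = 10, 4

samples = sorted(random.betavariate(a, b) for _ in range(10_000))
lo, hi = samples[249], samples[9749]   # 2.5th and 97.5th percentiles

# The interval (roughly 0.46 to 0.90 for this posterior) communicates
# how far off the point estimate could plausibly be.
print(round(lo, 2), round(hi, 2))
```

In a medical setting, reporting "between 0.46 and 0.90 with 95% credibility" is far more actionable than a bare 0.71.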


That said, this statistical approach is not directly applicable to every deep learning technique. However, it affects three key fields of machine learning:


  1. Statistical Inference- It uses Bayesian probability to summarize the evidence for the likelihood of a prediction.
  2. Statistical Modeling- It supports several models by specifying the prior distribution of any unknown parameters.
  3. Experiment Design- By including the idea of "prior belief influence," this strategy uses sequential experiments to factor the results of earlier experiments into the design of new ones. These "beliefs" are updated through prior and posterior distributions.
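The first item, summarizing evidence via Bayesian probability, is just Bayes' theorem in code. A minimal sketch (the test sensitivity, false-positive rate, and base rate below are invented for illustration):

```python
# Bayes' theorem summarizing the evidence for a hypothesis H given
# evidence E. All the rates below are invented for illustration.

def bayes(prior, likelihood, false_positive_rate):
    """P(H | E) from P(H), P(E | H) and P(E | not H)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A diagnostic test that is 99% sensitive with a 5% false-positive
# rate, applied to a condition with a 1% base rate.
posterior = bayes(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # -> 0.167
```

Despite the "99% accurate" test, the posterior belief is only about 17%, because the prior (the 1% base rate) weighs heavily; this is the kind of evidence-summarizing the bullet refers to.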



Frequentist Statistics vs Bayesian Statistics



Bayesian inference | Frequentist inference
It uses probabilities for both hypotheses and data. | It does not use or produce probabilities of a hypothesis, i.e. no prior or posterior.
It relies on the prior and the likelihood of observed data. | It relies only on the likelihood, for both observed and unobserved data.
It requires the analyst to learn or specify a subjective prior. | It never requires a prior.
It dominated statistical practice before the 20th century. | It dominated statistical practice during the 20th century.
It is computationally intensive due to integration over many parameters. | It tends to be computationally less intensive.


Frequentist vs Bayesian Statistics, Image: XKCD



In conclusion, Bayesian statistics is a method that assigns a "degree of belief," or Bayesian probability, to conventional statistical modeling. In this interpretation of statistics, probability is computed as the reasonable expectation of an event occurring based on currently known information.


Put differently, probability is a dynamic quantity that can change as new data is gathered, rather than a fixed value based on frequency alone.


(Also read: What is an Algorithm? Types, Applications, and Characteristics)


Bayesian statistics provides us with the mathematical machinery to rationally update our subjective beliefs in light of new evidence.
