One major aspect of data science is identifying patterns in data and predicting future scenarios from those patterns. Prediction and probability are therefore central to the field. A probability distribution describes the likelihood of an event under given circumstances, which makes it an important tool in data science. To identify patterns in data and draw statistical inferences, a data scientist should be well versed in the probability distributions of the data.
Binomial Probability Distribution
The binomial distribution is a probability distribution that describes the likelihood of obtaining a given number of successes across a fixed number of independent trials, under a given set of parameters. Its key assumptions are that each trial has exactly two mutually exclusive outcomes and that the probability of success is the same in every trial.
It is represented by the following formula:
P(X = x) = C(n, x) · p^x · q^(n−x)
where:
n = number of trials,
x = 0, 1, 2, …, n (number of successes),
p = probability of success in a single trial,
q = 1 − p, the probability of failure in a single trial,
C(n, x) = the number of combinations of n items taken x at a time.
P(X = x) gives the probability of exactly x successes in n binomial trials.
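The formula above can be sketched directly in Python using the standard library's `math.comb` for the combination term (the function name `binomial_pmf` and the coin-toss numbers are illustrative, not from the original text):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x): probability of exactly x successes in n trials,
    each with success probability p."""
    q = 1 - p  # probability of failure in a single trial
    return comb(n, x) * p**x * q**(n - x)

# Example: probability of exactly 3 heads in 5 tosses of a fair coin
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```

Summing the PMF over x = 0, …, n always yields 1, which is a quick sanity check on any implementation.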
It is a common discrete distribution because each trial counts only two states: success (represented by 1) and failure (represented by 0). Binomial distributions are often used in data science as building blocks for models of dichotomous outcome variables, in scenarios like pass/fail, win/loss, or heads/tails.
A binomial probability distribution follows these rules:
- Each trial is independent.
- Each trial has only two possible outcomes.
- The number of trials is fixed in advance.
- The probability of each outcome remains the same across all trials.
A special case of the binomial distribution is the Bernoulli distribution, which describes a single trial (n = 1); conversely, a binomial random variable is the sum of a series of independent and identically distributed Bernoulli trials. A Bernoulli trial is a random experiment with only two possible outcomes, success and failure. One example is a single flip of a fair coin, where the only two values are heads and tails and each occurs with the same probability.
Explained with examples
A simple example of a binomial probability distribution is tossing a coin a specified number of times. There are only two possible outcomes of each toss, a head or a tail. Each toss is also a discrete event that is independent of the previous tosses.
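The coin-toss example can be made concrete by computing the full distribution for a fixed number of tosses (the choice of 10 tosses is illustrative):

```python
from math import comb

n, p = 10, 0.5  # 10 tosses of a fair coin
# Probability of exactly x heads, for every x from 0 to n
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

# The most likely count of heads is n/2 = 5
print(max(range(n + 1), key=lambda x: pmf[x]))  # 5
print(round(pmf[5], 4))  # 0.2461
```

Even the most likely outcome (5 heads) occurs less than a quarter of the time, which illustrates how the probability mass spreads across all n + 1 possible counts.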
Another example is the purchase of a lottery ticket, which has two possible outcomes: the person wins money or they do not. Thus the outcome is either a success or a failure. Each purchase of a ticket is an independent event, and the outcome of a new ticket is independent of the outcomes of previous tickets.
Nevertheless, the binomial distribution has certain limitations. When the number of trials is very large but the probability of success is small, computing binomial probabilities directly becomes impractical; in this regime the distribution is commonly approximated by the Poisson distribution with rate λ = np.
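A quick sketch of the large-n, small-p regime, comparing the exact binomial probability with its Poisson approximation (the values n = 1000 and p = 0.003 are illustrative assumptions):

```python
from math import comb, exp, factorial

n, p = 1000, 0.003   # many trials, small success probability
lam = n * p          # Poisson rate λ = np = 3.0

x = 2  # probability of exactly 2 successes
binom = comb(n, x) * p**x * (1 - p)**(n - x)
poisson = exp(-lam) * lam**x / factorial(x)

# The two probabilities agree to roughly three decimal places
print(round(binom, 4), round(poisson, 4))
```

The approximation avoids the huge combination and power terms that make the exact formula numerically awkward when n is large.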
Applications of the Binomial Probability Distribution
The binomial probability distribution finds wide application in pharmaceuticals. When pharmaceutical research organizations test a new medicine, whether it works for a particular patient has two discrete outcomes: it works on them or it does not. The outcome for each individual patient is an independent discrete event.
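For a trial of this kind, a natural question is the probability that at least some minimum number of patients respond. A minimal sketch, assuming a hypothetical 70% response rate and 10 patients (both numbers are illustrative, not from the original text):

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k): probability that at least k of n patients respond,
    each independently with response probability p."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

# Chance that at least 8 of 10 patients respond to a drug with a 70% response rate
print(round(prob_at_least(8, 10, 0.7), 4))  # 0.3828
```

Tail probabilities like this are exactly what trial designers use to size studies and set success thresholds.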
Probability distributions are also used to analyze business problems involving risk and uncertainty, and are highly useful in special cases like financial fraud detection. They are an important tool in business decision making: knowledge of probability distributions can help companies improve profits, reduce costs, and mitigate business risks.
Performing statistical analysis on a dataset is very important, as it can yield valuable insights into the data. Data scientists who skip this analysis risk drawing wrong or inaccurate conclusions.