What Is Prior Probability?

Prior probability, in Bayesian statistical inference, is the probability of an event before new data is collected. It is the best rational assessment of the probability of an outcome, based on current knowledge, before an experiment is performed.

Prior Probability Explained

The prior probability of an event will be revised as new data or information becomes available, to produce a more accurate measure of a potential outcome. That revised probability becomes the posterior probability and is calculated using Bayes' theorem. In statistical terms, the posterior probability is the probability of event A occurring given that event B has occurred.

For example, three acres of land are labeled A, B, and C. One acre has reserves of oil below its surface, while the other two do not. The prior probability of oil being found on acre C is one third, or 0.333. But if a drilling test is conducted on acre B and the results indicate that no oil is present at that location, then the posterior probability of oil being found on acre A or acre C becomes 0.5, since each of the two remaining acres now has a one-in-two chance.
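As a minimal sketch, the arithmetic behind that update can be written out in Python; the dictionary keys and variable names below are purely illustrative.

```python
# Prior: one of the three acres (A, B, C) holds oil, and with no reason
# to favor any acre, each gets probability 1/3.
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# New evidence: a drilling test on acre B finds no oil, so B is ruled out.
# Keep the remaining acres and renormalize their probability mass.
remaining = {acre: p for acre, p in prior.items() if acre != "B"}
total = sum(remaining.values())
posterior = {acre: p / total for acre, p in remaining.items()}

print(posterior)  # {'A': 0.5, 'C': 0.5}
```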

Bayes' theorem is a fundamental result that is widely used in data mining and machine learning.

$$
\begin{aligned}
&P(A\mid B) = \frac{P(A\cap B)}{P(B)} = \frac{P(A)\times P(B\mid A)}{P(B)}\\
&\textbf{where:}\\
&P(A) = \text{the prior probability of }A\text{ occurring}\\
&P(A\mid B) = \text{the conditional probability of }A\text{ given that }B\text{ occurs}\\
&P(B\mid A) = \text{the conditional probability of }B\text{ given that }A\text{ occurs}\\
&P(B) = \text{the probability of }B\text{ occurring}
\end{aligned}
$$
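The formula translates directly into code. The sketch below plugs in numbers from the oil example, assuming a perfect drilling test: A is "acre C holds oil" and B is "the test on acre B finds no oil"; the function name is only illustrative.

```python
def bayes_posterior(prior_a: float, likelihood_b_given_a: float, prob_b: float) -> float:
    """Return P(A|B) = P(A) * P(B|A) / P(B), per Bayes' theorem."""
    return prior_a * likelihood_b_given_a / prob_b

# P(A) = 1/3 (prior that acre C holds the oil)
# P(B|A) = 1   (if C has the oil, the test on B certainly finds none)
# P(B) = 2/3   (the oil is on B with probability 1/3, so a no-oil result has probability 2/3)
print(bayes_posterior(1 / 3, 1.0, 2 / 3))  # 0.5
```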

If we are interested in the probability of an event about which we have prior observations, we call this the prior probability. We'll deem this event A, and its probability P(A). If there is a second event that affects P(A), which we'll call event B, then we want to know the probability of A given that B has occurred. In probabilistic notation, this is P(A|B), known as the posterior probability or revised probability, because it is assessed after the new event, hence the "post" in posterior. In this way, Bayes' theorem lets us update our prior beliefs with new information.
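To make that updating process concrete, here is a short sketch in which the posterior from one piece of evidence becomes the prior for the next. The hypothesis H, the starting prior of 0.30, and the likelihoods are hypothetical numbers chosen only for illustration.

```python
def update(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """One Bayesian update: return P(H | evidence) from P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical scenario: start with a 30% prior that hypothesis H is true,
# then observe two independent pieces of evidence, each 4x more likely under H.
belief = 0.30
for _ in range(2):
    belief = update(belief, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
    print(round(belief, 3))
# Prints ~0.632 after the first update and ~0.873 after the second:
# each posterior serves as the prior for the next round of evidence.
```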