## What is Conditional Probability?

Probability is the branch of mathematics that measures, in numerical terms, the chance that an event will or will not occur. A probability always lies between 0 and 1: the more likely an event is to occur, the closer its probability is to 1. A probability of exactly 1 means the event is certain to happen under all considered circumstances, while a probability of exactly 0 means the event can never occur.

Now, conditional probability is the probability that one event occurs given that one or more related events have occurred.

In other words, conditional probability is the likelihood of an event occurring based on the occurrence of a previous event or outcome. It is calculated by multiplying the probability of the preceding event by the updated, or conditional, probability of the succeeding event.

Mathematically, the conditional probability is expressed as follows-

$\text{P}\left(\text{A|B}\right)=\frac{\text{P}\left(\text{A}\cap \text{B}\right)}{\text{P}\left(\text{B}\right)}$

In the above expression, the left-hand side denotes the probability of event A occurring given that event B has already occurred; it is read as "the probability of A given B". If the probability of A given B equals the probability of A itself, then the events A and B are said to be independent of each other.

In such a case, it is written as-

$\text{P}\left(\text{A|B}\right)=\text{P}\left(\text{A}\right)$

Hence, if A is independent of B, then the probability that A and B occur together is the product of their individual probabilities, that is, $\text{P}\left(\text{A}\cap \text{B}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B}\right)$.

In this case, A and B are independent of each other.
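The defining formula can be evaluated directly in code. Below is a minimal sketch; the single-die example is our own illustration, not part of the text.

```python
# A minimal sketch of the defining formula P(A|B) = P(A ∩ B) / P(B).
# The die example below is an assumed illustration.
from fractions import Fraction

def conditional_probability(p_a_and_b, p_b):
    """P(A|B) = P(A ∩ B) / P(B); defined only when P(B) > 0."""
    if p_b == 0:
        raise ValueError("P(B) must be positive")
    return p_a_and_b / p_b

# One fair die: A = "roll is even", B = "roll is greater than 3".
# A ∩ B = {4, 6}, so P(A ∩ B) = 2/6, and P(B) = 3/6.
p_a_given_b = conditional_probability(Fraction(2, 6), Fraction(3, 6))
print(p_a_given_b)  # 2/3
```

Here knowing that the roll exceeds 3 raises the chance of an even roll from 1/2 to 2/3, so these two events are not independent.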

## What is Multiplication Theorem?

This theorem states that, for any two events A and B, the following expressions hold-

$\begin{array}{l}\text{P}\left(\text{A}\cap \text{B}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right),\text{ given P}\left(\text{A}\right)>0\\ \text{P}\left(\text{A}\cap \text{B}\right)=\text{P}\left(\text{B}\right).\text{P}\left(\text{A|B}\right),\text{ given P}\left(\text{B}\right)>0\end{array}$

Here, the conditional probability that event B will occur when event A has already happened is given by P(B|A), and the conditional probability that event A will occur when event B has already happened is given by P(A|B).

### Proof

Assuming all outcomes in the sample space S are equally likely, we have

$\begin{array}{l}\text{P}\left(\text{A}\right)=\frac{\text{n}\left(\text{A}\right)}{\text{n}\left(\text{S}\right)}\text{, and P}\left(\text{B}\right)=\frac{\text{n}\left(\text{B}\right)}{\text{n}\left(\text{S}\right)}\\ \text{Also, P}\left(\text{A}\cap \text{B}\right)=\frac{\text{n}\left(\text{A}\cap \text{B}\right)}{\text{n}\left(\text{S}\right)}\\ \text{Therefore,}\\ \text{P}\left(\text{A|B}\right)=\frac{\text{n}\left(\text{A}\cap \text{B}\right)}{\text{n}\left(\text{B}\right)}\end{array}$

The above equations can be rewritten as-

$\begin{array}{l}\text{P}\left(\text{A}\cap \text{B}\right)=\frac{n\left(\text{B}\right)}{n\left(\text{S}\right)}\times \frac{n\left(\text{A}\cap \text{B}\right)}{n\left(\text{B}\right)}=\text{P}\left(\text{B}\right).\text{P}\left(\text{A|B}\right)\\ \text{Also, we have}\\ \text{P}\left(\text{A}\cap \text{B}\right)=\frac{n\left(\text{A}\right)}{n\left(\text{S}\right)}\times \frac{n\left(\text{A}\cap \text{B}\right)}{n\left(\text{A}\right)}=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right)\end{array}$

This proves the multiplication theorem of probability.
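The multiplication theorem can be verified by direct counting on a small finite sample space. The two-dice events below are our own illustration, not from the text.

```python
# A sketch verifying both forms of the multiplication theorem by direct
# counting on a finite sample space (two fair dice; events are assumed
# purely for illustration).
from fractions import Fraction
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls

A = {r for r in sample_space if r[0] + r[1] == 7}  # "the two dice sum to 7"
B = {r for r in sample_space if r[0] == 3}         # "the first die shows 3"

n = len(sample_space)
p_a = Fraction(len(A), n)                          # 6/36
p_b = Fraction(len(B), n)                          # 6/36
p_a_and_b = Fraction(len(A & B), n)                # only (3, 4) qualifies: 1/36

p_a_given_b = Fraction(len(A & B), len(B))         # P(A|B) = 1/6
p_b_given_a = Fraction(len(A & B), len(A))         # P(B|A) = 1/6

# P(A ∩ B) = P(B)·P(A|B) = P(A)·P(B|A)
assert p_a_and_b == p_b * p_a_given_b == p_a * p_b_given_a
print(p_a_and_b)  # 1/36
```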

As for independent events: when the conditional probability of A given B equals the probability of A, event A is said to be independent of B.

Similarly, when the conditional probability of B given A equals the probability of B, event B is said to be independent of A.

### Statement

If two events A and B are such chosen that both P(A) and P(B) are not equal to zero, and if it is given that event A is independent with respect to event B, then event B is also independent with respect to the event A.

**Proof**

Given, event A is independent with respect to event B.

Hence, it can be written-

$\begin{array}{l}\text{P}\left(\text{A|B}\right)=\text{P}\left(\text{A}\right),\text{ and}\\ \frac{\text{P}\left(\text{A}\cap \text{B}\right)}{\text{P}\left(\text{B}\right)}=\text{P}\left(\text{A}\right)\\ \therefore \text{P}\left(\text{A}\cap \text{B}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B}\right)\end{array}$

Therefore,

$\begin{array}{l}\frac{\text{P}\left(\text{B}\cap \text{A}\right)}{\text{P}\left(\text{A}\right)}=\text{P}\left(\text{B}\right),\text{ and}\\ \text{P}\left(\text{B|A}\right)=\text{P}\left(\text{B}\right)\end{array}$

Hence, event B can be said to be independent with respect to event A.
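This symmetry can be checked on concrete numbers. The single-die events below are assumed for illustration.

```python
# A quick numerical check of the symmetry of independence (one fair die;
# the events A and B are an assumed illustration).
# If P(A|B) = P(A), then P(B|A) = P(B) as well.
from fractions import Fraction

outcomes = set(range(1, 7))
A = {o for o in outcomes if o % 2 == 0}  # "the roll is even": {2, 4, 6}
B = {o for o in outcomes if o <= 4}      # "the roll is at most 4": {1, 2, 3, 4}

p_a = Fraction(len(A), len(outcomes))        # 1/2
p_b = Fraction(len(B), len(outcomes))        # 2/3
p_a_given_b = Fraction(len(A & B), len(B))   # {2, 4} out of 4 outcomes: 1/2
p_b_given_a = Fraction(len(A & B), len(A))   # {2, 4} out of 3 outcomes: 2/3

assert p_a_given_b == p_a   # A is independent of B ...
assert p_b_given_a == p_b   # ... and therefore B is independent of A
```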

## What are the Properties of Conditional Probability?

- If two events A and B exist such that they are independent of each other, then-$\text{P}\left(\text{B|A}\right)=\text{P}\left(\text{B}\right)$

- If there are independent events E1, E2, E3, …, En, then-$\text{P}\left({\text{E}}_{1}\cup {\text{E}}_{2}\cup {\text{E}}_{3}\cup \mathrm{...}\cup {\text{E}}_{n}\right)=1-\text{P}\left(\overline{{\text{E}}_{1}}\right).\text{P}\left(\overline{{\text{E}}_{2}}\right).\text{P}\left(\overline{{\text{E}}_{3}}\right)\mathrm{...}\text{P}\left(\overline{{\text{E}}_{n}}\right)$

- If A and B are two events, then-
$\begin{array}{l}\text{When B}\ne \varphi \text{, we have}\\ \text{P}\left(\text{A|B}\right)+\text{P}\left(\overline{\text{A}}|\text{B}\right)=1\end{array}$

- Also, if A and B are two events, then-
$\begin{array}{l}\text{When A}\ne \varphi \text{, we have}\\ \text{P}\left(\text{B}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right)+\text{P}\left(\overline{\text{A}}\right).\text{P}\left(\text{B|}\overline{\text{A}}\right)\end{array}$

- If there are three events A, B, and C, then-$\begin{array}{l}\text{When A}\ne \varphi \text{, A}\cap \text{B}\ne \varphi \text{, we have}\\ \text{P}\left(\text{A}\cap \text{B}\cap \text{C}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right).\text{P}\left(\text{C|A}\cap \text{B}\right)\end{array}$
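The total-probability decomposition of P(B) can be sketched numerically. The two-machine scenario and all rates below are assumptions made for illustration, not from the text.

```python
# A sketch of the total-probability property
# P(B) = P(A)·P(B|A) + P(Ā)·P(B|Ā).
# The two-machine factory scenario and its rates are assumed.
p_a = 0.6               # P(A): item produced by machine 1
p_not_a = 1 - p_a       # P(Ā): item produced by machine 2
p_b_given_a = 0.02      # P(B|A): defect rate of machine 1
p_b_given_not_a = 0.05  # P(B|Ā): defect rate of machine 2

# Overall defect probability, averaging over which machine made the item:
p_b = p_a * p_b_given_a + p_not_a * p_b_given_not_a
print(round(p_b, 3))  # 0.032
```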

## What is Bayes’ Theorem?

Conditional probability is about revising a probability when new information is obtained, which is an extremely important step in probability analysis. Bayes' theorem, which formalizes this revision, was given by the British mathematician Thomas Bayes and published in 1763.

Bayes' theorem describes the probability of an event based on prior knowledge of conditions that may be associated with the event. When the conditional probability in one direction is known, Bayes' rule is used to find the reverse conditional probability.

Consider a data sample $A = \{a_1, a_2, a_3, \ldots, a_n\}$ consisting of the values of a set of n attributes. Here, A is treated as the evidence. Now consider some hypothesis H, namely that the data A belongs to a specific class C.

Take the posterior probability that H holds given the evidence A as P(H|A); the prior probability of hypothesis H is given by P(H). Similarly, P(A|H) denotes the probability of observing A given that H holds (the likelihood), and P(A) denotes the prior probability of A.

As per the statement of Bayes' theorem, the posterior probability P(H|A) is expressed in terms of the other probabilities P(H), P(A|H), and P(A). It can be expressed as-

$\text{P}\left(\text{H|A}\right)=\frac{\text{P}\left(\text{A|H}\right)\text{P}\left(\text{H}\right)}{\text{P}\left(\text{A}\right)}$

All of the terms in this expression were defined above. This is what Bayes' theorem states, and it is extremely useful for turning known conditional probabilities into their reverse conditional probabilities.
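As a rough sketch, Bayes' rule can be applied with illustrative numbers. The diagnostic-test scenario and all rates below are assumptions, not from the text.

```python
# A sketch of Bayes' rule P(H|A) = P(A|H)·P(H) / P(A).
# The diagnostic-test scenario and all numbers are assumed for illustration.

def bayes(p_a_given_h, p_h, p_a):
    """Posterior P(H|A) from likelihood P(A|H), prior P(H), and evidence P(A)."""
    return p_a_given_h * p_h / p_a

p_h = 0.01              # prior: 1% of people have the condition (H)
p_a_given_h = 0.95      # likelihood: test is positive (A) given H
p_a_given_not_h = 0.10  # false-positive rate: positive given not-H

# Evidence P(A) via the total-probability property:
p_a = p_a_given_h * p_h + p_a_given_not_h * (1 - p_h)

posterior = bayes(p_a_given_h, p_h, p_a)
print(round(posterior, 4))  # 0.0876: even after a positive test, H stays unlikely
```

The low posterior despite a sensitive test illustrates why the prior P(H) matters: the reverse probability P(H|A) can be far smaller than the forward probability P(A|H).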

## Practice Problem

Consider a random experiment: draw a card at random from a full pack of 52 cards. Let event A be "the card drawn is a queen"; the probability that A happens is 1/13. Now, if it is known that a black card was drawn, find how this information influences the probability of event A.

Take event B as “The card drawn is black”. If B happens, no red card has been drawn. In a full pack, there are 26 black cards and 26 red cards.

Hence, the probability that event A occurs must be computed relative to the reduced sample space B. There are only 26 black cards, so n(B) = 26, which implies $P\left(B\right)=\frac{26}{52}$. Among these 26 black cards there are two queens, so $\text{P}\left(\text{A}\cap \text{B}\right)=\frac{2}{52}$.

Therefore, the required conditional probability is-

$\begin{array}{c}\text{P}\left(\text{A|B}\right)=\frac{\text{P}\left(\text{A}\cap \text{B}\right)}{\text{P}\left(\text{B}\right)}\\ =\frac{\frac{2}{52}}{\frac{26}{52}}\\ =\frac{2}{26}\\ =\frac{1}{13}\\ \approx 0.077\end{array}$

Thus, the required probability is 1/13, or approximately 0.077. Notice that this equals the unconditional probability of A, so knowing that the card is black does not change the chance of drawing a queen: events A and B are independent.
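The same answer can be checked by enumerating the deck directly; a minimal sketch:

```python
# A sketch checking the worked example by enumerating a standard 52-card deck.
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "clubs", "hearts", "diamonds"]  # spades and clubs are black
deck = list(product(ranks, suits))                 # 52 cards

A = {c for c in deck if c[0] == "Q"}                    # queens (4 cards)
B = {c for c in deck if c[1] in ("spades", "clubs")}    # black cards (26)

p_a_given_b = Fraction(len(A & B), len(B))  # 2 black queens among 26 black cards
print(p_a_given_b)  # 1/13
```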

## Formulas

$\text{P}\left(\text{A|B}\right)=\frac{\text{P}\left(\text{A}\cap \text{B}\right)}{\text{P}\left(\text{B}\right)}$.

If there are independent events E1, E2, E3, …, En, then-$\text{P}\left({\text{E}}_{1}\cup {\text{E}}_{2}\cup {\text{E}}_{3}\cup \mathrm{...}\cup {\text{E}}_{n}\right)=1-\text{P}\left(\overline{{\text{E}}_{1}}\right).\text{P}\left(\overline{{\text{E}}_{2}}\right).\text{P}\left(\overline{{\text{E}}_{3}}\right)\mathrm{...}\text{P}\left(\overline{{\text{E}}_{n}}\right)$

If A and B are two events, then-$\begin{array}{l}\text{When B}\ne \varphi \text{, we have}\\ \text{P}\left(\text{A|B}\right)+\text{P}\left(\overline{\text{A}}|\text{B}\right)=1\end{array}$

If A and B are two events, then-$\begin{array}{l}\text{When A}\ne \varphi \text{, we have}\\ \text{P}\left(\text{B}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right)+\text{P}\left(\overline{\text{A}}\right).\text{P}\left(\text{B|}\overline{\text{A}}\right)\end{array}$

If there are three events A, B, and C, then-$\begin{array}{l}\text{When A}\ne \varphi \text{, A}\cap \text{B}\ne \varphi \text{, we have}\\ \text{P}\left(\text{A}\cap \text{B}\cap \text{C}\right)=\text{P}\left(\text{A}\right).\text{P}\left(\text{B|A}\right).\text{P}\left(\text{C|A}\cap \text{B}\right)\end{array}$

The reverse conditional probability is evaluated as-$\text{P}\left(\text{H|A}\right)=\frac{\text{P}\left(\text{A|H}\right)\text{P}\left(\text{H}\right)}{\text{P}\left(\text{A}\right)}$

## Context and Application

This topic is significant in the professional exams for both undergraduate and graduate courses, especially for

- B.Sc Mathematics
- M.Sc Mathematics
