Justin MacDonald

The field of inferential statistics is based upon probability: how likely is it that your observed sample came from a particular population? Before we can answer this question we need a basic understanding of probability concepts.

#### Subjective probability

Subjective probability is an individual’s personal belief in the likelihood of an event. This type of probability has its own branch of inferential statistics: Bayesian inference. This branch of statistics attempts to calculate the probability of a hypothesis given the data collected in an experiment. Researchers use prior knowledge and expertise to come to conclusions about their data. This approach has a strong following in business management and economics, and is becoming increasingly popular in the social sciences.

Subjective probability can change over time as the researcher gathers data relevant to their beliefs about event likelihoods. For example, imagine that someone is flipping a coin with two sides (heads and tails) and is trying to guess the likelihood of a heads outcome. Imagine that the actual probability of a heads outcome is 0.85 (an 85% chance). On the first coin flip a heads outcome is observed. What is the person's estimate of the probability of a heads outcome likely to be? A tails outcome is then observed on the second coin flip. How will the subjective probability change? After many trials, what is the subjective probability likely to converge to? The answer: the *objective probability* of a heads outcome, 0.85.
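A few lines of Python can illustrate this convergence (a sketch of my own, not from the original text): the observed proportion of heads from a coin whose true heads probability is 0.85 settles near 0.85 as flips accumulate.

```python
import random

random.seed(1)  # for a reproducible run

TRUE_P_HEADS = 0.85  # the objective probability from the example

def observed_proportion(n_flips):
    """Flip the biased coin n_flips times and return the proportion of
    heads -- a simple stand-in for the observer's updated estimate."""
    heads = sum(random.random() < TRUE_P_HEADS for _ in range(n_flips))
    return heads / n_flips

# Early estimates can be far off; after many flips the estimate
# converges toward the objective probability of 0.85.
print(observed_proportion(10))
print(observed_proportion(100_000))
```

With only 10 flips the estimate may be anywhere from roughly 0.6 to 1.0; with 100,000 flips it lands very close to 0.85.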

#### Objective probability

Objective probability is the probability of an event as calculated by an all-knowing external observer: the objective likelihood of an event. In the coin-flipping example, the objective probability of a heads event is 0.85. Two successive coin flips are said to be independent events because the occurrence of one has no effect on the probability of the other. In other words, if Flip #1 results in a heads outcome, this has no effect on the probability of a heads outcome on the next flip. If you believe otherwise, you should read up on the gambler’s fallacy.

Another objective probability example: in an undergraduate stats class I decide to assign final grades in the following manner: out of 75 students I will assign 20 A’s, 20 B’s, 20 C’s, 10 D’s, & 5 F’s. If I assign these grades to students at random, what is the probability that student #63 gets a B? The answer: p(B) = 20/75 ≈ 0.27, because 20 of the 75 equally likely grade assignments are B’s.

The events A, B, C, D, and F are said to be mutually exclusive because a student can’t be associated with more than one of the events. In other words, if a student gets an A, then they didn’t get a B, C, D, or F. This set of events is also said to be exhaustive because one of the events must happen: each student must receive a grade that is either an A, B, C, D, or F.
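A short Python check (my own illustration, not from the text) makes the mutually-exclusive-and-exhaustive point concrete: each student gets exactly one grade, so p(B) is just the count of B’s over the total, and the five probabilities sum to 1.

```python
from fractions import Fraction

# Grade counts from the example: 75 students in total.
grades = {"A": 20, "B": 20, "C": 20, "D": 10, "F": 5}
total = sum(grades.values())  # 75

# Probability that a randomly chosen student (e.g. student #63)
# receives each grade.
probs = {g: Fraction(n, total) for g, n in grades.items()}

print(probs["B"])           # 4/15, i.e. 20/75
print(sum(probs.values()))  # mutually exclusive + exhaustive -> 1
```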

**Properties of probabilities**

- Probabilities range from 0 (the event cannot occur) to 1 (the event must occur).
- Given a set of mutually exclusive events, the sum of all of their probabilities is equal to 1.

The additive rule: Given events A and B: p(A or B) = p(A) + p(B) – p(A and B). If A and B are mutually exclusive, what does this formula reduce to? Answer: If two events A and B are mutually exclusive, then they can’t both occur at the same time. In other words, p(A and B) = 0, so p(A or B) = p(A) + p(B) when events A and B are mutually exclusive.

Examples:

- What is the probability of getting a D or an F? Answer: p(D or F) = p(D) + p(F) – p(D and F) = 10/75 + 5/75 – 0 = 15/75 = 0.2.
- Using a standard six-sided die, what is the probability of rolling an even number or rolling a number less than 5? Answer: Let R = rolling an even number, let S = rolling a number less than 5. Then p(R) = 3/6 = 1/2, p(S) = 4/6 = 2/3, and p(R and S) = p(rolling a 2 or a 4) = 2/6 = 1/3.

Therefore, p(R or S) = p(R) + p(S) – p(R and S) = 1/2 + 2/3 – 1/3 = 5/6.
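Because a die has only six equally likely outcomes, the additive-rule arithmetic can be double-checked by brute-force enumeration. A sketch in Python (the `prob` helper is my own, not from the text):

```python
from fractions import Fraction

outcomes = range(1, 7)  # one fair six-sided die

def prob(event):
    """Probability of an event (a predicate on outcomes) by counting."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_R = prob(lambda o: o % 2 == 0)                   # even number
p_S = prob(lambda o: o < 5)                        # less than 5
p_R_and_S = prob(lambda o: o % 2 == 0 and o < 5)   # both

# Additive rule: p(R or S) = p(R) + p(S) - p(R and S)
print(p_R + p_S - p_R_and_S)                 # 5/6
print(prob(lambda o: o % 2 == 0 or o < 5))   # 5/6, by direct count
```

Both routes agree: the rule and a direct count of favorable outcomes give the same 5/6.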

**Conditional probability**

Often two events are dependent, meaning that knowing about the occurrence of one event gives you more information about the other occurring. For example, consider a standard six-sided die. Let event A = rolling a 2, event B = rolling an even number. These events are not independent, because if event A happens then event B definitely happens. This type of relationship can be stated as a *conditional probability*: given that you rolled an even number, what is the probability that you rolled a 2? Intuitively, we know that this conditional probability is 1/3. More formally, a conditional probability is defined as follows: p(A | B) = p(A and B) / p(B).

Plugging numbers into this formula, p(A and B) = probability of rolling a 2 and rolling an even number = 1/6, and p(B) = probability of rolling an even number = 1/2, so p(A | B) = (1/6) / (1/2) = 1/3, matching our intuition.
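The same enumeration idea verifies this conditional probability. A hypothetical sketch:

```python
from fractions import Fraction

outcomes = range(1, 7)  # one fair six-sided die

def prob(event):
    """Probability of an event (a predicate on outcomes) by counting."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_A_and_B = prob(lambda o: o == 2 and o % 2 == 0)  # rolling a 2 (hence even)
p_B = prob(lambda o: o % 2 == 0)                   # rolling an even number

# Definition of conditional probability: p(A | B) = p(A and B) / p(B)
print(p_A_and_B / p_B)  # 1/3
```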

Bayesian inferential statistics uses conditional probabilities, as does classical Null Hypothesis Significance Testing (NHST). Bayesian inference calculates p(hypothesis | data), and NHST calculates p(data | hypothesis). Since these two approaches are by far the most common inferential statistical techniques, a basic grasp of conditional probability is necessary for a true understanding of common inferential statistical logic.

We use conditional probabilities to state the multiplicative rule: Given two events A and B, the probability of both occurring (the *joint probability*) is p(A and B) = p(A) × p(B | A).

The multiplicative rule follows directly from the definition of conditional probability given above. Going back to the events A and B from the previous example, what is the probability of rolling a 2 and rolling an even number? Answer: p(A and B) = p(A) × p(B | A) = 1/6 × 1 = 1/6. (Given that you rolled a 2, the probability that you rolled an even number is 1.)

If A and B are independent, what does the multiplicative formula reduce to? For independent events, p(B | A) = p(B). In other words, the fact that A occurred does not change the likelihood of B occurring. Therefore, when A and B are independent, p(A and B) = p(A) × p(B).
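To see the reduced rule in action, consider two independent dice rolls (my own example, not from the text): whether the first die is even says nothing about the second, so the joint probability is just the product.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a predicate on outcome pairs) by counting."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0  # first die even
B = lambda o: o[1] % 2 == 0  # second die even

# Independent events: p(A and B) = p(A) * p(B)
print(prob(lambda o: A(o) and B(o)))  # 1/4, by direct count
print(prob(A) * prob(B))              # 1/4, by the reduced rule
```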

#### More Worked Examples

- What is the probability of rolling a 2 or rolling an even number? Using the additive rule: p(A or B) = p(A) + p(B) – p(A and B) = 1/6 + 1/2 – 1/6 = 1/2.

- Events A and B are independent. p(A) = 0.6 and p(B) = 0.8. What is the probability that: a) both will occur? b) neither will occur? c) one or the other or both will occur?

a. Since A and B are independent, p(A and B) = p(A) × p(B) = 0.6 × 0.8 = 0.48.

b. We are trying to calculate p(not A and not B) for this exercise.

Event A and event Not A (~A) are mutually exclusive because they both can’t happen at the same time. They are also exhaustive, because one of the two events must always occur (either A happens or it doesn’t, there is no other outcome). According to one of the properties of probabilities listed above, therefore, p(A) + p(~A) = 1. From this it follows that p(A) = 1 – p(~A) and p(~A) = 1 – p(A). So in our example, if p(A) = 0.6, then p(~A) = 0.4. Similarly, p(~B) = 1 – 0.8 = 0.2. We can then use the multiplicative rule to get the answer: p(~A and ~B) = p(~A) × p(~B) = 0.4 × 0.2 = 0.08.

We can use the shorter version of the multiplicative rule because A and B are independent.

c. For this one we need to calculate p(A or B). Using the additive rule, p(A or B) = p(A) + p(B) – p(A and B) = 0.6 + 0.8 – 0.48 = 0.92.
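All three answers can be checked with a few lines of Python, using exact fractions to avoid floating-point noise (a sketch using the values from the exercise):

```python
from fractions import Fraction

p_A = Fraction(6, 10)  # p(A) = 0.6
p_B = Fraction(8, 10)  # p(B) = 0.8

p_both = p_A * p_B                     # a) multiplicative rule (independent)
p_neither = (1 - p_A) * (1 - p_B)      # b) p(~A) * p(~B)
p_either = p_A + p_B - p_both          # c) additive rule

print(float(p_both))     # 0.48
print(float(p_neither))  # 0.08
print(float(p_either))   # 0.92
```

Note that the answers to b) and c) are complements: 0.08 + 0.92 = 1, since "neither occurs" and "at least one occurs" are mutually exclusive and exhaustive.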