1. Foundational Concepts: Expected Value and Variance

These two concepts are the building blocks for understanding the potential returns and risks of an investment.

Expected Value

The Expected Value is the probability-weighted average of all possible outcomes for a random variable. Think of it as the long-term average outcome you would expect if you repeated an experiment many times.

Expected Value Example

Suppose a stock has a 30% chance of returning 15%, a 50% chance of returning 10%, and a 20% chance of returning -5%.

Calculation:

E(X) = (0.30 × 15%) + (0.50 × 10%) + (0.20 × -5%)
E(X) = 4.5% + 5.0% - 1.0% = 8.5%

The expected return of the stock is 8.5%.
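As a quick sanity check, the same probability-weighted average can be reproduced in a few lines of Python. The probabilities and returns below are taken from the example above; the function name expected_value is just an illustrative label, not part of any standard library.

```python
# Minimal sketch: probability-weighted expected value.
# Probabilities and returns (in %) come from the example above.
probs = [0.30, 0.50, 0.20]
returns = [15.0, 10.0, -5.0]

def expected_value(probabilities, outcomes):
    """Return the probability-weighted average of the outcomes."""
    return sum(p * x for p, x in zip(probabilities, outcomes))

print(expected_value(probs, returns))  # 8.5
```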

Variance and Standard Deviation

Variance measures the dispersion of outcomes around the expected value. It is the expected value of the squared deviations from the mean. In finance, variance is a key measure of risk.

Variance(X) = E{[X - E(X)]²}

Key Properties of Variance:

  • Variance is always greater than or equal to zero (≥ 0).
  • A variance of zero means there is no risk or dispersion; the outcome is certain.
  • A higher variance signifies greater dispersion and, therefore, higher risk.

The Standard Deviation is the positive square root of the variance. Its main advantage is that it is measured in the same units as the random variable (e.g., percentages for returns), making it much easier to interpret than variance.

Variance & Standard Deviation Example

Using the stock from the previous example with an E(X) of 8.5%:

  1. Calculate Squared Deviations (note: squaring leaves these in units of percent squared, %²):
    • (15% - 8.5%)² = (6.5%)² = 42.25
    • (10% - 8.5%)² = (1.5%)² = 2.25
    • (-5% - 8.5%)² = (-13.5%)² = 182.25
  2. Calculate Variance (Probability-Weighted Average of Squared Deviations):
    • Variance = (0.30 × 42.25) + (0.50 × 2.25) + (0.20 × 182.25)
    • Variance = 12.675 + 1.125 + 36.45 = 50.25 (Note: units are percent squared)
  3. Calculate Standard Deviation:
    • Standard Deviation = √50.25 ≈ 7.09%

The standard deviation of 7.09% gives us a measure of the stock's risk in percentage terms.
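Extending the earlier sketch, the variance and standard deviation can be computed the same way. The numbers again come from the worked example; the variable names are illustrative only.

```python
import math

# Probabilities and returns (in %) from the example above.
probs = [0.30, 0.50, 0.20]
returns = [15.0, 10.0, -5.0]

# Expected value (probability-weighted average), in %.
mean = sum(p * x for p, x in zip(probs, returns))  # 8.5

# Variance: probability-weighted average of squared deviations (units: %²).
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, returns))

# Standard deviation: positive square root, back in % units.
std_dev = math.sqrt(variance)

print(round(variance, 2))  # 50.25
print(round(std_dev, 2))   # 7.09
```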

2. Conditional Expectations and Probability Trees

Often, we need to analyze outcomes based on specific conditions or scenarios. This is where conditional probability comes in.

Conditional Expected Value and Variance

  • Conditional Expected Value, E(X|S): The expected value of a random variable X, given that a specific event or scenario S has occurred. The formula is a probability-weighted average using conditional probabilities.
    E(X|S) = P(X₁|S)X₁ + P(X₂|S)X₂ + ... + P(Xₙ|S)Xₙ
  • Conditional Variance, σ²(X|S): The variance of a random variable X, given that a specific event or scenario S has occurred.

The Total Probability Rule for Expected Value

This powerful rule allows us to calculate the overall (unconditional) expected value by taking a weighted average of all the conditional expected values. This is often visualized using a probability tree.

E(X) = E(X|S₁)P(S₁) + E(X|S₂)P(S₂) + ... + E(X|Sₙ)P(Sₙ)
Where S₁, S₂, ..., Sₙ are mutually exclusive and exhaustive scenarios.

In simple terms, the overall expected value is the sum of each scenario's conditional expected value multiplied by that scenario's probability, as in the sketch below.
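The sketch below shows the rule mechanically: each conditional expected value E(X|Sᵢ) is weighted by its scenario probability P(Sᵢ) and the products are summed. The scenario names and all numbers (a 60%/40% split between a "good economy" and a "bad economy") are invented purely for illustration; they do not come from the examples above.

```python
# Hypothetical scenarios: probabilities and conditional expected returns (in %)
# are made up to illustrate the total probability rule for expected value.
scenarios = {
    "good economy": {"prob": 0.60, "cond_expected_return": 12.0},
    "bad economy":  {"prob": 0.40, "cond_expected_return": 2.0},
}

# E(X) = E(X|S1)P(S1) + E(X|S2)P(S2) + ...
overall_expected_return = sum(
    s["prob"] * s["cond_expected_return"] for s in scenarios.values()
)

print(overall_expected_return)  # 0.6 * 12 + 0.4 * 2 = 8.0
```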

3. Updating Beliefs with New Information: Bayes' Formula

Bayes' formula (also known as inverse probability) is a formal method for updating the probability of an event based on new information. It allows us to systematically revise our initial beliefs (prior probabilities) to arrive at more informed beliefs (posterior probabilities).

P(Event|Information) = [P(Information|Event) / P(Information)] × P(Event)

Breaking Down the Formula:

  • P(Event): The Prior Probability. This is our initial belief about the probability of an event before we get any new information.
  • P(Information | Event): The conditional probability of observing the new information, assuming that our event is true.
  • P(Information): The unconditional probability of observing the new information.
  • P(Event | Information): The Posterior Probability. This is our updated, revised probability of the event after considering the new information.

Key Terminology:

  • The updated probability, P(Event | Information), is formally called the posterior probability.
  • If our initial beliefs (prior probabilities) are all equal, they are referred to as diffuse priors.

Bayes' Formula Example

Imagine a disease affects 1% of the population (P(Disease) = 0.01). A test for it is 95% accurate for those who have it (P(Positive Test | Disease) = 0.95) and has a 5% false positive rate for those who don't (P(Positive Test | No Disease) = 0.05).

Question: If you test positive, what is the probability you actually have the disease? We want to find P(Disease | Positive Test).

  1. Identify Priors: P(Disease) = 0.01. Therefore, P(No Disease) = 0.99.
  2. Calculate P(Positive Test) using the Total Probability Rule:
    P(Positive Test) = P(Positive|Disease)P(Disease) + P(Positive|No Disease)P(No Disease)
    P(Positive Test) = (0.95 × 0.01) + (0.05 × 0.99) = 0.0095 + 0.0495 = 0.059
  3. Apply Bayes' Formula:
    P(Disease|Positive Test) = [P(Positive Test|Disease) / P(Positive Test)] × P(Disease)
    P(Disease|Positive Test) = [0.95 / 0.059] × 0.01 ≈ 0.161

Even with a positive test, your updated probability (posterior) of having the disease is only about 16.1%.
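The same calculation can be reproduced with a short Python sketch. The inputs are the numbers from the example above; the function name bayes_posterior and its parameter names are just labels chosen for this illustration.

```python
def bayes_posterior(prior, likelihood, likelihood_if_not):
    """Posterior P(Event | Info) via Bayes' formula.

    prior             -- P(Event)
    likelihood        -- P(Info | Event)
    likelihood_if_not -- P(Info | no Event)
    """
    # Total probability rule: unconditional probability of the information.
    p_info = likelihood * prior + likelihood_if_not * (1 - prior)
    return (likelihood / p_info) * prior

# Numbers from the disease-testing example above.
posterior = bayes_posterior(prior=0.01, likelihood=0.95, likelihood_if_not=0.05)
print(round(posterior, 3))  # 0.161
```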
