Chapter 3 – Linear Combinations of Random Variables

Overview

  • Chapter focus: Linear combinations of random variables and their impact on expectation (mean), variance and probability calculations.
  • Why it matters:
    • Real-world metrics often aggregate several independent random quantities (e.g. monthly stock portfolio profit, triathlon time, weight of jars of honey).
    • Understanding linear combinations lets us derive overall distributions and associated probabilities instead of analysing each component separately.

Expectation & Variance for Translating/Scaling One Random Variable

  • Translation by constant b
    • E\,(X+b)=E\,(X)+b
    • \operatorname{Var}\,(X+b)=\operatorname{Var}\,(X)
  • Scaling by constant a
    • E\,(aX)=a\,E\,(X)
    • \operatorname{Var}\,(aX)=a^{2}\,\operatorname{Var}\,(X)
  • Combined rule (Key Point 3.3):
    E\,(aX+b)=a\,E\,(X)+b
    \operatorname{Var}\,(aX+b)=a^{2}\,\operatorname{Var}\,(X)
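These rules can be verified numerically for any discrete distribution. The sketch below uses a small pmf invented purely for illustration (not from the chapter):

```python
# Check E(aX+b) = aE(X) + b and Var(aX+b) = a^2 Var(X)
# for a small discrete pmf chosen for illustration.

def moments(pmf):
    """Return (mean, variance) of a discrete pmf given as {value: probability}."""
    mean = sum(x * p for x, p in pmf.items())
    var = sum((x - mean) ** 2 * p for x, p in pmf.items())
    return mean, var

pmf = {0: 0.2, 1: 0.5, 2: 0.3}                       # E(X) = 1.1, Var(X) = 0.49
a, b = 5, -2
transformed = {a * x + b: p for x, p in pmf.items()}  # pmf of aX + b

mx, vx = moments(pmf)
mt, vt = moments(transformed)
print(mt, a * mx + b)    # both equal 3.5 (up to float rounding)
print(vt, a ** 2 * vx)   # both equal 12.25
```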

Illustrative Dice Examples (Section 3.1)

  • Xing’s die values: {1,1,2,2,2,4}
    • E\,(X)=2; \operatorname{Var}\,(X)=1
  • Yaffa’s die values: {4,4,5,5,5,7}=X+3
    • E\,(Y)=E\,(X)+3=5
    • \operatorname{Var}\,(Y)=\operatorname{Var}\,(X)=1 (shift does not alter variance)
  • Quenby’s die values: {2,2,4,4,4,8}=2X
    • E\,(Q)=2E\,(X)=4
    • \operatorname{Var}\,(Q)=4\operatorname{Var}\,(X)=4 (variance quadruples because a^{2}=4)
  • Exploratory Tasks: add/subtract constants to Mo’s die {0,0,1,1,1,3} or create a rule like 3X+1 to confirm general results.
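The dice results above can be reproduced exactly with fractions (Mo's die {0,0,1,1,1,3} is X−1, included as one of the exploratory variants):

```python
from fractions import Fraction

def die_stats(faces):
    """Exact mean and variance of one roll of a fair die with the given faces."""
    n = len(faces)
    mean = Fraction(sum(faces), n)
    var = sum((Fraction(f) - mean) ** 2 for f in faces) / n
    return mean, var

xing   = [1, 1, 2, 2, 2, 4]
yaffa  = [f + 3 for f in xing]   # X + 3 -> {4,4,5,5,5,7}
quenby = [2 * f for f in xing]   # 2X    -> {2,2,4,4,4,8}
mo     = [f - 1 for f in xing]   # X - 1 -> {0,0,1,1,1,3}

for name, faces in [("X", xing), ("Y", yaffa), ("Q", quenby), ("Mo", mo)]:
    m, v = die_stats(faces)
    print(name, "E =", m, "Var =", v)
# shifts leave the variance at 1; doubling quadruples it to 4
```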

Worked Example 3.1 (Table → 2X+3)

  • Given a discrete probability table for X, first compute E(X)=4 and \operatorname{Var}(X)=3.8.
    • Then: E(2X+3)=2\times4+3=11
    • \operatorname{Var}(2X+3)=4\times3.8=15.2
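The book's table is not reproduced here, so the pmf below is a hypothetical one constructed to have the same E(X)=4 and \operatorname{Var}(X)=3.8; any table with those moments yields the same 2X+3 answers:

```python
# Hypothetical pmf with E(X) = 4 and Var(X) = 3.8 (stand-in for the
# original table, which is not reproduced in these notes).
pmf = {1: 0.2, 3: 0.1, 4: 0.4, 5: 0.1, 7: 0.2}

mean = sum(x * p for x, p in pmf.items())
var = sum(x ** 2 * p for x, p in pmf.items()) - mean ** 2

print(mean, var)              # 4.0 and 3.8, up to float rounding
print(2 * mean + 3, 4 * var)  # E(2X+3) = 11, Var(2X+3) = 15.2
```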

Binomial Link (Worked Example 3.2)

  • For X\sim B(3,\tfrac12) : E(X)=1.5, \operatorname{Var}(X)=0.75.
  • Same rules apply: E(2X+1)=2(1.5)+1=4, \operatorname{Var}(2X+1)=4(0.75)=3.
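A quick check of the binomial case, building the B(3, 1/2) pmf directly from binomial coefficients:

```python
from math import comb

n, p = 3, 0.5
pmf = {k: comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)}

mean = sum(k * q for k, q in pmf.items())                 # np = 1.5
var = sum(k ** 2 * q for k, q in pmf.items()) - mean ** 2  # np(1-p) = 0.75

print(mean, var)              # 1.5 0.75
print(2 * mean + 1, 4 * var)  # E(2X+1) = 4.0, Var(2X+1) = 3.0
```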

Linear Combinations of Two Independent RVs (Key Point 3.4)

  • For independent X,\,Y:
    E\,(X+Y)=E\,(X)+E\,(Y)
    \operatorname{Var}\,(X+Y)=\operatorname{Var}\,(X)+\operatorname{Var}\,(Y)
  • Difference: E\,(X-Y)=E\,(X)-E\,(Y) but variance still adds:
    \operatorname{Var}\,(X-Y)=\operatorname{Var}\,(X)+\operatorname{Var}\,(Y)
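Key Point 3.4 can be checked by full enumeration. The sketch below pairs Xing's die with an ordinary die (the ordinary die is an assumption for illustration; E = 7/2, Var = 35/12):

```python
from itertools import product
from fractions import Fraction

def stats(values):
    """Exact mean/variance of a list of equally likely outcomes."""
    n = len(values)
    mean = Fraction(sum(values), n)
    var = sum((Fraction(v) - mean) ** 2 for v in values) / n
    return mean, var

x_faces = [1, 1, 2, 2, 2, 4]   # Xing's die: E = 2, Var = 1
y_faces = [1, 2, 3, 4, 5, 6]   # ordinary die: E = 7/2, Var = 35/12

sums  = [x + y for x, y in product(x_faces, y_faces)]  # all 36 outcomes
diffs = [x - y for x, y in product(x_faces, y_faces)]

print(stats(sums))   # mean 2 + 7/2 = 11/2, variance 1 + 35/12 = 47/12
print(stats(diffs))  # mean 2 - 7/2 = -3/2, variance STILL 47/12
```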

Multiples Before Summation (Key Point 3.5)

  • For constants a,b and independent X,Y:
    E\,(aX+bY)=aE\,(X)+bE\,(Y)
    \operatorname{Var}\,(aX+bY)=a^{2}\operatorname{Var}\,(X)+b^{2}\operatorname{Var}\,(Y)
  • Extends to any finite sum of independent variables.

Dice Illustration (Green vs Blue tetrahedral dice)

  • Green G: E=\tfrac74, \operatorname{Var}=\tfrac{11}{16}
  • Blue B: E=\tfrac32, \operatorname{Var}=\tfrac14
  • Sum W=G+B → E(W)=\tfrac74+\tfrac32=\tfrac{13}{4}, \operatorname{Var}(W)=\tfrac{11}{16}+\tfrac14=\tfrac{15}{16} (matches the explicitly enumerated distribution).

Scaled Combination Example

  • If we redefine D=2G+3B then:
    • E(D)=2E(G)+3E(B)=2(\tfrac74)+3(\tfrac32)=\tfrac72+\tfrac92=8
    • \operatorname{Var}(D)=4\,\tfrac{11}{16}+9\,\tfrac14=\tfrac{11}{4}+\tfrac94=5 (again matches the enumerated table).
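Both dice results can be reproduced by enumeration. The face values below are assumptions: {1,1,2,3} and {1,1,2,2} are chosen to be consistent with \operatorname{Var}(G)=\tfrac{11}{16}, \operatorname{Var}(B)=\tfrac14 and the combined results E(D)=8, \operatorname{Var}(D)=5; the book's actual faces may differ.

```python
from itertools import product
from fractions import Fraction

def stats(values):
    """Exact mean/variance of a list of equally likely outcomes."""
    n = len(values)
    mean = Fraction(sum(values), n)
    var = sum((Fraction(v) - mean) ** 2 for v in values) / n
    return mean, var

green = [1, 1, 2, 3]  # assumed faces: E = 7/4, Var = 11/16
blue  = [1, 1, 2, 2]  # assumed faces: E = 3/2, Var = 1/4

w = [g + b for g, b in product(green, blue)]          # W = G + B
d = [2 * g + 3 * b for g, b in product(green, blue)]  # D = 2G + 3B

print(stats(w))  # (13/4, 15/16)
print(stats(d))  # (8, 5)
```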

Distinguishing 2X vs X_1+X_2

  • 2X: a single observation doubled (the same roll counted twice) → variance 4\operatorname{Var}(X).
  • X_1+X_2: sum of two independent observations → variance 2\operatorname{Var}(X).
  • Example with Xing’s die confirms this:
    E(2X)=4, \operatorname{Var}(2X)=4 vs
    E(X_1+X_2)=4, \operatorname{Var}(X_1+X_2)=2.
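Enumerating both cases for Xing's die makes the distinction concrete: doubling uses one roll, summing uses all 36 independent pairs.

```python
from itertools import product
from fractions import Fraction

def stats(values):
    """Exact mean/variance of a list of equally likely outcomes."""
    n = len(values)
    mean = Fraction(sum(values), n)
    var = sum((Fraction(v) - mean) ** 2 for v in values) / n
    return mean, var

faces = [1, 1, 2, 2, 2, 4]                             # Xing's die

doubled = [2 * f for f in faces]                       # 2X: one roll, doubled
two_rolls = [a + b for a, b in product(faces, faces)]  # X_1 + X_2: two rolls

print(stats(doubled))    # (4, 4): Var(2X) = 4 Var(X)
print(stats(two_rolls))  # (4, 2): Var(X_1 + X_2) = 2 Var(X)
```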

Normal Distributions (Key Point 3.6)

  • If X\sim N(\mu,\sigma^{2}) then any linear form aX+b\sim N(a\mu+b,\,a^{2}\sigma^{2}).
  • If independent X\sim N(\mu_1,\sigma_1^{2}) and Y\sim N(\mu_2,\sigma_2^{2}) then aX+bY\sim N(a\mu_1+b\mu_2,\,a^{2}\sigma_1^{2}+b^{2}\sigma_2^{2}).

Normal Examples

  1. Four Thrift batteries T\sim N(7,2.3^{2}) → Sum S has
    E(S)=28, \operatorname{Var}(S)=4(2.3^{2})=21.16, S\sim N(28,21.16).
    Probability P(S>30)=0.332.
  2. Large vs small rice bags: Y\sim N(6.6,0.4^{2}), X\sim N(2.1,0.2^{2}).
    Want P(Y>3X). Define Z=Y-3X\sim N(0.3,0.52) then P(Z>0)=0.661.
  3. Worktop thickness:
    • Top only: 37+1\Rightarrow N(38,0.09).
    • Top & bottom: 37+1+1\Rightarrow N(39,0.0902).
  4. Gift package: Total mass 3S+T\sim N(240,13^{2}), cheap-rate probability \approx0.779 for mass <250 g.
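Examples 1, 2 and 4 above can be checked with the standard normal cdf, written here via `math.erf` (a sketch of the standardisation, not the book's method):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# 1. Four batteries: S ~ N(28, 21.16); want P(S > 30)
p_batteries = 1 - phi((30 - 28) / sqrt(21.16))

# 2. Rice bags: Z = Y - 3X ~ N(0.3, 0.52); want P(Z > 0)
p_rice = 1 - phi((0 - 0.3) / sqrt(0.52))

# 4. Gift package: 3S + T ~ N(240, 13^2); want P(mass < 250)
p_gift = phi((250 - 240) / 13)

print(round(p_batteries, 3), round(p_rice, 3), round(p_gift, 3))
# 0.332 0.661 0.779
```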

Poisson Combinations (Key Point 3.7)

  • Independent X\sim \operatorname{Po}(\lambda), Y\sim \operatorname{Po}(\mu) ⇒ X+Y\sim \operatorname{Po}(\lambda+\mu).
  • Important: only sums of independent Poisson variables remain Poisson; differences and scalar multiples do not.

Rescue-centre Story

  • Lions L\sim\operatorname{Po}(5), Tigers T\sim\operatorname{Po}(3) → Total A=L+T\sim\operatorname{Po}(8).
  • Probability of rescuing exactly 2 animals: P(A=2)=e^{-8}\dfrac{8^{2}}{2!}=0.0107 (far quicker than enumerating each lion/tiger combination).
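A sketch checking both routes to the answer: the single Po(8) calculation, and the "slow" sum over every (lions, tigers) split of 2 animals.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(N = k) for N ~ Po(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

# Lions ~ Po(5), tigers ~ Po(3)  =>  total animals A ~ Po(8)
p_two_animals = poisson_pmf(2, 8.0)
print(round(p_two_animals, 4))  # 0.0107

# Same result the long way: sum over (lions, tigers) pairs totalling 2
p_direct = sum(poisson_pmf(j, 5.0) * poisson_pmf(2 - j, 3.0) for j in range(3))
print(round(p_direct, 4))       # 0.0107 again
```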

Text-message Example

  • Josh \lambda=3.2, Reuben \lambda=2.5 ⇒ T\sim\operatorname{Po}(5.7).
    P(T\ge5)=1-P(T\le4)=1-0.327=0.673.
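The cumulative probability is a short sum of Poisson pmf terms:

```python
from math import exp, factorial

lam = 3.2 + 2.5  # combined rate: T ~ Po(5.7)

# P(T <= 4) = sum of pmf terms for k = 0..4
p_at_most_4 = sum(exp(-lam) * lam ** k / factorial(k) for k in range(5))

print(round(p_at_most_4, 3))      # 0.327 = P(T <= 4)
print(round(1 - p_at_most_4, 3))  # 0.673 = P(T >= 5)
```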

Non-Poisson after Linear Ops

  • If T=2X-Y where X\sim\operatorname{Po}(2.4),\,Y\sim\operatorname{Po}(3.6):
    • E(T)=1.2, \operatorname{Var}(T)=13.2.
    • Since mean ≠ variance, T is not Poisson.
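The mean/variance calculation is one line each; the mismatch (and the fact that T can be negative, which no Poisson variable can) rules out a Poisson model:

```python
# T = 2X - Y with independent X ~ Po(2.4), Y ~ Po(3.6).
lam_x, lam_y = 2.4, 3.6

mean_t = 2 * lam_x - lam_y                  # 2(2.4) - 3.6 = 1.2
var_t = 2 ** 2 * lam_x + (-1) ** 2 * lam_y  # 4(2.4) + 3.6 = 13.2

# mean != variance (and T can be negative), so T is not Poisson
print(mean_t, var_t)
```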

Real-World Applications

  • Finance: Portfolio profit as the sum of independent share returns.
  • Manufacturing: Jar of honey weight = jar + honey + lid; tolerance analysis uses variance formulas.
  • Sports: Triathlon or relay times are sums of event times; probability of finishing under target uses normal combination.
  • Quality control: Worktop thickness, rice packages, and soaps rely on aggregated normal models for compliance thresholds.
  • Communication planning: Text-message Poisson combo informs network capacity.

Connections to Prior Learning

  • Relies on discrete expectation/variance (P&S 1 Chapters 6–7).
  • Uses Binomial, Poisson, Normal models from earlier chapters and coursebooks.
  • Standardisation z=\dfrac{x-\mu}{\sigma} remains foundational for probability lookup.

Formulae Checklist (end-of-chapter summary)

  • Single RV, constants a,b:
    • E(aX+b)=aE(X)+b
    • \operatorname{Var}(aX+b)=a^{2}\operatorname{Var}(X)
  • Independent X,Y:
    • E(aX+bY)=aE(X)+bE(Y)
    • \operatorname{Var}(aX+bY)=a^{2}\operatorname{Var}(X)+b^{2}\operatorname{Var}(Y)
  • Normal closure:
    • X\sim N(\mu,\sigma^{2})\Rightarrow aX+b\sim N(a\mu+b,a^{2}\sigma^{2})
    • X,Y independent normals ⇒ aX+bY also normal.
  • Poisson closure:
    • X\sim \operatorname{Po}(\lambda),\,Y\sim \operatorname{Po}(\mu) (independent) ⇒ X+Y\sim \operatorname{Po}(\lambda+\mu).

Worked-Example Shortcuts & Tips

  • When summing n identical independent RVs X:
    E\,(\text{sum})=nE(X), \operatorname{Var}(\text{sum})=n\operatorname{Var}(X).
  • For differences, variance always adds: \operatorname{Var}(X-Y)=\operatorname{Var}(X)+\operatorname{Var}(Y).
  • Use standardisation for normal probabilities:
    P(X>k)=1-\Phi\Big(\dfrac{k-\mu}{\sigma}\Big).
  • For Poisson ‘greater than’ probabilities, work with the complement: sum the pmf over the small values of k and subtract from 1.

Worked & Exercise References

  • Exercise sets 3A–3D reinforce computing expectations and variances, distinguishing 2X from X_1+X_2, normal combination probabilities, Poisson sums, and practical conversions (°C→°F).
  • End-of-chapter review questions apply concepts to temperatures, egg boxes, triathlon times, cycling hire cost, mining value, etc.