Comprehensive Probability Study Notes
Basic Definitions
Random Experiment
- All possible outcomes known beforehand; specific outcome unpredictable until experiment completes.
- Example: Tossing a coin.
Sample Space (S)
- Set of all possible outcomes of a random experiment.
- Example: Tossing a die for the upper face number → S={1,2,3,4,5,6}.
Trial / Experiment
- Series of actions with uncertain outcomes.
- Examples: Tossing a coin, selecting a card, throwing a die.
Event
- Any subset of the sample space.
- We are often interested in events rather than individual outcomes.
Simple Event
- Event containing exactly one sample point.
Compound Event
- Contains more than one sample point; representable as a union of simple events.
- Example (sample space = suit of one drawn card, S=\{\text{heart},\text{diamond},\text{club},\text{spade}\}):
- Drawing a heart → A=\{\text{heart}\} (simple).
- Drawing a red card → B=\{\text{heart},\text{diamond}\} (compound).
Probability
For a random experiment with N equally likely outcomes, of which n favour event A:
P(A)=\frac{n}{N}=\frac{\text{favourable cases}}{\text{total cases}}
Remarks
- An estimated P(A)=1 does not guarantee certainty; it only indicates the highest likelihood given the available information and analysis.
- An estimated P(A)=0 does not prove impossibility; it merely indicates no support from past information.
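The classical formula P(A)=n/N can be checked by direct enumeration. A minimal sketch (the die-throw sample space and the "even number" event are illustrative choices, not from the notes):

```python
from fractions import Fraction

# Sample space for one throw of a fair die (all outcomes equally likely).
S = {1, 2, 3, 4, 5, 6}

# Event A: the upper face shows an even number.
A = {s for s in S if s % 2 == 0}

# P(A) = favourable cases / total cases
P_A = Fraction(len(A), len(S))
print(P_A)  # 1/2
```

Using `Fraction` keeps the answer exact, matching the hand-computed ratios used throughout these notes.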
Types of Events
Mutually Exclusive Events
- Cannot occur simultaneously.
Independent Events
- Occurrence / non-occurrence of one does not affect the other.
Exhaustive Events
- Together ensure that at least one occurs in every trial.
Conditional Probability
Probability of B given A (event A has occurred):
P(B|A)=\dfrac{P(A\cap B)}{P(A)},\qquad P(A)\neq0.
If A and B are independent → P(B|A)=P(B).
General multiplication form:
P(A\cap B)=P(A)\,P(B|A)
Properties (for events E,F\subseteq S with P(F)\neq0)
- P(S|F)=P(F|F)=1.
- If A,B disjoint → P((A\cup B)|F)=P(A|F)+P(B|F).
Multiplication Theorem on Probability
From P(E|F)=\dfrac{P(E\cap F)}{P(F)}, we get
P(E\cap F)=P(F)\,P(E|F)=P(E)\,P(F|E) (Multiplication Rule).
Example (Urn problem)
- Urn: 10 black + 5 white; draw 2 without replacement.
- E: first ball black; F: second ball black.
- P(E)=\tfrac{10}{15}.
- After first black removed → 9 black, 5 white left: P(F|E)=\tfrac{9}{14}.
- P(E\cap F)=P(E)P(F|E)=\tfrac{10}{15}\times\tfrac{9}{14}=\tfrac{3}{7}.
Extension to three events E,F,G:
P(E\cap F\cap G)=P(E)\,P(F|E)\,P(G|E\cap F).
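The urn computation above is short enough to verify exactly. A sketch using the same numbers (10 black, 5 white, two draws without replacement):

```python
from fractions import Fraction

# Urn: 10 black + 5 white; draw 2 balls without replacement.
black, white = 10, 5
total = black + white

P_E = Fraction(black, total)                  # first ball black: 10/15
P_F_given_E = Fraction(black - 1, total - 1)  # second black given first black: 9/14

# Multiplication rule: P(E ∩ F) = P(E) P(F|E)
P_EF = P_E * P_F_given_E
print(P_EF)  # 3/7
```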
Independence vs. Mutual Exclusivity
Independent Events (definition via probability)
P(E\cap F)=P(E)P(F).
Mutually Exclusive Events (definition via sets)
E\cap F=\varnothing.
Key observations
- Independent events with non-zero probabilities cannot be mutually exclusive.
- Mutually exclusive events with non-zero probabilities cannot be independent.
- Two independent events may share common outcomes.
- Two experiments are independent if every pair of events (one from each) satisfies P(E\cap F)=P(E)P(F).
Mutual independence for three events A,B,C requires:
P(A\cap B)=P(A)P(B)
P(A\cap C)=P(A)P(C)
P(B\cap C)=P(B)P(C)
P(A\cap B\cap C)=P(A)P(B)P(C)
Example (die throw)
- S={1,2,3,4,5,6}.
- E={3,6} (multiples of 3) → P(E)=\tfrac{2}{6}=\tfrac{1}{3}.
- F={2,4,6} (even) → P(F)=\tfrac{3}{6}=\tfrac{1}{2}.
- E\cap F={6} → P(E\cap F)=\tfrac{1}{6}.
- Since \tfrac{1}{6}=\tfrac{1}{3}\times\tfrac{1}{2}, E & F are independent.
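The die-throw independence check is mechanical: compare P(E ∩ F) with P(E)P(F). A sketch using the same events E and F:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
E = {3, 6}     # multiples of 3
F = {2, 4, 6}  # even numbers

def P(event):
    """Classical probability on the equally likely sample space S."""
    return Fraction(len(event), len(S))

lhs = P(E & F)     # P(E ∩ F) = 1/6
rhs = P(E) * P(F)  # (1/3)(1/2) = 1/6
print(lhs == rhs)  # True → E and F are independent
```

Note that E ∩ F = {6} is non-empty: these independent events share a common outcome, as the observations above state.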
Bayes' Theorem
Also called Inverse Probability Theorem (published 1763, posthumously, by Rev. Thomas Bayes).
Scenario illustration
- Bag I: 2 white, 3 red.
- Bag II: 4 white, 5 red.
- One bag chosen with P=\tfrac{1}{2} each; one ball drawn.
- Tasks: forward probabilities (ball colour given bag) & reverse probabilities (bag chosen given colour).
- Reverse probability solved using Bayes' theorem.
Partition of Sample Space
- Events E_1,E_2,\dots,E_n form a partition of S if:
- E_i\cap E_j=\varnothing\ (i\neq j)
- E_1\cup E_2\cup\dots\cup E_n=S
- P(E_i)>0 for all i.
Theorem of Total Probability
- For a partition \{E_i\}_{i=1}^n of S and any event A:
P(A)=\sum_{i=1}^n P(E_i)\,P(A|E_i).
Bayes’ Formula
- For the same partition and non-zero P(A):
P(E_k|A)=\frac{P(E_k)\,P(A|E_k)}{\sum_{i=1}^n P(E_i)\,P(A|E_i)}.
Terminology
- E_i: hypotheses / causes.
- P(E_i): prior (a priori) probability.
- P(E_i|A): posterior (a posteriori) probability.
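The two-bag scenario above can be worked end to end: total probability gives P(white), and Bayes' formula then inverts it to the posterior probability of each bag. A sketch using the stated contents (Bag I: 2 white, 3 red; Bag II: 4 white, 5 red; each bag chosen with probability 1/2):

```python
from fractions import Fraction

# Priors P(E_i): each bag chosen with probability 1/2.
prior = {"Bag I": Fraction(1, 2), "Bag II": Fraction(1, 2)}

# Likelihoods P(A | E_i): probability of drawing a white ball from each bag.
like = {"Bag I": Fraction(2, 5),   # 2 white out of 5 balls
        "Bag II": Fraction(4, 9)}  # 4 white out of 9 balls

# Theorem of total probability: P(A) = sum_i P(E_i) P(A | E_i)
P_white = sum(prior[b] * like[b] for b in prior)

# Bayes' formula: posterior P(E_k | A)
posterior = {b: prior[b] * like[b] / P_white for b in prior}
print(P_white)             # 19/45
print(posterior["Bag I"])  # 9/19
```

The posteriors sum to 1, as they must for a partition: 9/19 + 10/19 = 1.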
Random Variables (R.V.)
Numerical quantity assigned to each outcome of a random experiment.
Formal definition: real-valued function X:S\to\mathbb R.
Examples
- Tossing 2 dice → X= sum of upper faces.
- Tossing a coin 50 times → X= number of heads.
- Sampling 4 items from 20 (6 defective) → X= number of defectives.
Example with 2 coin tosses
- S={HH,HT,TH,TT}
- X= number of heads:
- X(HH)=2, X(HT)=1, X(TH)=1, X(TT)=0.
- Y= (number of heads − number of tails):
- Y(HH)=2, Y(HT)=0, Y(TH)=0, Y(TT)=-2.
- Multiple R.V.s can coexist on the same S.
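The two-coin example makes the "real-valued function on S" definition concrete; a sketch representing X and Y as mappings on the same sample space:

```python
# Sample space for two coin tosses.
S = ["HH", "HT", "TH", "TT"]

# Two different random variables defined on the same sample space.
X = {s: s.count("H") for s in S}                 # number of heads
Y = {s: s.count("H") - s.count("T") for s in S}  # heads minus tails

print(X)  # {'HH': 2, 'HT': 1, 'TH': 1, 'TT': 0}
print(Y)  # {'HH': 2, 'HT': 0, 'TH': 0, 'TT': -2}
```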
Probability Distribution of a R.V.
- Provides each possible value x_i of X with its probability P(X=x_i).
- Must satisfy
\sum_i P(X=x_i)=1 and P(X=x_i)\ge0.
Mean (Expectation) of a R.V.
- For values x_1,x_2,\dots,x_n with probabilities p_1,p_2,\dots,p_n:
\mu=E(X)=\sum_{i=1}^n x_i p_i.
Variance & Standard Deviation of a R.V.
- Variance:
\operatorname{Var}(X)=E\big[(X-\mu)^2\big]=E(X^2)-\mu^2.
- Standard deviation:
\sigma=\sqrt{\operatorname{Var}(X)}.
- Smaller \sigma → values clustered near mean.
- Different distributions can share identical means while differing in spread, so the mean alone does not characterise a distribution.
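The mean, variance, and standard deviation formulas can be applied directly to a small distribution. A sketch using X = number of heads in two fair coin tosses (an illustrative choice consistent with the earlier coin example):

```python
from fractions import Fraction
from math import sqrt

# X = number of heads in two fair coin tosses.
xs = [0, 1, 2]
ps = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]
assert sum(ps) == 1  # valid probability distribution

mu = sum(x * p for x, p in zip(xs, ps))              # E(X) = sum x_i p_i
var = sum(x**2 * p for x, p in zip(xs, ps)) - mu**2  # Var(X) = E(X^2) - mu^2
sigma = sqrt(var)                                    # standard deviation

print(mu, var)  # 1 1/2
```

Here μ = 1 and Var(X) = 1/2, so σ = √(1/2) ≈ 0.707.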
Bernoulli Trials
Independent trials with only two outcomes (success/failure) and constant success probability p.
Conditions
- Finite number n of trials.
- Trials independent.
- Exactly two outcomes each trial.
- P(\text{success})=p remains constant.
Example (6 successive draws from urn with 7 red, 9 black)
- (i) With replacement → p=\tfrac{7}{16} unchanged → Bernoulli trials.
- (ii) Without replacement → p changes each draw → not Bernoulli trials.
Binomial Distribution
Describes number of successes X in n Bernoulli trials.
Parameters: n (trials), p (success probability), q=1-p (failure probability).
Probability mass function (PMF):
P(X=x)=\binom{n}{x}p^{\,x}q^{\,n-x},\qquad x=0,1,\dots,n.
Notation: X\sim B(n,p).
Origin: the probabilities P(X=x) are the successive terms of the binomial expansion of (q+p)^n, which is why they sum to 1.
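The PMF translates directly into code via the binomial coefficient. A sketch (the B(3, 1/2) check is an illustrative choice, not from the notes):

```python
from fractions import Fraction
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p): C(n, x) * p^x * q^(n-x)."""
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Illustrative check: X ~ B(3, 1/2) gives P(X = x) = C(3, x) / 8.
n, p = 3, Fraction(1, 2)
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
print([str(f) for f in pmf])  # ['1/8', '3/8', '3/8', '1/8']
print(sum(pmf) == 1)          # True: the terms of (q + p)^n sum to 1
```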