A set of flashcards based on key formulas and distributions covered in the Exam P Master Formula Sheet, focusing on discrete and continuous probability distributions, their properties, and essential statistical concepts.
Binomial Distribution
X ∼ Bin(n, p) PMF: P(X = k) = (n choose k) p^k q^(n−k) where q = 1 − p.
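A quick sanity check of this PMF, sketched with only the Python standard library (the function name is my own):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p): C(n, k) * p^k * q^(n - k), q = 1 - p."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# The PMF must sum to 1 over its support k = 0, ..., n.
total = sum(binom_pmf(k, 10, 0.3) for k in range(11))
```

For example, binom_pmf(2, 4, 0.5) gives C(4, 2) * 0.5^4 = 6/16 = 0.375.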
Poisson Distribution
X ∼ Pois(λ) PMF: P(X = k) = (e^−λ λ^k)/k! where λ is the average rate.
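The Poisson PMF can be checked the same way; this sketch (function name my own) also verifies that the probabilities sum to 1 over a long prefix of the support:

```python
from math import exp, factorial

def pois_pmf(k, lam):
    """P(X = k) for X ~ Pois(lam): e^(-lam) * lam^k / k!."""
    return exp(-lam) * lam**k / factorial(k)

# Support is k = 0, 1, 2, ...; the tail beyond k = 60 is negligible for lam = 2.
total = sum(pois_pmf(k, 2.0) for k in range(60))
```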
Geometric Distribution
X ∼ Geom(p) represents the number of trials until the first success; PMF: P(X = k) = q^(k−1)p.
Negative Binomial Distribution
X ∼ NB(r,p) indicates the number of trials until the r-th success; PMF: P(X = k) = (k-1 choose r-1) p^r q^(k−r).
Hypergeometric Distribution
X ∼ Hyper(N, K, n) represents the number of successes in n draws without replacement from a population of size N containing K successes; PMF: P(X = k) = (K choose k)(N−K choose n−k)/(N choose n).
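The combinatorial PMF P(X = k) = C(K, k) C(N−K, n−k) / C(N, n) can be written directly; this sketch (function name my own) also confirms the known mean E[X] = nK/N on a small example:

```python
from math import comb

def hyper_pmf(k, N, K, n):
    """P(X = k) for X ~ Hyper(N, K, n): C(K, k) * C(N - K, n - k) / C(N, n)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# N = 10, K = 4, n = 3: support is k = 0, ..., 3 and E[X] = n * K / N = 1.2.
mean = sum(k * hyper_pmf(k, 10, 4, 3) for k in range(4))
```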
Bernoulli Distribution
X ∼ Bern(p) represents a single trial with two outcomes; PMF: P(X=1)=p and P(X=0)=q.
Discrete Uniform Distribution
X ∼ DU(a,b) represents a uniform distribution over n = b − a + 1 equally likely outcomes, with PMF P(X = k) = 1/n for k = a, …, b.
Expectation
E[X] is the mean of the random variable. For a linear transformation, E[aX + b] = aE[X] + b.
Variance
Var(X) measures the spread of a distribution. Var(X) = E[X²] − (E[X])².
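The shortcut formula Var(X) = E[X²] − (E[X])² is easy to apply to any finite PMF; a minimal sketch (function name my own), checked against a fair six-sided die, where Var(X) = 91/6 − 3.5² = 35/12:

```python
def variance(pmf):
    """Var(X) = E[X^2] - (E[X])^2 for a finite PMF given as {value: prob}."""
    ex = sum(x * p for x, p in pmf.items())
    ex2 = sum(x * x * p for x, p in pmf.items())
    return ex2 - ex**2

# Fair six-sided die: each face 1..6 has probability 1/6.
die = {x: 1 / 6 for x in range(1, 7)}
```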
Memoryless Property
A distribution is memoryless if P(X > s + t | X > s) = P(X > t). Holds for Geometric and Exponential distributions.
Moment Generating Function (MGF)
The MGF of a random variable X is M_X(t) = E[e^(tX)]; the n-th moment is E[X^n] = M_X^(n)(0), the n-th derivative of the MGF evaluated at t = 0.
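As a numeric illustration (my own example, not from the formula sheet): the Poisson MGF is M(t) = exp(λ(e^t − 1)), and its first derivative at t = 0 should recover E[X] = λ. A central difference approximates that derivative:

```python
from math import exp

def mgf_pois(t, lam):
    """MGF of Pois(lam): M(t) = exp(lam * (e^t - 1))."""
    return exp(lam * (exp(t) - 1))

def first_moment(mgf, h=1e-5):
    """E[X] = M'(0), approximated by a central difference of the MGF at 0."""
    return (mgf(h) - mgf(-h)) / (2 * h)

lam = 3.0
m1 = first_moment(lambda t: mgf_pois(t, lam))  # should be close to lam
```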
Standard Normal Distribution
Z ∼ N(0, 1); any normal variable X ∼ N(µ, σ²) is standardized via Z = (X − µ)/σ.
Exponential Distribution
X ∼ Exp(λ) models time until the next event, with PDF: f(x) = λe^−λx.
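The exponential survival function is P(X > x) = e^(−λx), which makes the memoryless property from the earlier card easy to verify numerically (a sketch; names and parameter values are my own):

```python
from math import exp

def exp_sf(x, lam):
    """Survival function P(X > x) = e^(-lam * x) for X ~ Exp(lam)."""
    return exp(-lam * x)

# Memoryless check: P(X > s + t | X > s) should equal P(X > t).
lam, s, t = 0.5, 2.0, 3.0
lhs = exp_sf(s + t, lam) / exp_sf(s, lam)
rhs = exp_sf(t, lam)
```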
Normal Distribution
X ∼ N(µ, σ²) represents a continuous distribution with bell-shaped curve, characterized by mean (µ) and variance (σ²).
Gamma Distribution
X ∼ Gamma(α, λ) generalizes the exponential; PDF: f(x) = λ^α x^(α−1) e^(−λx)/Γ(α), where α is the shape parameter. For integer α, X is the sum of α independent Exp(λ) variables.
Law of Total Expectation
E[X] = E[E[X | Y]]: the unconditional mean is the average of the conditional means E[X | Y], weighted by the distribution of Y.
Covariance
Cov(X,Y) = E[XY] - E[X]E[Y] measures how two random variables vary together.
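The covariance formula Cov(X,Y) = E[XY] − E[X]E[Y] applies directly to paired samples, treating each pair as equally likely (a sketch; function name my own):

```python
def covariance(xs, ys):
    """Cov(X, Y) = E[XY] - E[X]E[Y] for paired samples, each pair weighted 1/n."""
    n = len(xs)
    ex = sum(xs) / n
    ey = sum(ys) / n
    exy = sum(x * y for x, y in zip(xs, ys)) / n
    return exy - ex * ey
```

For xs = [1, 2, 3] and ys = [2, 4, 6] (so Y = 2X), this gives E[XY] − E[X]E[Y] = 28/3 − 8 = 4/3.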
CLT (Central Limit Theorem)
For iid variables with mean µ and finite variance σ², as the sample size n grows, the distribution of the sample mean X̄ approaches N(µ, σ²/n); equivalently, the standardized mean (X̄ − µ)/(σ/√n) approaches N(0, 1), regardless of the underlying distribution.
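A small simulation illustrates the CLT (a sketch with my own parameter choices): means of n = 30 Uniform(0, 1) draws should cluster near µ = 0.5 with standard deviation σ/√n = √(1/12)/√30 ≈ 0.053:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean_of_uniforms(n):
    """Mean of n iid Uniform(0, 1) draws; Uniform(0, 1) has mean 1/2, variance 1/12."""
    return sum(random.random() for _ in range(n)) / n

# Simulate many sample means; their distribution should look approximately normal.
means = [sample_mean_of_uniforms(30) for _ in range(2000)]
m = statistics.mean(means)   # close to 0.5
s = statistics.stdev(means)  # close to sqrt(1/12) / sqrt(30) ~ 0.053
```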