Flashcards covering point estimation, properties of estimators, methods for finding estimators, and methods for evaluating point estimators.
Point Estimation
A statistical method used to provide a single best guess or estimate of an unknown population parameter based on sample data.
Point Estimator
Any function W(X1, X2, …, Xn) of a sample; that is, any statistic is a point estimator. The value obtained from a point estimator is called a point estimate.
Estimator
A function or rule used to calculate an estimate from the sample; it is denoted by a statistic such as θ̂.
Estimate
The actual computed value obtained from the estimator using sample data.
Sample Mean (X̄)
Estimates the population mean (μ).
Sample Proportion (p̂)
Estimates the population proportion (p).
Sample Variance (s²)
Estimates the population variance (σ²).
Sample Standard Deviation (s)
Estimates the population standard deviation (σ).
Unbiasedness
The expected value of the estimator equals the true parameter value, E(θ̂) = θ.
Consistency
As the sample size increases, the estimator converges in probability to the true parameter value.
Efficiency
The estimator has the smallest possible variance among all unbiased estimators.
Sufficiency
The statistic captures all the information the sample contains about the parameter.
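As a quick illustration of the first two properties, here is a minimal Python sketch (assuming numpy is available; the data and parameters are simulated purely for illustration) checking empirically that the sample mean is unbiased for μ and concentrates around μ as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0          # true (normally unknown) parameters

# Unbiasedness: averaging X-bar over many samples recovers mu.
xbars = rng.normal(mu, sigma, size=(10_000, 30)).mean(axis=1)
print("E[X-bar] approx:", xbars.mean())          # ~ 5.0

# Consistency: X-bar gets closer to mu as the sample size grows.
for n in (10, 100, 10_000):
    xbar = rng.normal(mu, sigma, size=n).mean()
    print(f"n={n:>6}: X-bar = {xbar:.4f}")
```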
Method of Moments Estimators
Estimate parameters by equating sample moments to population moments.
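For a concrete instance, take a Normal(μ, σ²) model, where E[X] = μ and E[X²] = μ² + σ². A minimal sketch (numpy only; simulated data, parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=1.5, size=1_000)   # simulated data

# Equate sample moments to population moments:
#   m1 = mean(x)     = mu
#   m2 = mean(x**2)  = mu**2 + sigma**2
m1 = x.mean()
m2 = (x ** 2).mean()

mu_mom = m1
sigma2_mom = m2 - m1 ** 2   # note: the divide-by-n (biased) variance

print(mu_mom, sigma2_mom)   # ~ 3.0 and ~ 2.25
```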
Maximum Likelihood Estimators (MLE)
Find the parameter value that maximizes the likelihood function based on observed data.
Likelihood Function
L(θ|x) = L(θ1, …, θk | x1, …, xn) = ∏ f(xi | θ1, …, θk), the product running over i = 1, …, n.
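As an example, for an Exponential sample with rate λ the log-likelihood is ℓ(λ) = n log λ − λ Σxi, maximized at λ̂ = 1/x̄. The sketch below (assuming numpy and scipy are installed; data simulated for illustration) verifies the closed form against a numerical optimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 0.7, size=500)   # true rate lambda = 0.7

def neg_log_lik(lam):
    # -log L(lambda | x) for the Exponential(rate=lambda) model
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print("numeric MLE:", res.x)         # matches the closed form below
print("closed form:", 1 / x.mean())
```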
Bayesian Estimators
Use prior distributions and observed data to determine posterior distributions of parameters.
Prior Distribution
A subjective distribution, based on the experimenter's belief, and is formulated before the data are seen.
Posterior Distribution
The prior updated with the sample information: π(θ|x) = f(x|θ)π(θ) / m(x).
Joint Distribution
The joint distribution of X and θ: f(x|θ)π(θ).
Marginal Distribution
m(x) = ∫ f(x|θ)π(θ) dθ if θ is continuous, or m(x) = Σ f(x|θ)π(θ) if θ is discrete.
Conjugate Family
Let F denote the class of pdfs or pmfs f(x|θ), indexed by θ. A class Π of prior distributions is a conjugate family for F if the posterior distribution is in the class Π for all f ∈ F, all priors in Π, and all x ∈ X.
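The Beta family is the standard example: it is conjugate for Bernoulli sampling, so the posterior update reduces to adding counts to the prior parameters. A minimal sketch (numpy only; the prior parameters and data here are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 0.3
x = rng.binomial(1, theta_true, size=100)      # Bernoulli(theta) sample

# Beta(a, b) prior on theta; conjugacy gives the posterior
# Beta(a + #successes, b + #failures).
a, b = 2.0, 2.0
a_post = a + x.sum()
b_post = b + len(x) - x.sum()

# Bayes estimator under squared-error loss: the posterior mean.
print("posterior mean:", a_post / (a_post + b_post))
```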
Mean Squared Error (MSE)
A function of θ defined by MSE(W) = Eθ[(W-θ)²] = Varθ(W) + (Biasθ(W))²
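The decomposition can be checked by simulation. This sketch (numpy; a hypothetical setup with a Normal sample) applies it to the divide-by-n variance estimator, which is biased:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, n = 0.0, 4.0, 10

# Many replications of the divide-by-n variance estimator (biased).
samples = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n))
w = samples.var(axis=1)               # ddof=0: divides by n

mse  = np.mean((w - sigma2) ** 2)
var  = w.var()
bias = w.mean() - sigma2              # theoretical bias: -sigma2/n = -0.4

print(mse, var + bias ** 2)           # agree up to Monte Carlo error
```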
Bias
The difference between the expected value of W and θ: Bias(W) = Eθ[W] - θ
Best Unbiased Estimator
An estimator W* that satisfies EθW* = τ(θ) for all θ and, for any other estimator W with EθW = τ(θ), VarθW* ≤ VarθW for all θ. Also called a uniform minimum variance unbiased estimator (UMVUE).
Cramér-Rao Inequality
Let X1, …, Xn be a sample with pdf f(x|θ), and let W(X) = W(X1, …, Xn) be any estimator satisfying d/dθ EθW(X) = ∫ ∂/∂θ [W(x) f(x|θ)] dx and VarθW(X) < ∞. Then VarθW(X) ≥ (d/dθ EθW(X))² / Eθ((∂/∂θ log f(X|θ))²).
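For Normal(μ, σ²) with σ² known, the per-observation Fisher information for μ is 1/σ², so the bound for an unbiased estimator of μ from n observations is σ²/n, which Var(X̄) attains. A quick Monte Carlo check (numpy; simulated data, parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma2, n = 1.0, 4.0, 25

# Cramer-Rao lower bound for unbiased estimators of mu: sigma^2 / n
crlb = sigma2 / n

# Empirical variance of X-bar over many replications.
xbars = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n)).mean(axis=1)
print("Var(X-bar):", xbars.var())    # ~ 0.16
print("CRLB      :", crlb)           # 0.16: the sample mean attains the bound
```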
Rao-Blackwell Theorem
Let W be any unbiased estimator of τ(θ), and let T be a sufficient statistic for θ. Define φ(T) = E(W|T). Then Eθφ(T) = τ(θ) and Varθφ(T) ≤ VarθW for all θ; that is, φ(T) is a uniformly better unbiased estimator of τ(θ).
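A classic illustration: for Poisson(λ), W = 1{X1 = 0} is unbiased for τ(λ) = e^(−λ), and conditioning on the sufficient statistic T = ΣXi gives φ(T) = E(W|T) = ((n−1)/n)^T, which has much smaller variance. A simulation sketch (numpy; λ and n chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n, reps = 2.0, 20, 100_000

x = rng.poisson(lam, size=(reps, n))
t = x.sum(axis=1)                     # sufficient statistic T = sum(X_i)

w   = (x[:, 0] == 0).astype(float)    # crude unbiased estimator of exp(-lam)
phi = ((n - 1) / n) ** t              # Rao-Blackwellized: E(W | T)

print("target exp(-lam):", np.exp(-lam))
print("means:", w.mean(), phi.mean())   # both ~ 0.135 (unbiased)
print("vars :", w.var(),  phi.var())    # phi has much smaller variance
```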