Lecture Notes on Econometrics and Statistical Theory

A collection of flashcards covering key vocabulary terms and definitions from the lecture notes on econometrics and statistical theory.

28 Terms

1. Conditional Expectation Function (CEF)

A function that gives the expected value of a random variable conditional on the values of other variables; for example, m(x) = E[Y \mid X = x].
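
As a minimal sketch (not from the notes), the CEF of Y given a discrete X can be estimated by averaging Y within each observed value of X; the data-generating process below, with true CEF E[Y | X = x] = 2x, is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: Y depends on a discrete X plus noise,
# so the true CEF is E[Y | X = x] = 2x (an illustrative choice).
x = rng.integers(0, 4, size=100_000)
y = 2 * x + rng.normal(0, 1, size=x.size)

# Estimate the CEF by averaging Y within each observed value of X.
for level in np.unique(x):
    print(f"E[Y | X = {level}] ~= {y[x == level].mean():.3f}  (true: {2 * level})")
```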

2. Mean Squared Error (MSE)

The average of the squares of the errors, which indicates the quality of an estimator or a model. It is given by MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2].
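
A quick Monte Carlo check of this formula, under an assumed setup (normal data with sigma = 2, sample size 30): the simulated MSE of the sample mean should match its theoretical variance sigma^2/n, since the sample mean is unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 5.0, 30, 20_000  # true mean, sample size, Monte Carlo replications

# Draw many samples and record the sample mean of each.
estimates = rng.normal(mu, 2.0, size=(reps, n)).mean(axis=1)

# MSE(theta_hat) = E[(theta_hat - theta)^2], approximated by a Monte Carlo average.
mse = np.mean((estimates - mu) ** 2)
print(f"Simulated MSE: {mse:.4f}   theoretical sigma^2/n = {2.0**2 / n:.4f}")
```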

3. Linearity of Expectation

The property that states E[aX + bY] = aE[X] + bE[Y] for any random variables X and Y, and constants a and b.

4. Poisson Distribution

A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space. Its probability mass function (PMF) is P(X=k) = \frac{\lambda^k e^{-\lambda}}{k!} where \lambda is the average rate of events and k is the number of events.
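
The PMF can be evaluated directly from this formula and checked against scipy.stats.poisson; lambda = 3 is an illustrative value, not one from the notes.

```python
import math
from scipy.stats import poisson

lam = 3.0  # assumed average event rate

# PMF by the formula P(X = k) = lambda^k e^{-lambda} / k!, checked against scipy.
for k in range(6):
    by_hand = lam**k * math.exp(-lam) / math.factorial(k)
    print(f"P(X={k}) = {by_hand:.4f}  scipy: {poisson.pmf(k, lam):.4f}")
```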

5. Variance

A measure of how far a set of numbers is spread out from its average value. It is calculated as Var(X) = E[(X - E[X])^2].
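
Computing the variance straight from this definition agrees with np.var; the draws below, with true variance 9, are an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 3.0, size=100_000)  # draws with true variance 3^2 = 9

# Var(X) = E[(X - E[X])^2], computed directly from the definition.
var_by_def = np.mean((x - x.mean()) ** 2)
print(f"By definition: {var_by_def:.3f}   np.var: {np.var(x):.3f}   true: 9.0")
```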

6. Homoskedasticity

A condition in which the variance of the errors is constant across all levels of the independent variable.

7. Heteroskedasticity

A condition in which the variance of the errors varies across levels of the independent variable.
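
The contrast between the last two terms can be simulated: below, one error series has constant variance while the other's spread grows with x (the 0.3x scale is an assumed, illustrative choice).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50_000)

e_homo = rng.normal(0, 1.0, size=x.size)  # constant error variance
e_hetero = rng.normal(0, 0.3 * x)         # error spread grows with x

# Compare error variance for small vs. large x in each case.
low, high = x < 5, x >= 5
print(f"homoskedastic:   Var(e | x<5) = {e_homo[low].var():.2f},  Var(e | x>=5) = {e_homo[high].var():.2f}")
print(f"heteroskedastic: Var(e | x<5) = {e_hetero[low].var():.2f},  Var(e | x>=5) = {e_hetero[high].var():.2f}")
```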

8. Unbiased Estimator

An estimator that targets the true parameter value on average; E[\hat{\mu}_k] = \mu_k.
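
A small simulation (assumed setup: normal samples of size 20) shows the sample mean is unbiased for mu: averaging the estimate over many replications recovers mu.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 2.5, 20, 50_000

# The sample mean is unbiased: averaging it over many samples recovers mu.
sample_means = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
print(f"E[mu_hat] ~= {sample_means.mean():.4f}   true mu = {mu}")
```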

9. Covariance

A measure of how much two random variables change together. It is given by Cov(X, Y) = E[(X - E[X])(Y - E[Y])].

10. Orthogonality

The property of two random variables having zero cross-moment, E[Xe] = 0; when either variable has mean zero, this is equivalent to X and e being uncorrelated.
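
A sketch of this property in a regression context: OLS residuals are orthogonal to the regressors by construction, so X'e is numerically zero (the data-generating process below is an assumed example).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat  # OLS residuals

# OLS residuals are orthogonal to each regressor in sample: X'e = 0.
print("X'e =", X.T @ e)  # numerically ~0 in every component
```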

11. Linear Predictor

A model that predicts a dependent variable as a linear combination of one or more independent variables, e.g., \hat{Y} = \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k.

12. Statistical Independence

A situation where the occurrence of one event does not affect the probability of the other occurring; for independent events A and B, P(A \cap B) = P(A)P(B).

13. Boundedness Assumption

A condition required for the variance of an estimator to be finite.

14. Joint Probability Distribution

A probability distribution that defines the probabilities of simultaneous outcomes for two or more random variables.

15. Standard Deviation

The square root of the variance, measuring the amount of variation or dispersion of a set of values. It is given by SD(X) = \sqrt{Var(X)}.

16. Expected Value

The weighted average of all possible values a discrete random variable can take on. For a discrete random variable X, it is E[X] = \sum_{i} x_i P(X = x_i).

17. Central Limit Theorem (CLT)

States that the distribution of the sample mean of a sufficiently large number of independent, identically distributed random variables is approximately normal, regardless of the original distribution.
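
A simulation sketch: sample means of skewed exponential draws, once standardized, behave like a standard normal, e.g., about 95% fall within 1.96 standard deviations (n = 200 and exponential data are assumed choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 50_000

# Sample means of exponential draws (a skewed, non-normal distribution).
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# Standardize: under the CLT, z should be approximately N(0, 1).
z = (means - 1.0) / (1.0 / np.sqrt(n))  # exponential(1) has mean 1, sd 1
print(f"P(|z| <= 1.96) ~= {np.mean(np.abs(z) <= 1.96):.3f}  (normal value: 0.950)")
```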

18. Bias of an Estimator

The difference between an estimator's expected value and the true value of the parameter being estimated. It is given by Bias(\hat{\theta}) = E[\hat{\theta}] - \theta.

19. Probability Mass Function (PMF)

A function that gives the probability that a discrete random variable is exactly equal to some value. For a discrete variable X, P(X = x_i) gives the probability of each outcome x_i.

20. Probability Density Function (PDF)

A function that describes the relative likelihood for a continuous random variable to take on a given value. For a continuous variable X, f(x) is the PDF and satisfies \int_{-\infty}^{\infty} f(x) dx = 1.

21. Cumulative Distribution Function (CDF)

A function that gives the probability that a random variable X will take a value less than or equal to x. For a continuous variable, F(x) = P(X \le x) = \int_{-\infty}^{x} f(t) dt. For a discrete variable, F(x) = P(X \le x) = \sum_{t \le x} P(X=t).
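
For the discrete case, the CDF is just the running sum of the PMF; the sketch below checks this against scipy.stats.binom for an assumed Binomial(10, 0.3).

```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3  # assumed number of trials and success probability

# Discrete case: F(x) = sum over t <= x of P(X = t).
pmf = binom.pmf(np.arange(n + 1), n, p)
cdf_by_sum = np.cumsum(pmf)
print(f"F(4) by summing the PMF: {cdf_by_sum[4]:.4f}   scipy: {binom.cdf(4, n, p):.4f}")
```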

22. Law of Large Numbers (LLN)

A theorem stating that, as the sample size grows, the sample mean of a sequence of independent and identically distributed random variables converges to the true expected value of the random variable.
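
A sketch of the LLN with uniform draws (true mean 0.5, an assumed example): the running sample mean settles toward the expected value as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.uniform(0, 1, size=100_000)  # true expected value is 0.5

# Running sample mean: it settles toward E[X] = 0.5 as n grows.
running_mean = np.cumsum(draws) / np.arange(1, draws.size + 1)
for n in (10, 100, 10_000, 100_000):
    print(f"n = {n:>6}: sample mean = {running_mean[n - 1]:.4f}")
```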

23. Normal Distribution (Gaussian Distribution)

A continuous probability distribution characterized by a symmetric bell-shaped curve, where the mean, median, and mode are all equal. Its probability density function is f(x | \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}.

24. Correlation Coefficient

A standardized measure of the linear relationship between two random variables, ranging from -1 to 1. It is given by \rho_{XY} = \frac{Cov(X, Y)}{\sigma_X \sigma_Y}.
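
The coefficient can be assembled from the covariance and standard deviation definitions above and checked against np.corrcoef; the linear dependence y = 0.8x + noise is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)
y = 0.8 * x + rng.normal(size=x.size)  # positively related to x by construction

# rho_XY = Cov(X, Y) / (sigma_X * sigma_Y), built from the earlier definitions.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov_xy / (x.std() * y.std())
print(f"By the formula: {rho:.4f}   np.corrcoef: {np.corrcoef(x, y)[0, 1]:.4f}")
```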

25. Bernoulli Distribution

A discrete probability distribution for a random variable that takes value 1 with success probability p and value 0 with failure probability 1-p. Its PMF is P(X=k) = p^k (1-p)^{1-k} for k \in \{0, 1\}.

26. Binomial Distribution

A discrete probability distribution that models the number of successes in a fixed number of independent Bernoulli trials. Its PMF is P(X=k) = \binom{n}{k} p^k (1-p)^{n-k} where n is the number of trials and p is the success probability.

27. Ordinary Least Squares (OLS)

A method for estimating the unknown parameters in a linear regression model by minimizing the sum of the squares of the differences between the observed dependent variable and those predicted by the linear approximation.
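
A minimal sketch of OLS via the normal equations, \hat{\beta} = (X'X)^{-1}X'y, checked against np.linalg.lstsq; the simulated design with true coefficients (1, 2) is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# Normal equations: beta_hat = (X'X)^{-1} X'y minimizes the sum of squared residuals.
beta_closed_form = np.linalg.solve(X.T @ X, X.T @ y)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print("closed form:", beta_closed_form, " lstsq:", beta_lstsq)
```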

28. R-squared (R^2)

A statistical measure that represents the proportion of the variance in the dependent variable that can be explained by the independent variables in a regression model. It is given by R^2 = 1 - \frac{SS_{res}}{SS_{tot}}, where SS_{res} is the sum of squares of residuals and SS_{tot} is the total sum of squares.
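
Continuing the OLS sketch above, R^2 can be formed directly from the residuals; with an assumed signal variance of 4 and noise variance of 1, the true value is 4/5 = 0.8.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # signal variance 4, noise variance 1

# Fit by OLS, then form R^2 = 1 - SS_res / SS_tot from the residuals.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

ss_res = np.sum(residuals**2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")  # ~0.80 = 4 / (4 + 1)
```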