ECON 2300 master flash cards


Last updated 1:53 AM on 4/20/26

205 Terms

1
New cards

population

complete set of all items that interest an investigator

2
New cards

sample

observed subset, or portion, of a population

sample size denoted as n

3
New cards

parameter

unknown to investigator

numerical measure that describes a specific characteristic of the population

4
New cards

statistic

known to investigator

numerical measure that describes a specific characteristic of the sample.

5
New cards

sampling errors

random differences between sample and population

- cancel out on average

- decrease as sample size grows

6
New cards

nonsampling errors

systematic differences between sample and population

- do not necessarily cancel out on average

- do not necessarily decrease as sample size grows

7
New cards

Margin of error

ME = 1/√n (the conservative margin of error for a sample proportion at the 95% confidence level)
8
New cards

Margin of error notes

p̂ = the sample proportion

interval: p̂ ± ME

only captures sampling error

9
New cards

nonresponse bias

a form of nonsampling error

the bias that arises when some groups in the population are more or less likely than others to respond to the survey

10
New cards

summary statistics

numerical summaries computed from the data; subject to error, so they will not describe the population completely accurately

11
New cards

frequency

number of times that a variable takes a certain value in a sample

12
New cards

relative frequency

proportion of times that a variable takes a certain value in a sample

13
New cards

histogram

plot of (relative) frequency by values

14
New cards

some notes on histograms

bins can change size - sometimes it is necessary to make them smaller, bigger, or of unequal width

bin choice matters: it can substantially change how the histogram looks

15
New cards

histogram R code

hist(data_name, breaks = 20, xlab = "x axis label")  # breaks sets the number of bins

16
New cards

Sample mean

x̄ = (1/n) ∑ᵢ xᵢ - the sum of all observations divided by the sample size n
17
New cards

sample median

a cutoff value that is weakly larger than 50% of the sample

18
New cards

quantile

measure of proportions of data

take any number α between 0 and 1.

− α-quantile = cutoff value that is weakly larger than (100α) % of the sample.

− e.g. median is the 0.5-quantile.
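A minimal R sketch of the quantile command (x here is a made-up vector for illustration, not course data; R's quantile() interpolates between observations by default):

```r
# hypothetical data vector for illustration
x <- c(2, 4, 6, 8, 10)

quantile(x, probs = 0.5)            # the 0.5-quantile, i.e. the median
quantile(x, probs = c(0.25, 0.75))  # the 1st and 3rd quartiles
```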

19
New cards

quantile practice

20
New cards

range

maximum observation - minimum observation

less used, we usually use intervals

21
New cards

range R code

range(handspan)

22
New cards

interquartile range

3rd quartile - 1st quartile

less sensitive to outliers

23
New cards

Variance

essentially the average squared difference from the mean.

− division by n − 1 instead of n

− also sensitive to outliers.

24
New cards

Variance R code

var(data_name)

25
New cards

standard deviation

same information as variance

− but more commonly used, since it has the same unit as the data

26
New cards

standard deviation R code

sd(data_name)

27
New cards

basic R commands

table

hist

mean

median

quantile

range

IQR

boxplot

var

sd
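A quick tour of these commands, sketched on R's built-in mtcars data set (mpg = miles per gallon):

```r
# mtcars ships with base R, so this runs as-is
mean(mtcars$mpg)      # sample mean
median(mtcars$mpg)    # sample median
quantile(mtcars$mpg)  # quartiles
range(mtcars$mpg)     # minimum and maximum
IQR(mtcars$mpg)       # interquartile range
var(mtcars$mpg)       # sample variance (divides by n - 1)
sd(mtcars$mpg)        # sample standard deviation
```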

28
New cards

access a variable in R

use $ symbol → data_set$var_name

e.g. to access the affairs variable in the affairs data set: df_aff$affairs

29
New cards

R: cross tabulate 2 variables

table command!

table(data_set$var_one, data_set$var_two)

table(df_aff$affairs, df_aff$gender)

30
New cards

scatterplot in R code

two variables, use plot function

→ variables: disp (displacement), mpg (miles per gallon)

plot(mtcars$disp, mtcars$mpg)

31
New cards

covariance

sₓᵧ = (1/(n − 1)) ∑ᵢ (xᵢ − x̄)(yᵢ − ȳ)
32
New cards

covariance and linear dependence

Zero ⇒ no linear dependence

Positive ⇒ positive linear dependence

Negative ⇒ negative linear dependence

33
New cards

covariance units

the product of the units of the two variables analyzed

34
New cards

correlation coefficient

rₓᵧ = sₓᵧ / (sₓ · sᵧ) - the covariance divided by the product of the standard deviations
35
New cards

correlation coefficient notes

Centers and standardizes each observation

Bounded between -1 and 1 (later in the course).

36
New cards

correlation and linear dependence

Zero ⇒ no linear dependence

Positive ⇒ positive linear dependence

Negative ⇒ negative linear dependence
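These can be sketched in R using the same mtcars variables as the scatterplot card:

```r
cov(mtcars$disp, mtcars$mpg)  # negative: bigger engines go with fewer mpg
cor(mtcars$disp, mtcars$mpg)  # same sign, but unit-free and bounded in [-1, 1]
```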

37
New cards

if we want to measure dependence among more than 2 variables

we run a regression

38
New cards

linear transformation of sample xi

if yᵢ = a · xᵢ + b for each i, then ȳ = a · x̄ + b, s²ᵧ = a² · s²ₓ, and sᵧ = |a| · sₓ
39
New cards

sample stat vs population parameter table

x̄ ↔ μ (mean), s² ↔ σ² (variance), s ↔ σ (standard deviation), sₓᵧ ↔ σₓᵧ (covariance), rₓᵧ ↔ ρₓᵧ (correlation)
40
New cards

Expectation

the population analogue of the sample mean

for a finite population, it can be computed as the average of the variable over the whole population

41
New cards

law of large numbers

often, as the sample size increases, the sample statistic approximates the corresponding population parameter better and better.

→ the most important result in the foundation of statistics
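One way to see the idea is a small simulation; this sketch uses fair coin flips (rbinom), so the population mean is 0.5:

```r
set.seed(1)  # for reproducibility
flips <- rbinom(100000, size = 1, prob = 0.5)  # 100,000 fair coin flips

mean(flips[1:100])  # sample mean with n = 100: can be noticeably off 0.5
mean(flips)         # sample mean with n = 100,000: much closer to 0.5
```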

42
New cards

central limit theorem

often, when the sample size is sufficiently large, the difference between the sample statistic and the corresponding population parameter approximately follows the normal (Gaussian) distribution.

→ the second most important result in the foundation of statistics
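A minimal R sketch of the theorem with coin flips: if the standardized sample mean is approximately N(0, 1), it should land within ±1.96 about 95% of the time.

```r
set.seed(1)
# 10,000 samples, each the mean of n = 400 fair coin flips
means <- replicate(10000, mean(rbinom(400, size = 1, prob = 0.5)))

# standardize: subtract the population mean, divide by sd of the mean
z <- (means - 0.5) / (0.5 / sqrt(400))

mean(abs(z) <= 1.96)  # roughly 0.95, as the normal approximation predicts
```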

43
New cards

R code after lecture 3

var

sd

read.csv

$

attach

table(x,y)

boxplot(x~y)

plot(x,y)

cov(x,y)

cor(x,y)

44
New cards

random experiment

an experiment whose outcomes are random

45
New cards

basic outcome

the finest-grained, relevant outcome of a random experiment

  • relevant: depends on what you care about in your random experiment.

  • finest-grained: finest categorized, cannot be further divided in a relevant way.

46
New cards

sample space

set of all possible basic outcomes of a random experiment

47
New cards

event

a subset of the sample space.

⇔ equivalently, a set of basic outcomes in the sample space.

48
New cards

Event format

Capital E, e.g. E1, E2

E1 = {HH, HT} - the event that heads is flipped first

49
New cards

Realization

Only one basic outcome ω in the sample space Ω will be realized
event E occurs if ω_realized ∈ E

50
New cards

Union E1 ∪ E2

the event that at least one of E1 and E2 occurs (possibly both)

51
New cards

Intersection of Events: E1 ∩ E2

the event that both E1 and E2 occur

52
New cards

Complement of Event: Ec

Event in which E does not occur

Eᶜ ≡ Ω∖E := {ω ∈ Ω : ω ∉ E}

53
New cards

Probability P represents

A function of events

the probability function assigns a number P(E) to each event E in the sample space, representing the probability of that event

54
New cards

Axiom 1

0 ≤ P (E ) ≤ 1 for Any Event E

basically, the probability of any event must be between 0 and 1

55
New cards

Axiom 2

P (Ω) = 1

the sample space Ω is itself an event (a subset of itself); since some basic outcome always occurs, its probability is 1

56
New cards

Axiom 3 background: mutual exclusivity

2 Events: mutually exclusive if intersection is empty

E1 ∩ E2 = ∅.

Sequence of events: mutually exclusive in the same way

Ei ∩ Ej = ∅, for any i ≠ j.

57
New cards

Axiom 3

If E1…En is a set of mutually exclusive events, then the probability of their union (at least one of them occurring) is the sum of their probabilities: P(E1 ∪ … ∪ En) = P(E1) + … + P(En)

58
New cards

Classical probability

defined for random experiments in which all basic outcomes are (thought to be) equally likely

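For example, with a fair six-sided die, the classical probability of rolling an even number is the count of favorable basic outcomes over the count of all basic outcomes:

```latex
P(\text{even}) = \frac{\#\{2,4,6\}}{\#\Omega} = \frac{3}{6} = \frac{1}{2}
```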
59
New cards

Complement/Logical Negation Rule

P(Ec) = 1 − P(E)

60
New cards

Complement/logical negation proof

by Axiom 3, P(E) + P(Eᶜ) = P(E ∪ Eᶜ) = P(Ω); by Axiom 2, P(Ω) = 1; rearranging gives P(Eᶜ) = 1 − P(E)
61
New cards

Inclusion/Logical Implication Rule

If event A is included in event B, written A ⊆ B, then A logically implies B, and P(A) ≤ P(B)

62
New cards

Events A and B are logically equivalent if

not only A logically implies B,

but also B logically implies A:

then P(A) = P(B)

63
New cards

Union/Logical Addition Rule

P (A ∪ B) = P (A) + P (B) − P (A ∩ B) .

64
New cards

Conditional Probability

for events A and B with P(A) > 0, the probability of B given A is P(B|A) = P(A ∩ B) / P(A)

65
New cards

intuition on conditional probability

in conditional probability we treat A as the new sample space and no longer care about anything outside of it

66
New cards

Conditional Probability Axioms

conditional probabilities satisfy the same three axioms: (i) 0 ≤ P(E|A) ≤ 1 for any event E; (ii) P(A|A) = 1; (iii) for mutually exclusive events, the conditional probability of their union is the sum of their conditional probabilities
67
New cards

Conditional Probability Rules

the probability rules (complement, inclusion, union) continue to hold when every probability is conditioned on the same event A, e.g. P(Eᶜ|A) = 1 − P(E|A)
68
New cards

Exceptions on conditional probability rules

69
New cards

Multiplication Rule Conditional Probability

multiply both sides of the conditional probability formula by the denominator to rearrange: P(A ∩ B) = P(B|A) · P(A) = P(A|B) · P(B)

70
New cards

statistical independence

events A and B are statistically independent if the probability of their intersection equals the product of their probabilities: P(A ∩ B) = P(A) · P(B); equivalently, P(A|B) = P(A) when P(B) > 0

basically, one event occurring tells us nothing about whether the other occurs

71
New cards

pairwise vs mutual independence

Pairwise independence means any two events in a collection are independent, while mutual (or collective) independence requires that all events are jointly independent

mutual independence implies pairwise independence among all events in the group, but not vice versa.

72
New cards

mutual independence example

73
New cards

Law of total probability background - mutually exclusive and collectively exhaustive

a sequence of events E1, …, En is mutually exclusive (Ei ∩ Ej = ∅ for i ≠ j) and collectively exhaustive (E1 ∪ … ∪ En = Ω): exactly one of the events must occur
74
New cards

Law of total probability

If a sequence of events E1, …, En is mutually exclusive and collectively exhaustive, then P(A) = P(A|E1) · P(E1) + … + P(A|En) · P(En): the sum, over the events, of the probability of A given each event, weighted by that event's probability

75
New cards

Bayes Rule

P(A|B) = P(B|A) · P(A) / P(B)
76
New cards

Bayes Rule Proof

start from the multiplication rule, P(A ∩ B) = P(B|A) · P(A) = P(A|B) · P(B), then divide both sides by P(B)

77
New cards

Base Rate Fallacy

forgetting to account for the base rate when forming beliefs

e.g. 1% of the population is infected with a virus, and a test accurately detects it 99.99% of the time - that accuracy is a conditional probability given infection, over just that 1%, so it is not the probability of being infected given a positive test
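A worked illustration with rounder numbers than the card's (assume, hypothetically, a test that is 99% accurate for both infected and healthy people, and 1% prevalence); Bayes' rule gives:

```latex
P(\text{infected}\mid \text{positive})
= \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.01 \times 0.99}
= \frac{0.0099}{0.0198}
= 0.5
```

Despite the 99% accuracy, a positive result only means a 50% chance of infection: the healthy 99% of the population generates as many false positives as the infected 1% generates true positives.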

78
New cards

Random Variable

random variables are functions that map each basic outcome in the sample space to a real number

79
New cards

Random Variable Anatomy

RV - function defined on the sample space

Realization - particular numeric value in R that an RV can take on

Format - {X = x} := {ω ∈ Ω : X (ω) = x} ⊆ Ω represents the event that the random variable X takes the value x

80
New cards

Support

the set of all possible realizations of a RV.

Supp (X ) := {X (ω) : ω ∈ Ω}

81
New cards

discrete RV

support is a discrete (countable) set of values, e.g.

{0, 1, 2}, {..., −2, −1, 0, 1, 2, ...}, N

82
New cards

Continuous

takes on all values in a segment

support is continuous,

− e.g. [0, 1], (0, ∞), R.

83
New cards

Mixed

support neither discrete nor continuous

− e.g. {−2, −1} ∪ [0, 1]

84
New cards

Probability Mass Function

A function of a discrete random variable mapping support of X to real line

Plug in realization x to get probability p(x)

support - p : Supp (X ) → R+

nonnegative real line - R+ := [0, ∞)

pmf - p (x) := P (X = x)

85
New cards

pmf formatting

p(x) = the probability that the random variable X takes on the value x

86
New cards

When writing pmf x

writing down support is also necessary

The pmf is only (explicitly) defined on the support.

Implicitly: outside the support set, all probabilities are zero.

e.g. for the coin flip RV, Supp (X ) = {0, 1}, and you may write:

p (100) = P (X = 100) = 0.

87
New cards

properties of probability mass functions

between 0 and 1

sum of all probabilities over support is 1

88
New cards

proof of ii pmf rule

the events {X = x} for x ∈ Supp(X) are mutually exclusive and collectively exhaustive, so by Axioms 2 and 3, ∑ₓ p(x) = P(Ω) = 1
89
New cards

cumulative distribution function

Function of a random variable X giving cumulative probability: F(x0) is the probability that X takes a value less than or equal to x0

space F : R → R+

function F (x0) := P (X ≤ x0) , for any x0 ∈ R.

Note: here x0 is not required to be inside Supp (X ).

90
New cards

cdf vs pmf graphs

91
New cards

cdf properties

(i) nondecreasing in x0; (ii) F(x0) → 0 as x0 → −∞ and F(x0) → 1 as x0 → +∞; for a discrete RV, F is a step function that jumps by p(x) at each x in the support
92
New cards

in discrete variables, you can obtain the CDF from the pmf by

summing the pmf over the support: F(x0) = sum of p(x) over all x in Supp(X) with x ≤ x0

93
New cards

Expectation

mathematical expectation is like a weighted average of a random variable

also written as μ or μx

94
New cards

weak expectation from pmf

E[X] = ∑ₓ x · p(x), summing over all x in Supp(X)
95
New cards

Function of an RV is

Another RV

96
New cards

more on expectation of a pmf

  1. write the transformation as a function g(x)

  2. Plug each value of Supp(X) into g(X) - this finds the new support, e.g. Supp(Y) = {1, 4}

  3. Multiply each value of g(x) by its respective probability

  4. Add to get the expectation!

97
New cards

General Formula for Functions of RV

X can be a discrete RV

Y = g(X)

  • this means the support of Y is {g(x1), g(x2), ….}

  • several x values can map to the same value of g

pmf of Y: pY(y) = sum of pX(x) over all x with g(x) = y - basically, in English, add up the probabilities of all the x values that g maps to y

98
New cards

2 ways to compute expectation of a function of an RV

  1. Based on the pmf of Y: find the pmf of Y first, then compute E[Y] = ∑ᵧ y · pY(y)

  2. By the Law of the Unconscious Statistician: E[g(X)] = ∑ₓ g(x) · pX(x), i.e. evaluate g(x) at each support point of X and weight by its probability

99
New cards

In general

E[g(X)] ≠ g(E[X])

Expectation of a function doesn’t equal the function of an expectation UNLESS g(X) is linear

100
New cards

linearity of expectation/common expectation transformations

E[aX + b] = a · E[X] + b
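This follows directly from the pmf formula for expectation, using the fact that the probabilities sum to 1:

```latex
E[aX + b] = \sum_{x} (ax + b)\, p(x)
          = a \sum_{x} x\, p(x) + b \sum_{x} p(x)
          = a\, E[X] + b
```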