These flashcards cover key concepts and definitions related to probability, decision-making under uncertainty, and statistical inference.
Probability
A numerical value that measures the likelihood that an event occurs.
Addition Rule
A probability rule that states the probability of the union of two events is the sum of their individual probabilities minus the probability of their intersection.
Multiplication Rule
A probability rule that states the probability of the intersection of two events is the product of the probability of one event and the conditional probability of the other event given the first.
Bayes’ Theorem
A formula used to update probabilities based on new evidence.
Frequentist Interpretation
Assumes that probabilities represent the long-run proportion of times an event occurs over infinitely many identical trials.
Bayesian Interpretation
Assumes that probabilities are subjective beliefs about the relative likelihood of events.
Experiment
A process that leads to one of several outcomes.
Outcome
The result of an experiment.
Sample Space
The set of all possible outcomes of an experiment.
Event
A subset of the sample space.
Mutually Exclusive Events
Events that do not share any common outcomes.
Exhaustive Events
Events that include all outcomes in the sample space.
Venn Diagram
A visual representation of sets and their relationships.
Complement Rule
States that the probability of an event plus the probability of its complement equals 1.
Conditional Probability
The probability of an event given that another event has occurred.
Independent Events
Events where the occurrence of one does not affect the occurrence of the other.
Variance
A measure of the dispersion of a set of values.
Random Variable
A numerical outcome from a probabilistic experiment.
Expected Value
A measure of the central tendency of a random variable.
Discrete Uniform Distribution
A distribution in which each of a finite number of outcomes is equally likely.
Binomial Distribution
Describes the number of successes in a sequence of independent Bernoulli trials.
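The binomial card above can be made concrete with the standard PMF, P(X = k) = C(n, k)·p^k·(1 − p)^(n − k); a minimal Python sketch (the helper name binomial_pmf is ours, not from the cards):

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p): C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of exactly 3 successes in 5 fair-coin trials
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```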
Hypergeometric Distribution
Describes the number of successes in draws from a finite population without replacement.
Poisson Distribution
Describes the number of occurrences of an event over a specified interval of time or space.
Probability Mass Function (PMF)
Function that gives the probability that a discrete random variable is exactly equal to some value.
Probability Density Function (PDF)
Function that describes the relative likelihood of a continuous random variable taking on a given value; probabilities are obtained by integrating it over an interval.
Standard Normal Distribution
A normal distribution with a mean of 0 and a standard deviation of 1.
Sampling Error
The difference between a sample statistic and the corresponding population parameter.
Statistical Inference
The process of drawing conclusions about a population based on sample data.
Central Limit Theorem (CLT)
States that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases.
Unbiased Estimator
An estimator whose expected value equals the true parameter being estimated.
Sampling Distribution
The probability distribution of a statistic across all possible samples of a given size drawn from a population.
Consistency
The property that ensures that as the sample size increases, the sample mean converges in probability to the population mean.
Precision
The property that, as the sample size increases, the sample mean becomes a more reliable (less variable) estimate of the population mean.
Expected Value of a Random Variable
Calculated as the sum of all possible values, each multiplied by the probability of its occurrence.
Standard Error of the Mean
The standard deviation of the sampling distribution of the sample mean.
Quantiles
Values that divide a probability distribution into segments of equal probability.
Joint Probability Table
A table that displays the joint probabilities for all combinations of outcomes of two variables.
Marginal Probability
The probability of a single event occurring without regard to other variables.
Cumulative Distribution Function (CDF)
A function that describes the probability that a random variable takes on a value less than or equal to a specific value.
Hypothesis Testing
A statistical method to determine whether a hypothesis about a population parameter is likely true.
Type I Error
The error of rejecting a true null hypothesis.
Type II Error
The error of failing to reject a false null hypothesis.
Power of a Test
The probability that a test correctly rejects a false null hypothesis.
Probability
A numerical value that measures the likelihood that an event occurs. Formula: P(A) = Number of favorable outcomes / Total number of outcomes
Example Problem: If a die is rolled, what is the probability of rolling a 4?
Solution: There is 1 favorable outcome (rolling a 4) out of 6 total outcomes. Thus, P(4) = 1/6.
Addition Rule
A probability rule that states the probability of the union of two events is the sum of their individual probabilities minus the probability of their intersection. Formula: P(A∪B)=P(A)+P(B)−P(A∩B)
Example Problem: If P(A) = 0.3 and P(B) = 0.5, with P(A ∩ B) = 0.1, what is P(A ∪ B)?
Solution: P(A∪B)=0.3+0.5−0.1=0.7.
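The arithmetic in this card can be checked with a one-line Python helper (the name union_probability is ours):

```python
def union_probability(p_a, p_b, p_a_and_b):
    # Addition rule: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
    return p_a + p_b - p_a_and_b

print(union_probability(0.3, 0.5, 0.1))  # ≈ 0.7
```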
Multiplication Rule
A probability rule that states the probability of the intersection of two events is the product of the probability of one event and the conditional probability of the other event given the first. Formula: P(A∩B)=P(A)×P(B∣A)
Example Problem: If P(A) = 0.4 and P(B | A) = 0.5, what is P(A ∩ B)?
Solution: P(A∩B)=0.4×0.5=0.2.
Conditional Probability
The probability of an event given that another event has occurred. Formula: P(A∣B) = P(A∩B) / P(B)
Example Problem: If P(A ∩ B) = 0.02 and P(B) = 0.1, what is P(A | B)?
Solution: P(A∣B) = 0.02 / 0.1 = 0.2.
Bayes’ Theorem
A formula used to update probabilities based on new evidence. Formula: P(A∣B) = P(B∣A) × P(A) / P(B)
Example Problem: If P(B | A) = 0.9, P(A) = 0.3, and P(B) = 0.5, what is P(A | B)?
Solution: P(A∣B) = (0.9 × 0.3) / 0.5 = 0.54.
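A minimal Python sketch of this Bayes' theorem computation (the helper name bayes is illustrative):

```python
def bayes(p_b_given_a, p_a, p_b):
    # Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
    return p_b_given_a * p_a / p_b

print(bayes(0.9, 0.3, 0.5))  # ≈ 0.54
```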
Law of Total Probability
Relates marginal probabilities to conditional probabilities over a partition of the sample space. Formula: P(B) = ∑ P(B∣A_i)P(A_i), where {A_i} is a partition of the sample space.
Example Problem: If P(B | A1) = 0.2, P(B | A2) = 0.5, P(A1) = 0.4, and P(A2) = 0.6, what is P(B)?
Solution: P(B)=(0.2×0.4)+(0.5×0.6)=0.08+0.30=0.38.
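The law of total probability is a weighted sum over the partition; a short Python sketch (the helper name total_probability is ours):

```python
def total_probability(conditionals, priors):
    # P(B) = sum of P(B | A_i) * P(A_i) over a partition {A_i}
    return sum(c * p for c, p in zip(conditionals, priors))

print(total_probability([0.2, 0.5], [0.4, 0.6]))  # ≈ 0.38
```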
Complement Rule
States that the probability of an event plus the probability of its complement equals 1. Formula: P(A)+P(A′)=1
Example Problem: If P(A) = 0.7, what is P(A')?
Solution: P(A′)=1−0.7=0.3.
Expected Value
A measure of the central tendency of a random variable. Formula: E(X) = ∑ x_i P(x_i), where the x_i are all possible values.
Example Problem: If there are outcomes of 2 with probability 0.3, 4 with probability 0.5, and 6 with probability 0.2, what is E(X)?
Solution: E(X)=(2×0.3)+(4×0.5)+(6×0.2)=0.6+2.0+1.2=3.8.
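The expected-value sum above translates directly to Python (the helper name expected_value is illustrative):

```python
def expected_value(values, probs):
    # E(X) = sum of x_i * P(x_i) over all possible values x_i
    return sum(x * p for x, p in zip(values, probs))

print(expected_value([2, 4, 6], [0.3, 0.5, 0.2]))  # ≈ 3.8
```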
Standard Error of the Mean
The standard deviation of the sampling distribution of the sample mean. Formula: SE = s / √n, where s is the standard deviation and n is the sample size.
Example Problem: If the standard deviation s = 10 and n = 25, what is SE?
Solution: SE = 10 / √25 = 10 / 5 = 2.
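A quick Python check of the standard-error formula (the helper name standard_error is ours):

```python
from math import sqrt

def standard_error(s, n):
    # SE = s / sqrt(n), where s is the sample standard deviation
    return s / sqrt(n)

print(standard_error(10, 25))  # 2.0
```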
Cumulative Distribution Function (CDF)
A function that describes the probability that a random variable takes on a value less than or equal to a specific value. Formula: F(x)=P(X≤x)
Example Problem: If a random variable X is distributed uniformly between 0 and 1, what is F(0.5)?
Solution: For a uniform distribution, F(0.5)=0.5 (50% of the distribution lies below 0.5).
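The uniform CDF in this card can be sketched in a few lines of Python (the helper name uniform_cdf and the interval parameters a, b are ours):

```python
def uniform_cdf(x, a=0.0, b=1.0):
    # F(x) = P(X <= x) for X ~ Uniform(a, b)
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

print(uniform_cdf(0.5))  # 0.5
```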
Power of a Test
The probability that a test correctly rejects a false null hypothesis. Formula: Power = 1 − P(Type II error)
Example Problem: If the probability of a Type II error is 0.2, what is the power of the test?
Solution: Power=1−0.2=0.8.