Rule 1: For any event A, the probability of A is always greater than or equal to 0 and less than or equal to 1
Rule 2: The sum of the probabilities for all possible outcomes in a sample space is always 1
Impossible event: If an event can never occur, its probability is 0
Sure event: If an event must occur every time, its probability is 1
“Odds in favor of an event”: ratio of the probability of the occurrence of an event to the probability of the nonoccurrence of that event.
Complement: the set of all possible outcomes in a sample space that do not lead to the event
Disjoint or mutually exclusive events: events that have no outcome in common. In other words, they cannot occur together.
Union of events A and B: the set of all possible outcomes that lead to at least one of the two events A and B
Intersection of events A and B: the set of all possible outcomes that lead to both events A and B
Conditional event: “A given B” is the set of outcomes for event A, restricted to the outcomes in which B has occurred
Expected value: Computed by multiplying each value of the random variable by its probability and then adding over the sample space
Variance: sum of the product of squared deviation of the values of the variable from the mean and the corresponding probabilities
3 Characteristics of a binomial experiment: a fixed number n of trials; each trial is independent and has only two outcomes (success or failure); the probability of success p is the same for every trial
Binomial probability distribution: P(X = x) = C(n, x) · p^x · q^(n−x), where q = 1 − p
μ = np
σ² = npq
σ = √npq
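A quick sketch (with hypothetical parameters n = 10, p = 0.3) that computes the binomial PMF directly and confirms the shortcut formulas μ = np and σ² = npq:

```python
import math

n, p = 10, 0.3          # hypothetical parameters
q = 1 - p

# Binomial PMF: P(X = x) = C(n, x) * p^x * q^(n - x)
def binom_pmf(x):
    return math.comb(n, x) * p**x * q**(n - x)

# Mean and variance computed term by term over the distribution...
mu = sum(x * binom_pmf(x) for x in range(n + 1))
var = sum((x - mu) ** 2 * binom_pmf(x) for x in range(n + 1))

# ...agree with the shortcut formulas.
print(mu, n * p)        # mu ≈ 3.0
print(var, n * p * q)   # var ≈ 2.1
```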
3 Characteristics of a geometric experiment: trials are repeated until the first success; the trials are independent; the probability of success p is the same for every trial
X = the number of independent trials until the first success
Mean: μ = 1/p
Standard Deviation: σ = √((1/p)(1/p − 1))
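A sketch (with a hypothetical p = 0.25) that evaluates the geometric mean and standard deviation formulas, and checks the mean against a truncated sum over the PMF P(X = k) = q^(k−1) · p:

```python
import math

p = 0.25                 # hypothetical success probability
q = 1 - p

# Mean and standard deviation from the formulas above.
mu = 1 / p
sigma = math.sqrt((1 / p) * (1 / p - 1))

# Check the mean against a (truncated) sum over the geometric PMF.
mu_sum = sum(k * q ** (k - 1) * p for k in range(1, 1000))
print(mu, sigma, mu_sum)   # mu = 4.0, sigma ≈ 3.464, mu_sum ≈ 4.0
```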
Let X be a continuous random variable taking values in the range (a, b)
The area under the density curve over an interval is equal to the probability that X falls in that interval
P(L < X < U) = the area under the curve between L and U, where a ≤ L ≤ U ≤ b
The total probability under the curve = 1
The probability that X takes a specific value is equal to 0, i.e., P(X = x0) = 0
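The “area = probability” idea above can be sketched numerically with a simple example density, the uniform density on a hypothetical interval (a, b) = (0, 10), where the exact answer is (U − L)/(b − a):

```python
# Uniform density on (a, b): constant height 1/(b - a) inside, 0 outside.
a, b = 0.0, 10.0
L, U = 2.0, 5.0

def density(x):
    return 1 / (b - a) if a <= x <= b else 0.0

# Approximate P(L < X < U) as the area between L and U (midpoint Riemann sum).
n = 100_000
width = (U - L) / n
area = sum(density(L + (i + 0.5) * width) for i in range(n)) * width

print(area)   # ≈ 0.3 = (U - L) / (b - a)
```

Note that since single points have zero width, P(X = x₀) contributes no area, consistent with P(X = x₀) = 0.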
Parameter: a numerical measurement describing some characteristic of a population.
Statistic: a numerical measurement describing some characteristic of a sample.
Sampling distribution: the probability distribution of all possible values of a statistic; different samples of the same size from the same population will result in different values of the statistic
Standard error: the standard deviation of the sampling distribution of a statistic.
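A simulation sketch of these last two ideas: draw many samples of the same size from one population, record the sample mean of each, and look at the spread of those means. For the sample mean, the standard error works out to σ/√n (a standard fact, not derived in these notes); the parameters below are hypothetical:

```python
import random
import statistics

random.seed(0)

# Population: standard normal (sigma = 1). Draw many samples of size n
# and record the sample mean of each one.
n, n_samples = 25, 20_000
means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
         for _ in range(n_samples)]

# The standard error is the standard deviation of this sampling
# distribution; for the mean it should be close to sigma / sqrt(n) = 0.2.
se = statistics.stdev(means)
print(se)   # ≈ 0.2
```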