probability model/distribution
describes the possible outcomes of a chance process and the likelihood that each outcome will occur
can use a table or tree diagram
sum of all probabilities must equal 1
every probability is between 0 and 1, inclusive
random variable
takes numerical values that describe the outcomes of a chance process
values of the sample space
expected value/mean of a discrete random variable
an average of the possible outcomes, but a weighted average in which each outcome is weighted by its probability
does not have to be a possible outcome; decimals are fine
calculating expected value of discrete variable
μ_X = E(X) = x₁p₁ + x₂p₂ + x₃p₃ + …
make sure to show at least three terms (e.g., the first two terms and the last term)
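The weighted-average calculation can be sketched in Python; the three-outcome distribution below is hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical discrete distribution (not from the cards):
# X takes values 1, 2, 3 with probabilities 0.2, 0.5, 0.3.
outcomes = [1, 2, 3]
probs = [0.2, 0.5, 0.3]

# μ_X = E(X) = x₁p₁ + x₂p₂ + x₃p₃ + …
mean_x = sum(x * p for x, p in zip(outcomes, probs))
print(mean_x)  # about 2.1 (the mean need not be a possible outcome)
```

Note that 2.1 is not itself a possible outcome, matching the card above: decimals are fine.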
median of a discrete random variable
smallest value for which the cumulative probability equals or exceeds 0.5
variance of a discrete random variable
Var(X) = σ_X² = (x₁ − μ_X)²p₁ + (x₂ − μ_X)²p₂ + (x₃ − μ_X)²p₃ + …
make sure to show at least three terms (e.g., the first two terms and the last term)
standard deviation of a discrete random variable
measures how much the values of a discrete random variable typically vary from the mean
σ_X = √[(x₁ − μ_X)²p₁ + (x₂ − μ_X)²p₂ + (x₃ − μ_X)²p₃ + …]
square root of the variance
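The variance and standard-deviation formulas can be sketched the same way, again using a hypothetical three-outcome distribution:

```python
import math

# Hypothetical distribution: X takes 1, 2, 3 with probs 0.2, 0.5, 0.3.
outcomes = [1, 2, 3]
probs = [0.2, 0.5, 0.3]
mean_x = sum(x * p for x, p in zip(outcomes, probs))  # μ_X

# Var(X) = Σ (xᵢ − μ_X)² pᵢ ; σ_X is its square root.
var_x = sum((x - mean_x) ** 2 * p for x, p in zip(outcomes, probs))
sd_x = math.sqrt(var_x)
```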
adding or subtracting a constant
does not change the shape
does not change the measure of variability
adds/subtracts the constant to/from the measure of center and the location of each point
mean is different, but standard deviation is the same
multiplying or dividing by a constant
does not change the shape
multiplies/divides the measure of center and the location of each point by the constant
multiplies/divides the measure of spread by the constant
both mean and standard deviation change
mean and standard deviation are each multiplied by the constant
variance is multiplied by the constant squared
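Both transformation rules can be checked numerically; the distribution and the constants (adding 5, multiplying by 3) are hypothetical:

```python
import math

outcomes = [1, 2, 3]      # hypothetical values of X
probs = [0.2, 0.5, 0.3]   # hypothetical probabilities

def mean(xs, ps):
    return sum(x * p for x, p in zip(xs, ps))

def sd(xs, ps):
    m = mean(xs, ps)
    return math.sqrt(sum((x - m) ** 2 * p for x, p in zip(xs, ps)))

shifted = [x + 5 for x in outcomes]  # X + 5: mean shifts by 5, sd unchanged
scaled = [3 * x for x in outcomes]   # 3X: mean and sd both triple
```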
interpretation of expected value and standard deviation
On average, the (variable) varies from the mean of “x” by about “y” (units).
independent random variables
when knowing the value of X does not help predict the value of Y
rule for sum of independent variables
mean —> μ_sum = μ_(X+Y) = μ_X + μ_Y
standard deviation —> σ_sum = σ_(X+Y) = √(σ_X² + σ_Y²)
aka square root of the sum of the variances
rule for difference of independent variables
mean —> μ_diff = μ_(X−Y) = μ_X − μ_Y
standard deviation —> σ_diff = σ_(X−Y) = √(σ_X² + σ_Y²)
aka square root of the sum of the variances
still ADDING
only for independent variables
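A minimal sketch of both rules, using made-up means and standard deviations for independent X and Y:

```python
import math

mu_x, sigma_x = 10.0, 3.0   # hypothetical X
mu_y, sigma_y = 4.0, 4.0    # hypothetical Y, independent of X

mu_sum = mu_x + mu_y        # means add
mu_diff = mu_x - mu_y       # means subtract
# Variances ADD for both the sum and the difference:
sigma_sum = math.sqrt(sigma_x**2 + sigma_y**2)
sigma_diff = math.sqrt(sigma_x**2 + sigma_y**2)  # same value as sigma_sum
```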
continuous random variable
can take any value in an interval on the number line
probability for continuous random variable
the area under the density curve and directly above the values on the horizontal axis that make up the event
probability of a single outcome of a continuous variable; P(X = a)
is always 0
probabilities for continuous variables are always computed over a range of values
combining two independent normal variables
any sum or difference of independent normal variables is also normally distributed
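The two continuous-variable ideas above can be combined in one sketch. The normal CDF is written with `math.erf` (a helper defined here, not a formula from the cards), and the parameters of X and Y are hypothetical:

```python
import math

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ Normal(mu, sigma), via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical independent normals: X ~ N(10, 3), Y ~ N(4, 4).
# Their sum S = X + Y is normal with mean 14 and sd sqrt(3² + 4²) = 5.
mu_s = 10 + 4
sigma_s = math.sqrt(3**2 + 4**2)

# P(S = a) is 0 for any single value a; probabilities come from intervals,
# e.g. the area within one sd of the mean:
prob = normal_cdf(19, mu_s, sigma_s) - normal_cdf(9, mu_s, sigma_s)
```

The interval probability comes out near 0.68, the familiar one-standard-deviation area under a normal curve.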
additive transformations
the sum of “y” independent trials of the same variable X
mean —> μ_(X₁+X₂+…+X_y) = y × μ_X
standard deviation —> σ_(X₁+X₂+…+X_y) = √(y × σ_X²)
multiplicative transformations
multiplying one value of X by the constant “y”
mean —> μ_(yX) = y × μ_X
standard deviation —> σ_(yX) = √(y² × σ_X²) = y × σ_X
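The contrast between the two transformations can be sketched with hypothetical numbers (single-trial mean 2.0, sd 0.5, y = 4):

```python
import math

mu_x, sigma_x = 2.0, 0.5   # hypothetical single-trial mean and sd
y = 4                      # hypothetical number of trials / constant

# Additive: sum of y independent copies X₁ + … + X_y.
mu_total = y * mu_x
sigma_total = math.sqrt(y * sigma_x**2)   # variances add

# Multiplicative: one value of X scaled by y.
mu_scaled = y * mu_x                      # same mean as the sum...
sigma_scaled = y * sigma_x                # ...but a larger sd
```

Same mean either way, but the scaled variable is more spread out (sd 2.0 versus 1.0 here).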
combination
counting when order is not considered
permutation
counting when different orders are considered distinct
multiplication rule
if one event can occur in “m” ways, a second event in “n” ways, and a third event in “r” ways, then the three events can occur in “m × n × r” ways
think of a tree diagram
repetition of an Event
if one event with “n” outcomes occurs “r” times with repetition allowed, then the number of ordered arrangements is n^r
calculating permutations
nPr = n!/(n - r)!
n = number of objects
r = number of positions
calculating combinations
nCr = nPr/r! = n!/[r! × (n − r)!]
n = number of objects
r = number of positions
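Python's standard library implements both formulas directly (`math.perm` and `math.comb`, available since Python 3.8); the values n = 5 and r = 2 are arbitrary:

```python
import math

n, r = 5, 2  # hypothetical counts of objects and positions

# nPr = n! / (n − r)!
perms = math.perm(n, r)
assert perms == math.factorial(n) // math.factorial(n - r)

# nCr = nPr / r! = n! / [r!(n − r)!]
combs = math.comb(n, r)
assert combs == perms // math.factorial(r)
```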
dividing by r! removes the r! possible arrangements of the “r” chosen objects
binomial setting
when we perform “n” independent trials of the same chance process and count the number of times that a particular outcome occurs
how to tell if binomial
BINS
B = Binary, can be classified as “success” or “failure”
I = Independent, knowing the outcome of one trial must not tell anything about the outcome of the next
N = Number, the number of trials “n” has already been fixed in advance
S = Same probability, same chance of success of “p” for each trial
binomial random variable
the count of successes X in a binomial setting
x = 0, 1, 2, …, n
calculating binomial probabilities
P(X = k) = nCk × p^k × q^(n−k)
n = number of trials
k = number of successes
p = probability of success
q = 1 - p = probability of failure
nCk = n!/[k! × (n - k)!]
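The formula translates directly into code; the trial count and success probability below are hypothetical:

```python
import math

def binom_pmf(n, k, p):
    # P(X = k) = nCk × p^k × q^(n−k), with q = 1 − p.
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical: 10 trials with p = 0.5; probability of exactly 5 successes.
prob = binom_pmf(10, 5, 0.5)
```

As a sanity check, the probabilities over k = 0, 1, …, n sum to 1, as every probability model must.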
Formula for verifying binomial distribution
“Y” has a binomial distribution with n = a and p(success) = b
do not actually need to write out BINS, but must clarify the “N” and “S” part
convincing evidence
the occurrence of an event that is unlikely (or likely) under the assumed model
very small probability = evidence against the assumption
very large probability = evidence for the assumption
shapes of binomial distribution
can be symmetric or skewed
when p = 0.5, the binomial distribution MUST be symmetric
when p ≠ 0.5, the binomial distribution MUST be skewed
p < 0.5 = right skewed
p > 0.5 = left skewed
mean of a binomial random variable
μ_X = n × p
n = number of trials
p = probability of success
standard deviation of binomial random variable
σ_X = √(npq)
n = number of trials
p = probability of success
q = probability of failure
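Both shortcut formulas in one sketch, with hypothetical n and p:

```python
import math

n, p = 100, 0.2   # hypothetical number of trials and success probability
q = 1 - p

mu = n * p                    # μ_X = np
sigma = math.sqrt(n * p * q)  # σ_X = √(npq)
```

For these numbers μ_X = 20 and σ_X = 4, so counts of successes typically fall within a few units of 20.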
formula for describing binomial distribution
The distribution of (variable name) is a binomial distribution (skewed toward __ / symmetric), with a peak around “a”. On average, the (variable name) differs from the mean of μ_X (units) by about σ_X (units), when looking at “n” (units).
10% condition
n < 0.10N
when taking a random sample of size “n” from a population of size “N”, we can use a binomial distribution to model the count of successes in the sample as long as the sample is less than 10% of the population
the trials are then approximately independent, so the binomial distribution is a good approximation