Public Health Science Exam 2 UIOWA


Last updated 5:07 PM on 3/31/26

50 Terms

1. Intersection

For two events A and B, the intersection A ∩ B represents the event that both A and B occur

2. Complement

For event A, the complement of A represents the event that occurs if A does not occur. It is typically denoted A^c or Ā

3. Properties of Probabilities

• Must be between 0 and 1

• An event which cannot occur has a probability of 0. Such an event is called the NULL EVENT, and is denoted Ø.

• The sum of the probabilities for all of the outcomes in the sample space S must equal 1.

• For any event A, P(A) is the sum of the probabilities for all of the outcomes which comprise A.

• For any event A, P(A^c)= 1-P(A)

• Two events A and B are said to be MUTUALLY EXCLUSIVE if they cannot both occur: i.e., if

A ∩ B = Ø

or, equivalently,

P(A ∩ B) = 0

4. Additive Rule of Probability

If events A and B are mutually exclusive, then

P(A ∪ B) = P(A) + P(B)

If A₁, A₂, ..., Aₖ represent k mutually exclusive events, then

P(A₁ ∪ A₂ ∪ ... ∪ Aₖ) = P(A₁) + P(A₂) + ... + P(Aₖ)

5. Union

For two events A and B, the union A ∪ B represents the event that A or B occurs

Ex: A occurs without B, B occurs without A, or A and B both occur

6. Probability of a General Union

For any two events A and B,

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
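The general union formula can be checked numerically. A minimal Python sketch, using one roll of a fair die as a hypothetical example (A = "even", B = "at least 4"; these events are not from the slides):

```python
from fractions import Fraction

# Sample space for one roll of a fair die; each outcome is equally likely
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # event: the roll is even
B = {4, 5, 6}   # event: the roll is at least 4

def prob(event):
    # Probability of an event = (# outcomes in event) / (# outcomes in S)
    return Fraction(len(event), len(S))

direct = prob(A | B)                       # P(A u B) computed directly
formula = prob(A) + prob(B) - prob(A & B)  # inclusion-exclusion formula
print(direct, formula)  # 2/3 2/3
```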

7. Conditional Probability

Refers to the probability of one event occurring given that another event has already taken place

For events A and B, the conditional probability that B will occur given A has already taken place is denoted by P(B|A)

8. Independent Events

Two events A and B are said to be INDEPENDENT if the occurrence of one event does not alter the probability assignment for the occurrence of the other: i.e., if

P(B|A) = P(B) or P(A|B) = P(A)

The preceding relations are equivalent:

P(B|A) = P(B) holds if and only if P(A|B) = P(A) holds

If two events are not independent, they are said to be DEPENDENT

9. Multiplicative Rule of Probability

• If events A and B are independent, then

P(A ∩ B) = P(A) x P(B)

• For any two events A and B,

P(A ∩ B) = P(A|B) x P(B) = P(B|A) x P(A)

• If A₁, A₂, ..., Aₖ represent k independent events, then

P(A₁ ∩ A₂ ∩ ... ∩ Aₖ) = P(A₁) x P(A₂) x ... x P(Aₖ)

10. Purpose of Screening/Diagnostic Tests

A screening test is used to determine whether an individual is likely, or unlikely to have a particular disease or condition

Often, those who test positive on a screening test are then subjected to further testing to confirm or deny the diagnosis

Ex. Home pregnancy tests, Pap smears, HIV screening tests

11. Sensitivity

The probability of obtaining a positive test result given that the individual has the disease

Sensitivity = P(T+|D)

12. Specificity

The probability of obtaining a negative test result, given that the individual does not have the disease

Specificity = P(T-|D^c)

13. Sensitivity and Specificity both measure

The probability of the test making the correct classification given the disease status

14. False Negative

Occurs when a negative test result is obtained for an individual who has the disease

The probability of a false negative is given by

• P(T-|D)

• 1-P(T+|D)

• 1- Sensitivity

15. False Positive

Occurs when a positive test result is obtained for an individual who does not have the disease

The probability of a false positive is given by

• P(T+|D^c)

• 1 - P(T-|D^c)

• 1- Specificity

16. Positive Predictive Value

The probability that the individual has the disease, given that they had a positive test result

PV+ = P(D|T+)

PV+ = (sens x prev) / [sens x prev + (1-spec) x (1-prev)]

17. Negative Predictive Value

The probability that the individual does not have the disease, given that they had a negative test result

PV- = P(D^c|T-)

PV- = [spec x (1-prev)] / [spec x (1-prev) + (1-sens) x prev]
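The PV+ and PV- formulas can be wrapped in a small Python helper. The sensitivity, specificity, and prevalence values below are hypothetical, not from the slides:

```python
def predictive_values(sens, spec, prev):
    """PV+ and PV- from sensitivity, specificity, and prevalence
    (Bayes' Theorem, as in the formulas on the cards above)."""
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Hypothetical screening test: 95% sensitivity, 90% specificity, 1% prevalence
ppv, npv = predictive_values(sens=0.95, spec=0.90, prev=0.01)
print(round(ppv, 4), round(npv, 4))  # 0.0876 0.9994
```

Note how low PV+ is despite a good test: when prevalence is low, most positives are false positives.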

18. Prevalence

fraction of a population having a specific disease at a given time

19. Law of Total Probability

The probability of an event is the sum, over every possible condition, of its conditional probability times the probability of that condition

P(B) = P(B|A) x P(A) + P(B|A^c) x P(A^c)
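A quick numeric sketch of the rule in Python; the screening-test probabilities used here are hypothetical:

```python
# Hypothetical values: P(D) = 0.01 (prevalence), P(T+|D) = 0.95 (sensitivity),
# and P(T+|D^c) = 0.10 (false positive rate)
p_d = 0.01
p_pos_given_d = 0.95
p_pos_given_not_d = 0.10

# Law of Total Probability: P(T+) = P(T+|D) x P(D) + P(T+|D^c) x P(D^c)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
print(round(p_pos, 4))  # 0.1085
```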

20. Bayes' Theorem

Gives the probability of an event based on other known event probabilities; it lets us "reverse" a conditional probability.

P(A|B) = P(A ∩ B) / P(B)

since

P(A ∩ B) = P(B|A) x P(A)

and

P(B) = P(B|A) x P(A) + P(B|A^c) x P(A^c)

hence,

P(A|B) = [P(B|A) x P(A)] / [P(B|A) x P(A) + P(B|A^c) x P(A^c)]

We see this theorem practiced when we find the positive predictive value and negative predictive value! :) :) :)

21. Random Variable

A numeric variable that assumes a value based on the outcome of a random experiment

Consider a random experiment with sample space S. A function X that assigns to each element s ∈ S one and only one number X(s) = x is called a random variable.

22. Discrete Random Variable

May assume only specific numeric values (often integers)

Ex: # of deaths, # of people with disease

23. Continuous Random Variable

May assume any value over some interval or continuum

Ex: Height, weight

24. Probability Mass Function

a mathematical relation that assigns probabilities to all possible outcomes of a discrete random variable

the probability function of a discrete random variable

1. Defines all possible values of the variable

2. Displays the probabilities with which the random variable takes on those values

3. Can sometimes be described using a formula

For discrete random variables the function p(x) is often referred to as the probability mass function (or pmf)

For each value of the random variable, the pmf gives the probability of that value happening

25. Binomial Coefficient

Tells us how many ways we can choose x objects from a group of n objects without regard to order

nCr in the calculator

equation:

nCx = n! / [x!(n-x)!]

we say "n choose x"

26. The Binomial Distribution

The binomial distribution arises when the following requirements are met:

1. Each trial has only 2 possible outcomes. These outcomes are often referred to as "successes" and "failures"

2. The trials are independent

3. The probability of a success, p, remains the same from trial to trial

This distribution is given by:

p(x) = P(X=x) = nCx * p^x * (1-p)^(n-x)
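The pmf can be computed directly with Python's math.comb; the n, p, and x values below are hypothetical:

```python
from math import comb  # comb(n, x) is the binomial coefficient nCx

def binom_pmf(x, n, p):
    # P(X = x) = nCx * p^x * (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Hypothetical example: exactly 3 successes in 10 trials with p = 0.5
print(binom_pmf(3, 10, 0.5))  # 0.1171875
```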

27. Calculating the Probabilities, Mean, Variance, and Standard Deviation for Binomial Distributions

Binomial equation: p(x) = P(X=x) = nCx * p^x * (1-p)^(n-x)

Number of sequences:

The number of sequences in which we have x successes and (n-x) failures is given by the nCx part of the binomial equation above

Probability:

The probability associated with each sequence of x successes and (n-x) failures is given by the p^x * (1-p)^(n-x) part of the binomial equation above

If Y is distributed Bin (n,p), then:

• the mean of Y, denoted E(Y) for the "expectation of Y," is equal to np.

• the variance of Y, denoted Var(Y) or V(Y), is np(1-p)

•the standard deviation of Y, usually denoted sd(Y) or SD(Y), is the square root of the variance and therefore equals the square root of np(1-p).
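These three formulas in a short Python sketch; the Bin(100, 0.3) example is hypothetical:

```python
from math import sqrt

def binomial_moments(n, p):
    mean = n * p                  # E(Y) = np
    var = n * p * (1 - p)         # Var(Y) = np(1-p)
    return mean, var, sqrt(var)   # SD(Y) = sqrt(np(1-p))

# Hypothetical example: Y ~ Bin(100, 0.3)
mean, var, sd = binomial_moments(100, 0.3)
print(round(mean, 4), round(var, 4), round(sd, 4))  # 30.0 21.0 4.5826
```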

28. Poisson Probability Distribution

The probability distribution of X is the Poisson Distribution and may be denoted as Poisson(λ), Poi(λ), or P(λ)

A discrete probability distribution that applies to occurrences of some event over a specified interval. The random variable x is the number of occurrences of the event in an interval. The interval can be time, distance, area, volume or some similar unit. The probability of the event occurring x times over an interval is given by this formula below:

P(x) = (e^(-λ) * λ^x) / x!

e ≈ 2.71828

λ = the mean number of occurrences of the event in the interval

Requirements:

1. The random variable x is the number of occurrences of an event IN SOME INTERVAL

2. The occurrences must be RANDOM

3. The occurrences must be INDEPENDENT of each other

4. The occurrences must be UNIFORMLY DISTRIBUTED over the interval being used

29. Poisson Process

a process in which events occur continuously and independently at a constant average rate

Characterized by the following criteria:

1. An event occurs repeatedly, at random times, over some interval

2. The expected number of events in an interval is proportional to the length of the interval

3. Within a single interval, an infinite number of occurrences of the event are theoretically possible

4. The events occur independently, both within the same interval and between consecutive intervals

30. Calculating Probabilities, Mean, Variance, and Standard Deviation from a Poisson Distribution

Given a Poisson Process, let X count the number of occurrences of the event of interest over a certain interval.

X is said to be a Poisson Random Variable

• Calculating Probabilities

p(x) = P(X=x) = (e^(-λ) * λ^x) / x!

• Mean:

Assume that the mean number of occurrences over an interval of this length is denoted by λ

The mean of X is given by μ = λ

• Standard Deviation:

The standard deviation of X is given by σ = √λ

• Variance:

The variance of X is given by σ² = λ
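A small Python sketch of these formulas, using a hypothetical rate of λ = 4 occurrences per interval:

```python
from math import exp, factorial, sqrt

def poisson_pmf(x, lam):
    # P(X = x) = e^(-lam) * lam^x / x!
    return exp(-lam) * lam**x / factorial(x)

# Hypothetical rate: lam = 4 occurrences per interval
lam = 4
print(round(poisson_pmf(2, lam), 5))  # P(X = 2) ≈ 0.14653
print(lam, lam, sqrt(lam))            # mean = 4, variance = 4, sd = 2.0
```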

31. When does the Poisson Distribution approximate the Binomial Distribution?

The Poisson Distribution is sometimes used to approximate the Binomial Distribution when n is large and p is small. One rule of thumb is to use such an approximation when the following requirements are BOTH satisfied:

1. n ≥ 100

2. np ≤ 10

If both requirements are satisfied and we want to use the Poisson Distribution as an approximation to the Binomial Distribution, we need a value for μ. That value can be calculated by using the formula below.

μ = np

Thus, in such situations, if

X ~ Bin(n, p), then X is approximately distributed Poisson(np)
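A quick Python check of the approximation, with hypothetical n = 1000 and p = 0.005 (so n ≥ 100 and np = 5 ≤ 10, satisfying both requirements):

```python
from math import comb, exp, factorial

n, p = 1000, 0.005   # hypothetical values meeting the rule of thumb
lam = n * p          # Poisson mean: mu = np = 5

x = 3
binom = comb(n, x) * p**x * (1 - p)**(n - x)   # exact Bin(n, p) probability
pois = exp(-lam) * lam**x / factorial(x)       # Poisson(np) approximation

print(round(binom, 4), round(pois, 4))  # the two values agree closely
assert abs(binom - pois) < 0.005
```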

32. Examples of Poisson Random Variables

• The number of babies delivered in the maternity ward of UIHC on a particular day

• The number of car accidents that occur at a busy intersection in Iowa City during a certain month

• The number of calls received by Quitline Iowa (a statewide smoking cessation telephone counseling hotline) during a specific week

33. The Additive Property of Poisson Distributions

Suppose that X₁ and X₂ are independent random variables, such that X₁ ~ Poisson(λ₁) and X₂ ~ Poisson(λ₂). Then

Y = X₁ + X₂ ~ Poisson (λ₁ + λ₂)

34. Construct, explain, and interpret frequency tables (discrete data)

• Frequency Tables:

Suppose we had the disease stage for 50 cancer patients. We could summarize the data in the following frequency table.

Stage | Frequency
1     | 8
2     | 15
3     | 11
4     | 16
Total | 50

The frequency is simply the number of observations in each category

• Relative Frequency Tables:

Relative Frequency is the proportion (or percentage) of observations in each category

Example:

Stage | Frequency | Relative Frequency
1     | 8         | 0.16
2     | 15        | 0.30
3     | 11        | 0.22
4     | 16        | 0.32
Total | 50        | 1.00
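Building such a table from raw data can be sketched in Python with collections.Counter; the ten patient records below are hypothetical:

```python
from collections import Counter

# Hypothetical raw data: disease stage for 10 patients
stages = [1, 2, 2, 3, 4, 4, 1, 2, 4, 3]

freq = Counter(stages)  # frequency = count of observations per category
n = len(stages)

print("Stage | Frequency | Relative Frequency")
for stage in sorted(freq):
    print(f"{stage:>5} | {freq[stage]:>9} | {freq[stage] / n:>18.2f}")
print(f"Total | {n:>9} | {1.00:>18.2f}")
```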

35. Construct, explain, and interpret frequency tables (continuous data)

• For continuous data we may only have one observation at any given value, so we need to do something a little different

• Consider a sample of 25 cancer patients with the following disease-free survival times (in months):

1, 2, 3, 5, 7, 8, 8, 9, 10, 10, 11, 11, 12, 12, 13, 14, 15, 17, 18, 19, 21, 22, 34, 35, 39.

To summarize these data, we form a set of non-overlapping intervals into which all of the data can be grouped

Int. # | Class Int. | Freq. | Relative Freq.
1      | 0-4        | 3     | 0.12
2      | 5-9        | 5     | 0.20
3      | 10-14      | 8     | 0.32
4      | 15-19      | 4     | 0.16
5      | 20-24      | 2     | 0.08
6      | 25-29      | 0     | 0.00
7      | 30-34      | 1     | 0.04
8      | 35-39      | 2     | 0.08

Frequency: The number of observations in an interval

Relative Frequency: The proportion (or percentage) of observations in an interval
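The binning above can be reproduced in Python using the 25 survival times from the card:

```python
# The 25 disease-free survival times (months) from the card
times = [1, 2, 3, 5, 7, 8, 8, 9, 10, 10, 11, 11, 12, 12, 13,
         14, 15, 17, 18, 19, 21, 22, 34, 35, 39]

# Non-overlapping class intervals of width 5: 0-4, 5-9, ..., 35-39
intervals = [(lo, lo + 4) for lo in range(0, 40, 5)]
freqs = [sum(lo <= t <= hi for t in times) for lo, hi in intervals]
rel_freqs = [f / len(times) for f in freqs]

print(freqs)  # [3, 5, 8, 4, 2, 0, 1, 2]
```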

36. Cumulative Relative Frequency

the sum of the relative frequencies up through, and including, the category of interest

Added as another column in the frequency table

37. Histograms

• A histogram is a graphical representation of a frequency distribution for discrete or continuous variables

• They are similar to bar graphs, except the bars touch each other

• It is the AREA of the bar that reflects the relative frequency, rather than the height of the bar

• If the bars are the same width, the height can be the frequency and the area will be appropriate too

38. Constructing a Histogram

1 • Construct the (relative) frequency distribution using non-overlapping intervals of equal width

2 • Create horizontal and vertical axes for the graph

- Label the vertical axis so as to accommodate all of the numbers in the (relative) frequency distribution

- The vertical scale should begin at zero

- Label the horizontal axis with the endpoints for the intervals, or with the interval numbers

3 • Construct a rectangle over each interval, with the height representing the frequency (relative frequency) for that interval

* See examples of histograms on PowerPoint!!

39. Probability Density Function

A probability density function of a continuous random variable is a function which satisfies:

1 • The probability that X falls between values a and b, that is P(a ≤ X ≤ b), is equal to the area under the curve between a and b

2 • The function always takes on values greater than or equal to zero

3 • The total area under the curve is equal to 1

In general, calculation of probabilities for continuous random variables requires calculus

For a continuous random variable, the probability of the random variable assuming any specific value is zero

*** See week 8 Wednesday PowerPoint for examples of probability density function graphs

40. Differences between probability mass and density functions

A probability mass function applies to a DISCRETE random variable: p(x) = P(X = x) gives the probability of each possible value directly, and those probabilities sum to 1.

A probability density function applies to a CONTINUOUS random variable: probabilities are AREAS under the density curve, so P(X = x) = 0 for any single value and probabilities are only assigned to intervals.

41. The Standard Normal Distribution

• The normal (or Gaussian) distribution is the most common probability distribution for continuous random variables

• The density curve for the normal distribution is called the Normal Curve or the Bell Curve!

• Has wide applicability to many different types of data, and often arises in nature :)

• Widely used for theoretical properties of many statistical methods

• The distribution depends on ONLY TWO PARAMETERS! :) the mean (μ), and the standard deviation (σ), (or Variance, σ²)

• The STANDARD normal distribution is the normal distribution with mean μ = 0 and standard deviation σ = 1

• A standard normal random variable is generally denoted as Z; any normal random variable X can be standardized via

Z = (X - μ) / σ

• That is, Z ~ N(0, 1)

42. Computing Probabilities for Standard Normal Variables

• The area between a and b under the standard normal density curve provides the probability that Z will assume a value over the interval (a, b) : P(a < Z < b).

43. Common Critical Values and Percentiles for Standard Normal Curves

P(Z > 1.96) = 0.025

P(Z < -1.96) = 0.025

P(|Z| > 1.96) = 0.05

P(Z > 1.645) = 0.05

P(Z > 1.28) = 0.10
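These tail probabilities can be verified with Python's statistics.NormalDist:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mu = 0, sigma = 1

# Upper-tail probabilities for the common critical values
print(round(1 - Z.cdf(1.96), 3))                   # P(Z > 1.96)  -> 0.025
print(round(1 - Z.cdf(1.645), 3))                  # P(Z > 1.645) -> 0.05
print(round(1 - Z.cdf(1.28), 3))                   # P(Z > 1.28)  -> 0.1
print(round((1 - Z.cdf(1.96)) + Z.cdf(-1.96), 3))  # P(|Z| > 1.96) -> 0.05
```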

44. Calculating Standardized Values from a General Normal Distribution

Z = (X - μ) / σ

Let X be a normal random variable with mean μ and standard deviation σ

X ~ N(μ, σ²)

Suppose X ~ N(5, 10²)

To solve for P( X > 24.6), we would transform the probability statement as follows:

P(X > 24.6) = P( (X - 5)/10 > (24.6 - 5)/10 )

= P(Z > 19.6/10)

= P(Z > 1.96)
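The same calculation in Python, checking that the direct and standardized routes agree:

```python
from statistics import NormalDist

# X ~ N(5, 10^2); note NormalDist takes the standard deviation, not the variance
X = NormalDist(mu=5, sigma=10)
Z = NormalDist()

p_direct = 1 - X.cdf(24.6)                   # P(X > 24.6) computed directly
p_standardized = 1 - Z.cdf((24.6 - 5) / 10)  # = P(Z > 1.96) after standardizing

print(round(p_direct, 4), round(p_standardized, 4))  # 0.025 0.025
```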

45. Inferential Statistics

Deals with methods for making generalizations about a population based on information contained in the sample

46. Sampling Distribution

• A probability distribution of a statistic obtained through a large number of samples drawn from a specific population

• The sampling distribution reflects which values of the statistic are likely and which values are improbable

• The behavior of a statistic in repeated sampling helps us to characterize the properties of the statistic as an estimator of a parameter

One can view x̅ as a numeric variable that assumes a value based on the outcome of a random experiment: i.e., the process of drawing a sample at random from the population

x̅ is therefore a random variable. It will vary from sample to sample!

• The mean of a statistic is often called the expected value, for example, E(x̅) and the standard deviation of a statistic is often called the standard error, the square root of Var(x̅)

SO we can find E(x̅) from the sampling distribution: multiply each possible value of x̅ by its probability, then add those products up (slide 12 on week 9 mon PowerPoint)

Because E(x̅) = μ, we call x̅ an unbiased estimator of μ

47. Calculate the Expected Value and Standard Error of the Sample Mean

• The mean of a statistic is often called the expected value, for example, E(x̅) and the standard deviation of a statistic is often called the standard error, the square root of Var(x̅)

To find the standard error of the sample mean:

SE(x̅) = σ / √n

SO we can find E(x̅) from the sampling distribution: multiply each possible value of x̅ by its probability, then add those products up (slide 12 on week 9 mon PowerPoint)

48. Central Limit Theorem

• The theory that, as sample size increases, the distribution of sample means of size n, randomly selected, approaches a normal distribution.

(the larger the sample size, the better the normal approximation! This is why it's important to understand the behavior of a statistic in repeated sampling)

A "large" sample is generally considered to be one where n ≥ 30.

Z = (x̅ - μ) / (σ/√n)

Requirements :

• Observations must be independently drawn from and be representative of the population

• The Central Limit Theorem applies to the sampling distribution of the mean, not necessarily to the sampling distribution of other statistics
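A hypothetical worked example in Python (the population μ = 100, σ = 15 and sample size n = 36 are made-up values, not from the slides):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical population: mu = 100, sigma = 15; sample size n = 36 (n >= 30)
mu, sigma, n = 100, 15, 36
se = sigma / sqrt(n)  # standard error of the sample mean = 2.5

# By the CLT, x-bar is approximately N(mu, se^2), so
# P(x-bar > 105) = P(Z > (105 - mu)/se)
z = (105 - mu) / se
p = 1 - NormalDist().cdf(z)
print(z, round(p, 4))  # 2.0 0.0228
```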

49. General Concept of a Confidence Interval

A confidence interval is built from the probability statement

P(-Zα/2 < Z < Zα/2) = 1 - α

If we have a 95% confidence interval it tells us that we are 95% confident that the interval contains μ because the procedure used to construct this interval produces a correct interval estimate 95% of the time

50. Calculate and Interpret a 95% Confidence Interval for a Sample Mean When Given the Population Variance

x̅ ± Zα/2 (σ / √n)

where Zα/2 is the critical value (1.96 for a 95% confidence interval)
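A minimal Python sketch of this interval, using hypothetical sample values:

```python
from math import sqrt
from statistics import NormalDist

def confidence_interval(xbar, sigma, n, level=0.95):
    """Interval for the mean when the population sd sigma is known."""
    alpha = 1 - level
    z = NormalDist().inv_cdf(1 - alpha / 2)  # critical value; ~1.96 for 95%
    margin = z * sigma / sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: mean 50, known population sd 10, n = 25
lo, hi = confidence_interval(50, 10, 25)
print(round(lo, 2), round(hi, 2))  # 46.08 53.92
```

Interpretation, per the card above: the procedure that produced (46.08, 53.92) captures μ in 95% of repeated samples.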