
1

Reliability

Reliability estimates evaluate the stability of measures, the internal consistency of measurement instruments, and the interrater reliability of instrument scores


2

Validity

the extent to which the interpretations of the results of a test are warranted, which depends on the particular use the test is intended to serve.


3

Reliability estimates

used to evaluate:

the stability of measures administered at different times to the same individuals or using the same standard.

the equivalence of sets of items from the same test, or of different observers scoring a behaviour or event using the same instrument.


4

Reliability coefficients

0.00 to 1.00

higher levels indicate higher reliability


5

Stability

(test-retest reliability)

administering a test at 2 different points in time to the same individuals and determining the correlation or strength of association.


6

Internal Consistency

gives an estimate of the equivalence of sets of items from the same test.

Cronbach's alpha is the most widely used measure of internal consistency
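As a rough sketch of the standard Cronbach's alpha formula, alpha = k/(k-1) × (1 − Σ item variances / variance of total scores), using hypothetical item scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: k lists, each one item's scores across the same respondents."""
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# hypothetical 3-item scale answered by 4 respondents
items = [[2, 4, 3, 5],
         [3, 5, 3, 4],
         [2, 5, 4, 5]]
print(round(cronbach_alpha(items), 2))   # 0.91: high internal consistency
```

Alpha near 1.00 suggests the items hang together as one scale; the data above are made up for illustration.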


7

Interrater Reliability

Establishes the equivalence of ratings obtained with an instrument when used by different observers.

Cohen's kappa is commonly used
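A minimal sketch of Cohen's kappa, kappa = (observed agreement − chance agreement) / (1 − chance agreement), with hypothetical ratings from two observers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n      # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# hypothetical ratings of 6 subjects by two observers
a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))   # 0.33
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance.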


8

Exposure Measurement (4 doses)

Available dose:

cumulative vs current

Administered dose:

the amount that comes into contact with the body

Absorbed dose:

the amount that enters the body

Active dose:

the amount that actually affects the target organ


9

Ratio

relationship between 2 numbers

numerator:

**NOT NECESSARILY INCLUDED** in the denominator, e.g. the (binary) sex ratio


10

Proportion

relationship between 2 numbers

numerator:

**HAS TO BE INCLUDED** in the denominator; a proportion always ranges between 0 and 1


11

Calculating the odds

probability of an event occurring relative to it not occurring

e.g. In a population of 100, 25 are diabetic. What are the odds of being diabetic?

25/75 = 0.33
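The card's arithmetic, as a one-line check (odds compare the event count to the non-event count, not to the total):

```python
population, diabetic = 100, 25
odds = diabetic / (population - diabetic)   # event vs non-event, not event vs total
print(round(odds, 2))   # 0.33
```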


12

Calculating the rate

speed of occurrence of an event over time

numerator: # of events observed for a given time

denominator: population in which the events occur

e.g. 2 events in 100 people: 2/100 = 0.02


13

Measuring Prevalence

prevalence rate: the proportion of the population that has a given disease or other attribute at a specified time

2 types:

point prevalence rate

Period prevalence rate


14

Point Prevalence rate

PR: # with disease at specific time / population at same time


15

Incidence rate

the proportion of the population at risk that develops a given disease or other attribute during a specific time period.

IR: # new events during specified time period / population at risk
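The two formulas side by side, with a hypothetical cohort (the counts are made up); note that the incidence denominator excludes people who already have the disease:

```python
# hypothetical cohort of 1,000 people followed for one year
population = 1000
existing_cases = 50   # already diseased at the start (a point in time)
new_cases = 20        # develop the disease during the year

point_prevalence = existing_cases / population    # 0.05, i.e. 50 per 1,000
at_risk = population - existing_cases             # exclude the already diseased
incidence_rate = new_cases / at_risk
print(point_prevalence, round(incidence_rate, 3))
```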


16

Incidence vs prevalence

**Incidence:**

measures frequency of disease onset

what is new

**Prevalence**

measures population disease status

what exists

all may be expressed in any power of 10

per 100, 1,000, 10,000


17

Relative risk

tells us how many times as likely it is that someone who is ‘exposed’ to something will experience a particular health outcome compared to someone who is not exposed

Tells us about the __strength of an association__

Can be calculated using any measure of disease occurrence:

**Prevalence, Incidence rate**


18

Calculation of relative risk
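A sketch of the usual 2×2-table calculation, RR = risk in exposed / risk in unexposed, with a hypothetical table (the counts are made up):

```python
# hypothetical 2x2 table:
#              disease   no disease
# exposed        a=30       b=70
# unexposed      c=10       d=90
a, b, c, d = 30, 70, 10, 90

risk_exposed = a / (a + b)        # 30/100 = 0.30
risk_unexposed = c / (c + d)      # 10/100 = 0.10
relative_risk = risk_exposed / risk_unexposed
print(round(relative_risk, 2))    # 3.0: exposed are 3x as likely to have the outcome
```

RR = 1 is the null value (no association); RR > 1 suggests increased risk with exposure.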


19

Random error

error due to chance


20

systematic error

error due to a consistent, non-random source (bias)

*a study can have both random and systematic error


21

precision vs accuracy

A measurement scale/tool with high precision is reliable; one with high accuracy is valid


22

High precision is _____. High accuracy is _____

reliable; valid


23

Insufficient Precision

could be:

The measurement tool is not precise enough (a ruler marked in centimetres is not precise enough when meaningful differences are in millimetres)

Two (independent) interviewers rate the same person differently using the same scale (inadequate training?)

The same interviewer rates the same person differently


24

sources of measurement error

interviewer or observer

record abstracting (random error)

biased overestimation or underestimation

Participants

recall

random or systematic


25

Misclassification of participants

2 types:

Non-differential (the same in all study groups)

Usually weakens associations – i.e. brings effect estimates (RR, OR, AR) closer to the null value (but not always)

Differential (different in different study groups)

Effect estimates may change in any direction, depending on the particular error


26

Non-differential Misclassification of 10%

10% of all exposed cases and exposed controls are misclassified as unexposed & vice versa (10% of unexposed cases and unexposed controls are misclassified as exposed)

brings OR, RR, AR closer to the null
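A small worked sketch of this effect in a case-control setting, using a hypothetical 2×2 table: swapping 10% of each exposure group in both cases and controls pulls the odds ratio toward the null (1.0).

```python
def odds_ratio(a, b, c, d):
    """a: exposed cases, b: exposed controls,
       c: unexposed cases, d: unexposed controls."""
    return (a * d) / (b * c)

# hypothetical true 2x2 table
a, b, c, d = 60, 40, 40, 60
true_or = odds_ratio(a, b, c, d)          # 2.25

# 10% non-differential misclassification: move 10% of each exposure
# group to the other, in cases and controls alike
a2 = a - a * 10 // 100 + c * 10 // 100    # exposed cases: 60 - 6 + 4 = 58
c2 = c - c * 10 // 100 + a * 10 // 100    # unexposed cases: 42
b2 = b - b * 10 // 100 + d * 10 // 100    # exposed controls: 42
d2 = d - d * 10 // 100 + b * 10 // 100    # unexposed controls: 58
biased_or = odds_ratio(a2, b2, c2, d2)    # ~1.91, pulled toward the null
print(true_or, round(biased_or, 2))
```

The same random misclassification in all groups blurs the exposure contrast, so the observed association is weaker than the true one.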


27

Differential Misclassification

20% of unexposed cases (but not controls) are misclassified as exposed


28

Reducing measurement error

Little (or nothing) can be done to fix information bias once it has occurred.

Information bias must be avoided through careful study design and conduct. Information bias cannot be “controlled” in the analysis.


29

Case control vs cohort

Case control: cases vs controls

Cohort: exposed vs unexposed
