Heisenberg principle
The idea that the system and practice of measurement exerts a psychological influence on the people being measured, and thereby affects operating results.
Hierarchy Rule
under the Uniform Crime Reports (UCR), only the most serious crime in a multiple-offense incident is reported
conceptualization
the process by which we identify what we mean by a concept; the product is a specific, agreed-upon meaning for that concept
concept
construct derived by mutual agreement from mental images (conceptions)
operationalization
the process of developing an operational definition
operational definition
definition in terms of specific operations, measurement instruments, or procedures
reliability
requires that the indicator gives the same result each time the same thing is measured
4 types of reliability
- stability: reliability across time (e.g., test-retest method, parallel-forms method)
- representative reliability: reliability across different subpopulations or groups
- internal consistency reliability (split-half method)
- equivalence reliability (intercoder/interrater reliability)
how to improve reliability
- conceptualize clearly
- increase the level of measurement
- use multiple indicators
- use pretests and pilot studies
- use established measures
- train research workers/interviewers
validity
the extent to which a measure actually reflects the real meaning of the concept it is supposed to measure (we can never be sure about validity, but we can identify measures that are more valid than others)
Types of validity
- face validity
- criterion related validity
- concurrent validity
- predictive validity
- convergent validity
- construct validity
face validity
does the measure make sense?
criterion related validity
compare the results of the measure to some trustworthy alternative measure
concurrent validity
indicator must be associated with a preexisting indicator that is judged to be valid
predictive validity
where an indicator predicts future events that are logically related to a construct
convergent validity
do all of the indicators operate in a similar fashion?
construct validity
based on the logical relationships among variables
threats to internal validity
history
maturation
instrumentation
statistical regression
selection bias
experimental mortality
causal time order
threats to external validity
generalizability
construct validity
statistical conclusion validity
2 qualities of variables. (response categories)
- exhaustive (must be able to classify every observation in terms of one of the attributes)
- mutually exclusive (must be able to classify every observation in terms of one and ONLY one attribute)
generalizability
Extent to which research results apply to a range of individuals not included in the study. (external validity)
internal validity
A measure of the trustworthiness of a sample of data. Internal validity looks at the subject, testing, and environment in which the data collection took place.
external validity
Degree to which results of an experiment can be applied to real-life situations.
levels of measurement: precision
- nominal
- ordinal
- interval
- ratio
Levels of measurement: Type
- continuous variables: have an infinite number of values along a continuum (can be divided into ever smaller increments)
- discrete variables: have a relatively fixed set of separate values (distinct categories)
Nominal Measures
indicate only that there is a difference between categories (classification)
Ordinal measures
indicate only that there is a difference plus the categories can be ordered or ranked (classification and rank order)
interval measures
indicate categorical differences that can be rank ordered, but they also specify the distance between categories (classification, rank order, and equal intervals)
Ratio measures
include everything in the previous 3 levels, plus there is a true zero, which makes proportions/ratios possible (classification, rank order, equal intervals, and a true, nonarbitrary zero)
dichotomous variables
limited to two values (e.g. sex, outcome of a trial)
spuriousness
X appears to cause Y, but a third factor Z actually causes both X and Y. Unlikely in experiments with random assignment.
control variables
Variables in an experiment that are kept the same throughout the experiment.
random assignment
Assigning participants to experimental and control conditions by chance, thus minimizing preexisting differences between those assigned to the different groups.
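The idea of random assignment can be sketched in a few lines of Python (a minimal illustration, not from the source; names are illustrative):

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into experimental and control groups."""
    rng = random.Random(seed)
    shuffled = participants[:]   # copy so the original roster is untouched
    rng.shuffle(shuffled)        # chance ordering minimizes preexisting differences
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

experimental, control = random_assignment(list(range(20)), seed=1)
```

Because only chance determines group membership, any preexisting differences between groups are due to sampling error rather than selection bias.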
different classic experimental designs
double blind
post test only
factorial
Solomon four group design
3 components of classical experiment
Research design with three components: pre- and posttests; experimental and control groups; random assignment to groups
EPSEM
Equal Probability of Selection Method: everyone in the population has the same chance of being selected
3 elements of causality
temporal order, concomitant variation, and non-spurious association
compensation
A conscious or subconscious overemphasis of a characteristic to offset a real or imagined deficiency; it involves substituting a strength for a weakness.
3 types of policing
watchman
service
legalistic
quasi experimental designs
(no randomization)
- non-equivalent group designs (matching; experimental and control groups are formed without randomization, so selection bias is a threat to internal validity)
- time series designs (measurements taken over time: interrupted time series; interrupted time series with matching; interrupted time series with removed treatment; interrupted time series with switching replications)
Probability samples
- Homogeneous (same) populations vs. heterogeneous (different) populations
- 2 important elements - (representativeness and randomness)
- every person has an equal or known chance of selection
- no selection bias when using probability sampling technique
sample bias
systematic differences between the sample and the population due to sampling procedures
sampling error
random error due to the fact that the entire population was not sampled
confidence interval (CI)
range of values within which parameter is estimated to lie
Confidence level (CL)
- estimated probability that a parameter lies within a given CI
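A worked sketch of a confidence interval for a sample mean, using the normal approximation (z = 1.96 for a 95% confidence level; the data and function name are illustrative):

```python
import math

def confidence_interval(data, z=1.96):
    """95% CI for the mean, using the normal approximation."""
    n = len(data)
    mean = sum(data) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    margin = z * sd / math.sqrt(n)   # margin of error
    return mean - margin, mean + margin

low, high = confidence_interval([4, 5, 6, 5, 4, 6, 5, 5])
```

The interval is centered on the sample mean; the confidence level (here 95%) is the estimated probability that the parameter falls inside it.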
selection bias
Errors in the selection and placement of subjects into groups that result in differences between groups, which could affect the results of an experiment.
sampling distribution
theoretical distribution of a statistic that would occur if we were to draw an infinite number of same-sized samples
sampling frame
list of elements in our population.
problems with sampling frames: ineligibles, inaccurate information, missing information, duplicates
sampling
process of selecting observations/ elements
2 reasons sampling is used
- to generalize to a population of interest
- less expensive and more effective than a census; used to represent the target population
sample unit/ element
unit that provides the basis of analysis
probability sampling techniques: simple random sample
A sample selected in such a way that every element in the population or sampling frame has an equal probability of being chosen. Equivalently, all samples of size n have an equal chance of being selected.
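A minimal Python sketch of a simple random sample (illustrative only; the frame here is just the numbers 1-100):

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n elements so every element has an equal probability of selection."""
    return random.Random(seed).sample(frame, n)  # sampling without replacement

frame = list(range(1, 101))   # sampling frame of 100 elements
sample = simple_random_sample(frame, 10, seed=42)
```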
probability sampling techniques: systematic random sampling
Members of the sample are chosen at a fixed interval from the sampling frame, usually after a random start. (For example, every tenth person on a list is chosen.)
probability sampling techniques: stratified sampling
(gives greater degree of representativeness)
A type of probability sampling in which the population is divided into groups with a common attribute and a random sample is chosen within each group
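A minimal sketch of stratified sampling, assuming a population of ("stratum", id) tuples (names and data are illustrative):

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=None):
    """Divide the population into strata, then draw a random sample within each."""
    rng = random.Random(seed)
    strata = {}
    for element in population:
        strata.setdefault(strata_key(element), []).append(element)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, per_stratum))  # random within each stratum
    return sample

people = [("A", i) for i in range(50)] + [("B", i) for i in range(50)]
sample = stratified_sample(people, strata_key=lambda p: p[0], per_stratum=5, seed=3)
```

Because each stratum is guaranteed its share of the sample, the result is more representative with respect to the stratifying attribute than a simple random sample of the same size.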
probability sampling techniques: multi-stage cluster sampling
(very efficient)
Divide population into large clusters and randomly sample clusters. Then randomly sample smaller clusters within those large clusters. Then, if needed, sample again from those clusters and continue until the appropriate number of participants is chosen.
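A two-stage version of the procedure above, sketched in Python (the "blocks" and "households" framing is an illustrative assumption):

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster, seed=None):
    """Stage 1: randomly pick clusters; stage 2: randomly sample within each."""
    rng = random.Random(seed)
    chosen = rng.sample(clusters, n_clusters)  # random sample of clusters
    sample = []
    for cluster in chosen:
        sample.extend(rng.sample(cluster, n_per_cluster))  # sample within cluster
    return sample

# e.g. 10 "city blocks" of 20 "households" each, as (block, household) tuples
blocks = [[(b, h) for h in range(20)] for b in range(10)]
sample = cluster_sample(blocks, n_clusters=3, n_per_cluster=4, seed=5)
```

Efficiency comes from never needing a full list of the population, only a list of clusters plus lists within the clusters actually chosen.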
non probability sampling
snowball sampling
convenience sampling
quota sampling
purposive sampling
4 types of surveys
- mail/self-administered
- telephone
- face-to-face
- internet
advantages of mail/self-administered surveys
- cheap
- anonymity
- avoids interviewer bias
- more sensitive data provided
- advance cash incentives
disadvantages of mail/self-administered surveys
- no interviewer to clarify questions
- intended individual may not fill out survey
- reading and vocab problems
- survey cannot be too long
mail surveys: Tailored Design Method
used to increase response rates; 5 contacts:
- pre notice post card
- questionnaire itself
- reminder post card
- replacement questionnaire
- special final contact
advantages of telephone surveys
- quick, clean and efficient
- good response rates
- interviewer can clarify questions
- presence of supervisors
- CATI and TACASI- greater sense of privacy
disadvantages of telephone surveys
- can't be too long
- poor contact rates
- reduces anonymity
- potential for interviewer bias
- break offs and hang ups
- random digit dialing (RDD)
advantages of face-to-face surveys
- highest response rate
- longest questionnaires
- non verbal communication
- can ask complex questions
- probing
disadvantages of face-to-face surveys
-very expensive
- interviewer bias
- least likely to obtain sensitive info- no privacy
- training of interviewers
- CASI and ACASI
Advantages of computer surveys
- inexpensive
- impersonal
- fast results
- video audio aids
disadvantages of computer surveys
- non-coverage of population
- probability sampling impossible
- poor response rates
- unstandardized presentation of questions ( lack of technical uniformity)