systematic empiricism
way of knowing
systematic gathering and evaluation of empirical evidence
science! the best way of knowing
ways of knowing
tenacity, authority, reason, empiricism, systematic empiricism
erroneous beliefs
____ arise from lack of understanding, false claims, confirmation biases, conspiracies, and fraudulent data
risks of erroneous beliefs
risk to others (individuals and broader population) when spreading false rhetoric
hypothesis
a tentative proposition about the causes or outcome of an event or, more generally, about how variables are related
theory
a set of formal statements that specifies how and why variables or events are related
tenacity
way of knowing
believing something because it's what we've always believed
closing oneself off to info that is inconsistent with or threatens a firmly held belief
bad because you could be wrong this whole time and never change your mind
authority
way of knowing
relying on others as a source of knowledge/belief (parents, social circle, social media)
more likely to do so when others are viewed as credible, experienced, trustworthy
bad because they could be wrong and you would blindly trust them
reason
way of knowing
using logical, rational arguments to arrive at knowledge
forming judgments based on facts or premises
bad because people can come to different conclusions through logic, and pure logic can lead to conclusions without empathy
empiricism
way of knowing
acquiring knowledge directly through observation and experience
bad because generalizing your own experiences to everyone else can be incorrect
distal cause
a cause of an event that is distant from the result in a series of interrelated events (ex: why am I poor? cause: capitalism and lack of generational wealth)
proximal cause
a cause of an event that is immediate to the result in a series of interrelated events (ex: why am I poor? cause: I don't have a job)
where ideas come from
personal ideas, daily events/real world problems
prior research/theory (case studies, research, re-examining issues)
measurement development
special populations (clinical: psychopathologic populations)
exceptions
serendipity/chance
physiological prod
physiological prod
disrupting ordinary states of consciousness using (ab)normal substances (ex: coffee, LSD)
good research questions
____ are interesting, testable, falsifiable, abstract (should not include a direction)
good hypotheses
____ are operationalized (how to define what I want to measure?), have a direction, don't repeat the research question, based on sound reasoning, specific
qualitative research
nonstatistical analysis of data
multiple/fluid realities (different people have different perspectives)
acknowledges subjectivity
data collection methods: in-depth interviews, focus groups, direct observation, record review/archival
qualitative analysis
aka grounded theory
1. coding
2. iteration
3. validity
coding
step 1 of grounded theory where you develop the codes as you collect data (constant comparison)
open coding
1st step of coding
break data down into meaningful units and create as many categories as you need to capture content of responses
axial coding
2nd step of coding
organize open codes and relate categories to each other
selective coding
3rd step of coding
identify central theme and develop theory based on axial codes
iteration
step 2 of grounded theory where you explore theories generated during coding in future data collection, cycle between interpretation and observation
validity (qualitative)
step 3 of grounded theory where you read theory to participants and see how well theory fits raw data
types: descriptive, interpretive, theoretical
descriptive validity
type of validity for qualitative research
are the descriptions of the data accurate?
interpretive validity
type of validity for qualitative research
is interpretation of the data accurate?
theoretical validity
type of validity for qualitative research
does the data fit the broader theory?
advantages of qualitative
can get different kinds of data (phenomenological/fact based, holistic, detailed)
can develop more new measures
may be better for examining different cultures
limitations of qualitative
subjectivity: not always replicable
no control group (though you don't really need one)
usually small sample size so lacks generalizability
costly in time and training
combining research approaches
start with qualitative, move to quantitative
quantitative research
relying primarily on numerical data/analysis to describe and understand behavior
approaches: correlational, experimental, descriptive
how change over time is measured: cross sectional, longitudinal, cohort sequential
correlational
quantitative approach
measure 2 or more variables to see if they covary
can try to establish temporal precedence OR control for third variables but can't do both
nothing is manipulated
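a quick Python sketch of checking whether two measured variables covary; the variables and numbers are made-up examples, not from class:
```python
import numpy as np

# hypothetical measurements of two variables from the same six people
hours_slept = np.array([6, 7, 5, 8, 7, 6])
mood_rating = np.array([5, 7, 4, 8, 6, 5])

# Pearson correlation: do the two variables covary?
r = np.corrcoef(hours_slept, mood_rating)[0, 1]
print(round(r, 2))
```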
experimental
quantitative approach
manipulate IVs to determine effect on DVs
random assignment
descriptive
quantitative approach
investigate/describe only one variable
no relationships determined
cross sectional design
how change over time is measured
different participants at the same time (ex: a group of people ages 5-10)
limitations: cohort effects
longitudinal design
how change over time is measured
same participants at different times (ex: a group of 5-year-olds... later tested again when they are 10)
limitations: costly, attrition (how to keep track of everyone?)
cohort effects
limitation to cross sectional design
can't look at only one variable without other things affecting it (ex: if age is the variable, there will also be generational effects)
cohort sequential design
how change over time is measured
mix of cross sectional and longitudinal!
2 groups of participants at one time, tested again later at another time (ex: groups of 5- and 10-year-olds... later tested again at 10 and 15)
confounding variable
when an extraneous variable varies systematically with the IV and you don't know whether it's the IV or the _____ causing the effect (ex: does sunshine make a plant grow? or could it be water, space to grow, air quality, etc.)
not the same as 3rd variable or mediator
third variable
when an extraneous variable could be the real cause behind why X and Y are related
mediator
an extraneous variable that provides a causal link in the sequence between the IV and DV
moderator
an extraneous variable that affects the strength or direction of the relationship between the IV and DV
operationalizing
turning an abstract concept into a measurable variable
conceptual variable
abstract idea one is interested in testing/examining
can't be directly observed
measured variable
concrete translation of the abstract concept into something that can be defined and measured
how to operationalize
1. come up with conceptual definition
2. think about how to measure the definition through cognitive/affective/behavioral/physiological measures
cognitive measures
operationalizing a concept
things you think (ex: op. interpersonal attraction: list out all the things you like about them)
affective measures
operationalizing a concept
things you feel (ex: op interpersonal attraction: rate 1-10 how much you like them)
behavioral measures
operationalizing a concept
how you behave (ex: op. interpersonal attraction: distance you sit from them, frequency of looking at them)
physiological measures
operationalizing a concept
how your body reacts (ex: op. interpersonal attraction: measuring heart rate)
composite measures
using a combination of multiple items/questions/measures to assess the same construct
ex: using a bunch of self-report scales (how much do you like them? how much time do you spend with them? how often do you fight? (reverse coded)) -> average the scores into a total
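a quick Python sketch of building a composite score, assuming hypothetical 1-7 items like the ones above:
```python
# hypothetical 1-7 self-report items for one participant
liking = 6        # "how much do you like them?"
time_spent = 5    # "how much time do you spend with them?"
fighting = 2      # "how often do you fight?" (reverse coded)

reverse_coded_fighting = 8 - fighting   # flip a 1-7 scale: 8 - score
composite = (liking + time_spent + reverse_coded_fighting) / 3
print(composite)                        # average of the three items
```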
reliability
consistency of a measurement (across time/among items/btwn observers)
ex: reliability of a test score -> true score (actual ability, can't ever know it) vs. actual score (the grade number) -> reliability is the proportion of the actual score that reflects the true score and not error
just because a measure is reliable doesn't mean it is valid! but if a measure is unreliable it definitely is not valid
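a quick Python sketch of the true score idea, assuming classical test theory (actual = true + error) with made-up numbers:
```python
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(75, 10, size=10_000)  # hypothetical true abilities (never observable)
error = rng.normal(0, 5, size=10_000)         # random measurement error
actual_score = true_score + error             # the score we actually see

# reliability = proportion of actual-score variance that reflects true scores
reliability = true_score.var() / actual_score.var()
print(round(reliability, 2))                  # ~0.80 with these made-up numbers
```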
error
random or systematic influences that make the "actual score" differ from the "true score"
random error
variability due to random things, tends to even out across groups of people
systematic error
consistent error that pushes everyone in the same direction
sources of error
time sampling, content/item sampling, errors across diff scorers
time sampling error
source of error
error that depends on the specific time the test occurs, such as the temperature in the room, how hungry you are, or whether the experimenter is in a weird mood
item sampling error
source of error
error from the measurement instrument itself, like poorly worded questions/instructions or different interpretations
ways of assessing reliability
test-retest, alternate/equivalent forms, inter-rater, internal reliability/consistency
test-retest reliability
assessing reliability by seeing consistency between one measurement time point and another
estimate of time sampling error
procedure: people take the same measurement at different times, then correlate both scores
limitations: can't measure systematic error; not as useful when the construct changes over time (ex: state self-esteem); time consuming; retesting effects where participants might try to be consistent OR try to be different, still remember their answers from the first time, get bored and not be as into answering, etc.
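a quick Python sketch, with made-up scores, of how the correlation is computed:
```python
import numpy as np

# hypothetical scores from the same six people at two time points
time1 = np.array([10, 14, 12, 18, 9, 15])
time2 = np.array([11, 13, 12, 17, 10, 16])

# test-retest reliability = correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))   # high r -> consistent over time
```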
alternate/equivalent forms reliability
assessing reliability by seeing extent to which responses to 2 versions of the same measure are consistent
estimate of content sampling error
procedure: give one version of a measure at one time, then give a different but similar version at a different time (both assessing the same thing) and see if scores and answers correlate (or give both versions at the same time and see if scores correlate)
limitations: need to generate a large number of items (50+), assumes that both forms are equivalent
inter-rater reliability
assessing reliability by seeing extent to which responses are judged consistently by 2+ raters
estimate of errors across diff raters
usually for behavior ratings, open ended responses
procedure: multiple raters assess the same thing; see how consistently the raters judge the same info
internal reliability/consistency
assessing reliability by seeing extent to which items within a measure correlate with each other
estimate of content sampling error
different types of internal reliability: item-total, split half, cronbach's alpha
procedure: people take an assessment with multiple questions; see if items correlate with each other (item-total, split half)
item-total reliability
internal reliability type
correlation of each individual item with total mean score
split half reliability
internal reliability type
correlation of total score of half the assessment with the other half
cronbach's alpha
internal reliability type
average of all possible split-half reliabilities (ex: six-item scale -> average of items 1,2,3 vs. 4,5,6 OR 1,2,4 vs. 3,5,6 OR 1,2,5 vs. 3,4,6, etc.)
the most commonly used reliability estimate
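a quick Python sketch computing item-total correlations and Cronbach's alpha from made-up responses; alpha = k/(k-1) * (1 - sum of item variances / total-score variance):
```python
import numpy as np

# hypothetical responses: 5 people x 4 items on the same 1-5 scale
scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
k = scores.shape[1]          # number of items
total = scores.sum(axis=1)   # each person's total score

# item-total: correlate each item with the total score
item_total_r = [np.corrcoef(scores[:, i], total)[0, 1] for i in range(k)]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print([round(r, 2) for r in item_total_r], round(alpha, 2))
```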
how to increase reliability
have more measurements (more items in the scale, larger sample size, more data in general)
use a better measurement tool
include better instructions
validity
the degree to which a test is appropriate for the purpose it is being used for
requires an accumulation of evidence; it's continuous, not just one score
construct validity is the main one
construct validity
extent to which instrument (measured variable) assessed what we want to measure (conceptual variable)
face, content, criterion, predictive, convergent, discriminant
face validity
type of construct validity
degree to which the measure appears on the surface to assess the construct (ex: for depression-> measure that is the question "i feel depressed" has high face validity)
can be desirable or undesirable depending on the study (undesirable if you don't want participants to know what is being tested)
can throw in filler items to make it less clear what is being measured and have less face validity
content validity
type of construct validity
degree to which the entire range of indicators is included in the instrument (both content and type of response)
(ex: depression measure with qs abt cognitive, affect, behavior, and physiological)
can do this by having a very good conceptual definition and identifying the main parts of the definition -> including items that assess those parts
and having experts evaluate the items!
criterion validity
type of construct validity
correspondence between the new measure and some objective (not self-report!) criterion variable
ex: demographics, behavior based, knowledge based, objective/easily measurable
can either be in the present (concurrent) or the future (predictive)
predictive validity
type of criterion validity where an objective comparison occurs at least 2 weeks in the future
ex: how well do SAT scores predict one's GPA during the first year of college?
convergent validity
type of construct validity
degree to which an instrument is either positively or negatively associated with measures of related constructs
what matters is strength of the relationship, not direction!
opposite of discriminant validity
discriminant validity
type of construct validity
degree to which an instrument is weakly related/unrelated to measures of dissimilar constructs
ex: openness to experience is not related to internalized misogyny :)
opposite of convergent validity
how to assess a question
1. understand the question
2. process/remember info needed for the question
3. translate info into form required to answer question
4. provide answer!
designing survey questions
keep sentence structure simple and short
avoid double-barreled questions (2+ ideas assessed in 1 statement)
avoid double negatives
use "disagree" instead of "not agree"
use a simple rating scale that yields the most meaningful response possible
don't start each item with the same words/phrase
have a lot of items for more reliability
keep instructions clear
types of rating scales
general/direct, semantic differentials, indirect
general/direct rating scale
giving people a question that directly assesses the measure, with a direct way to answer like a number, slider, or Likert scale
semantic differential scale
a direct scale but with opposite words at each end of the question instead of a number or slider
indirect scale
a Likert scale (exclusively agree/disagree) where the questions indirectly assess the measure
context effects
issue with ordering items in a survey where the context/order the questions appear in may affect responses
(ex: putting "how happy are you with your love life" right before "how happy are you with life in general" will produce a smaller correlation than putting the two a bit apart from each other)
Counterbalancing
a method used in a repeated measures design where half the participants do the conditions in one order while the other half does them in a different order
can also mean randomizing the order of questions in a typical correlational study
doesn't fully eliminate order effects but helps control for them by distributing them across diff participants
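a quick Python sketch of assigning made-up participants to the two orders:
```python
import random

random.seed(0)
participants = list(range(1, 21))   # 20 hypothetical participant IDs
orders = [("A", "B"), ("B", "A")]   # the two condition orders

random.shuffle(participants)
# half the (shuffled) participants get A then B, the other half B then A
assignment = {p: orders[i % 2] for i, p in enumerate(participants)}
print(assignment)
```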
content analysis
an analysis of the different types of content found within or represented by a set of data
used in qualitative research
can yield numeric or non numeric information
good theories
_________________ are parsimonious (explain things with as few variables as possible), evidence-based, logically consistent in all their parts, and falsifiable
discrete variable
quantitative variables where no intermediate values are possible between two adjacent ones (ex: # of children can be 0, 1, 2 but not 0.87)
continuous variable
quantitative variables where intermediate values are possible between two adjacent ones (ex: blood alcohol level can be 4.1 and 4.2 but also 4.1232354723568)
probability sampling
a method for selecting a survey sample where each member of the population has a chance of being selected into the sample and the probability of being selected can be specified
typically preferred in surveys
simple random, stratified random, cluster (single & multistage)
nonprobability sampling
a method for selecting a survey sample where each member of the population either does not have a chance of being selected into the sample, the probability of being selected cannot be determined, or both
can still be scientifically useful
convenience, quota, self-selected, purposive (expert & snowball)
simple random sampling
a type of probability sampling where each member of the sampling frame has an equal probability of being chosen at random to participate in the survey (ex: sample size of 1000 from a sampling frame of 33000 -> each student's probability of selection is 1000/33000 = 1/33)
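a quick Python sketch using made-up student IDs:
```python
import random

frame = list(range(33000))              # hypothetical sampling frame of student IDs
sample = random.sample(frame, k=1000)   # each student has a 1000/33000 = 1/33 chance
```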
stratified random sampling
a type of probability sampling where a sampling frame is divided into groups (called strata; singular = stratum), and then within each group random sampling is used to select the members of the sample
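a quick Python sketch, assuming made-up strata by class year:
```python
import random

# hypothetical frame divided into strata by class year
strata = {
    "freshman":  list(range(0, 9000)),
    "sophomore": list(range(9000, 17000)),
    "junior":    list(range(17000, 25000)),
    "senior":    list(range(25000, 33000)),
}

# random sampling within each stratum (here, 250 per group)
sample = [pid for group in strata.values() for pid in random.sample(group, k=250)]
```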
cluster sampling
a type of probability sampling where units (e.g., geographic regions, schools, departments) that contain members of the population are identified. These units—called "clusters"—are then randomly sampled
single or multistage
single cluster sampling
a type of cluster sampling where all the participants in the randomly selected clusters are chosen to participate in the survey= only one stage
multistage sampling
a type of cluster sampling where there are two or more stages to select progressively smaller samples.
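a quick Python sketch of both single-stage and multistage cluster sampling, with made-up schools as the clusters:
```python
import random

# hypothetical clusters: 20 schools, each holding 500 student IDs
schools = {f"school_{i}": list(range(i * 500, (i + 1) * 500)) for i in range(20)}

chosen = random.sample(list(schools), k=4)   # randomly pick 4 clusters

# single-stage: everyone in the chosen clusters participates
single_stage = [s for name in chosen for s in schools[name]]

# multistage: a further random sample of 50 students within each chosen cluster
multistage = [s for name in chosen for s in random.sample(schools[name], k=50)]
```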
convenience/haphazard sampling
a type of nonprobability sampling where members of a population are selected nonrandomly for inclusion in a sample, on the basis of convenience
potential biases might also occur when sampling this way
quota sampling
a type of nonprobability sampling where a sample is nonrandomly selected to match the proportion of one or more key characteristics of the population
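a quick Python sketch, assuming a made-up 60/40 gender split to match:
```python
import random

random.seed(0)
# hypothetical convenience stream of (id, gender) pairs, in order of arrival
stream = [(i, random.choice(["female", "male"])) for i in range(500)]

quotas = {"female": 60, "male": 40}   # match a hypothetical 60/40 population split
counts = {"female": 0, "male": 0}
sample = []

# nonrandomly fill each quota with whoever shows up first
for pid, gender in stream:
    if counts[gender] < quotas[gender]:
        sample.append(pid)
        counts[gender] += 1
    if counts == quotas:
        break
```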
self-selected sampling
a type of nonprobability sampling where participants place themselves in a sample, rather than being selected for inclusion by a researcher
purposive sampling
a type of nonprobability sampling where researchers select a sample according to a specific goal or purpose of the study, rather than at random
expert or snowball
expert sampling
a type of purposive sampling where researchers identify experts on a topic and ask them to participate
snowball sampling
a type of purposive sampling where people contacted to participate in a survey are asked to recruit or to provide contact information (names, locations) for other people who meet the criteria for survey inclusion.