research methods test 1


1
New cards

systematic empiricism

way of knowing

systematic gathering and evaluation of empirical evidence

science! the best way of knowing

2
New cards

ways of knowing

tenacity, authority, reason, empiricism, systematic empiricism

3
New cards

erroneous beliefs

____ arise from a lack of understanding, false claims, confirmation bias, conspiracy theories, and fraudulent data

4
New cards

risks of erroneous beliefs

risk to others (individuals and broader population) when spreading false rhetoric

5
New cards

hypothesis

a tentative proposition about the causes or outcome of an event or, more generally, about how variables are related

6
New cards

theory

a set of formal statements that specifies how and why variables or events are related

7
New cards

tenacity

way of knowing

believing something because it's what we've always believed

closing oneself off to info that is inconsistent with or threatens a firmly held belief

bad because you could have been wrong this whole time and never change your mind

8
New cards

authority

way of knowing

relying on others as a source of knowledge/belief (parents, social circle, social media)

more likely to do so when others are viewed as credible, experienced, trustworthy

bad because the source could be wrong and you would blindly trust them

9
New cards

reason

way of knowing

using logical, rational arguments to arrive at knowledge

forming judgements based on facts or premises

bad because people can come to different conclusions through logic, and pure logic can lead to conclusions without empathy

10
New cards

empiricism

way of knowing

acquiring knowledge directly through observation and experience

bad because generalizing that your experiences will apply the same way to others can be incorrect

11
New cards

distal cause

a cause of an event that is distant from the result in a series of interrelated events (ex: why am I poor? cause: capitalism and lack of generational wealth)

12
New cards

proximal cause

a cause of an event that is immediate to the result in a series of interrelated events (ex: why am I poor? cause: I don't have a job)

13
New cards

where ideas come from

personal ideas, daily events/real world problems

prior research/theory (case studies, research, re-examining issues)

measurement development

special populations (clinical: psychopathologic populations)

exceptions

serendipity/chance

physiological prod

14
New cards

physiological prod

disrupting ordinary states of consciousness using (ab)normal substances (ex: coffee, LSD)

15
New cards

good research questions

____ are interesting, testable, falsifiable, abstract (should not include a direction)

16
New cards

good hypotheses

____ are operationalized (how to define what I want to measure?), have a direction, don't repeat the research question, based on sound reasoning, specific

17
New cards

qualitative research

nonstatistical analysis of data

multiple/fluid realities (different people have different perspectives)

acknowledges subjectivity

data collection methods= in depth interviews, focus groups, direct observation, record review/archival

18
New cards

qualitative analysis

aka grounded theory

1. coding

2. iteration

3. validity

19
New cards

coding

step 1 of grounded theory where you develop the codes as you collect data (constant comparison)

20
New cards

open coding

1st step of coding

break data down into meaningful units and create as many categories as you need to capture content of responses

21
New cards

axial coding

2nd step of coding

organize open codes and relate categories to each other

22
New cards

selective coding

3rd step of coding

identify central theme and develop theory based on axial codes

23
New cards

iteration

step 2 of grounded theory where you explore theories generated during coding in future data collection, cycle between interpretation and observation

24
New cards

validity (qualitative)

step 3 of grounded theory where you read theory to participants and see how well theory fits raw data

types: descriptive, interpretive, theoretical

25
New cards

descriptive validity

type of validity for qualitative research

are the descriptions of the data accurate?

26
New cards

interpretive validity

type of validity for qualitative research

is the interpretation of the data accurate?

27
New cards

theoretical validity

type of validity for qualitative research

does the data match a higher theory?

28
New cards

advantages of qualitative

can get different kinds of data (phenomenological/fact-based, holistic, detailed)

can develop new measures

may be better for examining different cultures

29
New cards

limitations of qualitative

subjectivity = not always replicable

no control group (one isn't really needed)

usually small sample size, so it lacks generalizability

costly in time and training

30
New cards

combining research approaches

start with qualitative, move to quantitative

31
New cards

Quantitative research

relying primarily on numerical data/analysis to describe and understand behavior

approaches: correlational, experimental, descriptive

how change over time is measured: cross sectional, longitudinal, cohort sequential

32
New cards

correlational

quantitative approach

measure 2 or more variables to see if they covary

can try to establish temporal precedence OR control for third variables but can't do both

nothing is manipulated

33
New cards

experimental

quantitative approach

manipulate IVs to determine effect on DVs

random assignment

34
New cards

descriptive

quantitative approach

investigate/describe only one variable

no relationships determined

35
New cards

cross sectional design

how change over time is measured

different participants at the same time (ex: a group of people ages 5-10)

limitations: cohort effects

36
New cards

longitudinal design

how change over time is measured

same participants tested at different times (ex: a group of 5-year-olds... later tested again when they are 10)

limitations: costly, attrition (how to keep track of everyone?)

37
New cards

cohort effects

limitation to cross sectional design

can't only look at one variable without other things affecting it (ex: if age is the variable there will also be generational effects)

38
New cards

cohort sequential design

how change over time is measured

a mix of cross sectional and longitudinal!

2 groups of participants at one time, tested again later at another time (ex: groups of 5- and 10-year-olds... later tested again at 10 and 15)

39
New cards

confounding variable

when an extraneous variable varies systematically with the IV and you don't know whether it's the IV or the _____ causing the effect (ex: does sunshine make a plant grow? or could it be water, space to grow, air quality, etc.)

not the same as 3rd variable or mediator

40
New cards

third variable

when an extraneous variable could be the real cause behind why X and Y are related

41
New cards

mediator

an extraneous variable that provides a causal link in the sequence between the IV and DV

42
New cards

moderator

an extraneous variable that affects the strength or direction of the relationship between the IV and DV

43
New cards

operationalizing

turning an abstract concept into a measurable variable

44
New cards

conceptual variable

abstract idea one is interested in testing/examining

can't be directly observed

45
New cards

measured variable

concrete translation of an abstract concept into something that can be defined and measured

46
New cards

how to operationalize

1. come up with conceptual definition

2. think about how to measure the definition through cognitive/affective/behavioral/physiological measures

47
New cards

cognitive measures

operationalizing a concept

things you think (ex: op. interpersonal attraction: list out all the things you like about them)

48
New cards

affective measures

operationalizing a concept

things you feel (ex: op interpersonal attraction: rate 1-10 how much you like them)

49
New cards

behavioral measures

operationalizing a concept

how you behave (ex: op. interpersonal attraction: distance you sit from them, frequency of looking at them)

50
New cards

physiological measures

operationalizing a concept

how your body reacts (ex: op. interpersonal attraction: measuring heart rate)

51
New cards

composite measures

using a combination of multiple items/questions/measures to assess the same construct

ex: using a set of self-report scales (how much do you like them? how much time do you spend with them? how often do you fight (reverse coded)?) -> average the scores into a total
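
A minimal Python sketch (hypothetical item scores, not from the course) of how a reverse-coded item is flipped before averaging into a composite score:

# Hypothetical responses on three 1-7 self-report items measuring liking
items = {"like_them": 6, "time_spent": 5, "fight_freq": 2}
scale_min, scale_max = 1, 7

# Reverse code "how often do you fight" so that a higher score = more liking
items["fight_freq"] = scale_max + scale_min - items["fight_freq"]  # 2 becomes 6

composite = sum(items.values()) / len(items)  # average of the recoded items
print(composite)  # ~5.67 = overall liking score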

52
New cards

reliability

consistency of a measurement (across time/among items/btwn observers)

ex: reliability of a test score --> true score (actual ability, which can never be known directly) vs. actual score (the grade you receive) --> reliability is the proportion of the actual score that reflects the true score and not error

just because a measure is reliable doesn't mean it is valid! but if a measure is unreliable, it is definitely not valid

53
New cards

error

random or systematic reasons why the "actual score" differs from the "true score"

54
New cards

random error

variability due to random things, tends to even out across groups of people

55
New cards

systematic error

consistent error that pushes everyone in the same direction

56
New cards

sources of error

time sampling, content/item sampling, errors across different scorers

57
New cards

time sampling error

source of error

error that depends on the specific time the test occurs, such as the temperature in the room, how hungry you are, or whether the experimenter is in a strange mood

58
New cards

item sampling error

source of error

error from the measurement instrument itself, like poorly worded questions/instructions or different interpretations

59
New cards

ways of assessing reliability

test-retest, alternate/equivalent forms, inter-rater, internal reliability/consistency

60
New cards

test-retest reliability

assessing reliability by seeing consistency between one measurement time pt and another

estimate of time sampling error

procedure: people take the same measurement at different times; correlate both scores

limitations: can't measure systematic error; not as useful when the construct changes over time (ex: state self-esteem); time consuming; retesting effects, where participants might try to be consistent OR try to be different, still remember their answers from the first time, get bored and not be as engaged in answering, etc.
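
A minimal Python sketch (made-up scores) of the test-retest procedure: the same five people are measured twice and the two sets of scores are correlated (statistics.correlation requires Python 3.10+):

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical scores for the same 5 people at time 1 and time 2
time1 = [10, 14, 12, 18, 16]
time2 = [11, 13, 12, 19, 15]

# Test-retest reliability estimate = correlation between the two administrations
print(correlation(time1, time2))  # ~0.95; values near 1 mean scores are stable over time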

61
New cards

alternate/equivalent forms reliability

assessing reliability by seeing extent to which responses to 2 versions of the same measure are consistent

estimate of content sampling error

procedure: give one set of a measure at one time, then give a different set of a similar measure at a different time (both measures assessing the same thing) and see if the scores and answers correlate (or give all the measures at the same time and see if the scores correlate)

limitations: need to generate a large number of items (50+...), assumes that both forms are equivalent

62
New cards

inter-rater reliability

assessing reliability by seeing extent to which responses are judged consistently by 2+ raters

estimate of errors across diff raters

usually for behavior ratings and open-ended responses

procedure: multiple raters assess the same thing; see how consistently the raters judge the same information
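
A minimal Python sketch (hypothetical ratings) of one simple inter-rater check: two raters code the same six responses and their ratings are correlated; for categorical codes, percent agreement or Cohen's kappa would be used instead:

from statistics import correlation  # Python 3.10+

# Hypothetical: two raters each scored the same 6 open-ended responses on a 1-5 scale
rater_a = [3, 5, 2, 4, 4, 1]
rater_b = [3, 4, 2, 5, 4, 2]

print(correlation(rater_a, rater_b))  # high r = the raters judged the responses consistently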

63
New cards

internal reliability/consistency

assessing reliability by seeing extent to which items within a measure correlate with each other

estimate of content sampling error

different types of internal reliability: item-total, split half, cronbach's alpha

procedure: people take an assessment with multiple questions; see if the items correlate with each other (item-total, split half)

64
New cards

item-total reliability

internal reliability type

correlation of each individual item with total mean score
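
A minimal Python sketch (made-up data) correlating each item with the total score; correlating with the total sum gives the same r as correlating with the mean score:

from statistics import correlation  # Python 3.10+

# Hypothetical 4-item scale, 5 respondents (rows = people, columns = items)
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
totals = [sum(row) for row in responses]
for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    print(f"item {i + 1} vs. total: r = {correlation(item, totals):.2f}")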

65
New cards

split half reliability

internal reliability type

correlation of total score of half the assessment with the other half

66
New cards

cronbach's alpha

internal reliability type

average of all possible split half reliabilities (ex: six-item scale -> average of items 123 with 456 OR 124 with 356 OR 125 with 346, etc.)

most commonly used reliability estimate
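
A minimal Python sketch (hypothetical data) using the standard Cronbach's alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores):

from statistics import pvariance

# Hypothetical 4-item scale, 5 respondents (rows = people, columns = items)
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
k = len(responses[0])
item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
total_var = pvariance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # higher alpha = items hang together better; ~.70+ is a common rule of thumb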

67
New cards

how to increase reliability

have more measurements (more items in scale, larger sample size, more data in general)

use a better measurement tool

include better instructions

68
New cards

validity

the degree to which a test measures what it is supposed to measure for the purpose it is being used

requires an accumulation of evidence; continuous, not just one score

construct validity = the main type

69
New cards

construct validity

extent to which the instrument (measured variable) assesses what we want to measure (conceptual variable)

face, content, criterion, predictive, convergent, discriminant

70
New cards

face validity

type of construct validity

degree to which the measure appears on the surface to assess the construct (ex: for depression-> measure that is the question "i feel depressed" has high face validity)

can be desirable or undesirable depending on the study (undesirable: if you don't want participants to know what is being tested)

can throw in filler items to make it less clear what is being measured, lowering face validity

71
New cards

content validity

type of construct validity

degree to which the entire range of indicators is included in the instrument (both content and type of response)

(ex: depression measure with qs abt cognitive, affect, behavior, and physiological)

can do this by having a very good conceptual definition and identifying the main parts of the definition -> including items that assess those parts

and having experts evaluate the items!

72
New cards

criterion validity

type of construct validity

correspondence between the new measure and some objective (not self report!) criterion variable

ex: demographics, behavior based, knowledge based, objective/easily measurable

can either be in the present (concurrent) or the future (predictive)

73
New cards

predictive validity

type of criterion validity where an objective comparison occurs at least 2 weeks in the future

ex: what is the predictive validity of SAT scores for predicting one's GPA during the first year of college?

74
New cards

convergent validity

type of construct validity

degree to which an instrument is either positively or negatively associated with measures of related constructs

what matters is strength of the relationship, not direction!

opposite of discriminant validity

75
New cards

discriminant validity

type of construct validity

degree to which an instrument is weakly related/unrelated to measures of dissimilar constructs

ex: openness to experience is not related to internalized misogyny :)

opposite of convergent validity

76
New cards

how to assess a question

1. understand the question

2. process/remember info needed for the question

3. translate info into form required to answer question

4. provide answer!

77
New cards

designing survey questions

keep sentence structure simple and short

avoid double barreled questions (2+ ideas assessed in 1 statement)

avoid double negatives

use "disagree" instead of "not agree"

use a simple rating scale that yields the most meaningful response possible

don't start each item with the same words/phrase

have a lot of items for more reliability

keep instructions clear

78
New cards

types of rating scales

general/direct, semantic differentials, indirect

79
New cards

general/direct rating scale

giving people a question that directly assesses the measure, with a direct way to answer like a number, slider, or Likert scale

80
New cards

semantic differential scale

a direct scale but with opposite words at each end of the question instead of a number or slider

81
New cards

indirect scale

a Likert scale (exclusively agree/disagree) where questions indirectly assess the measure

82
New cards

context effects

issue with ordering items in a survey where the context/order the questions appear in may affect responses

(ex: putting "how happy are you with your love life" right before "how happy are you with life in general" will have a smaller correlation than putting the two a bit apart from each other)

83
New cards

Counterbalancing

a method used in a repeated measures design where half the participants do the conditions in one order while the other half does them in a different order

also refers to randomizing the order of questions in a standard correlational/survey study

doesn't fully eliminate order effects but helps control for them by distributing them across different participants
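
A minimal Python sketch (hypothetical participants and conditions) of both ideas: half the sample gets order A-then-B and half gets B-then-A, and question order is randomized per participant:

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)
half = len(participants) // 2

# Half do the conditions in one order, the other half in the reverse order
condition_order = {p: ["A", "B"] for p in participants[:half]}
condition_order.update({p: ["B", "A"] for p in participants[half:]})

# Separately: randomize question order per participant to spread out order/context effects
questions = ["q1", "q2", "q3", "q4"]
question_order = {p: random.sample(questions, k=len(questions)) for p in participants}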

84
New cards

content analysis

an analysis of the different types of content found within or represented by a set of data

used in qualitative research

can yield numeric or non numeric information

85
New cards

good theories

_________________ are parsimonious (explain phenomena with few variables), evidence-based, logically consistent in all their parts, and falsifiable

86
New cards

discrete variable

quantitative variables where no intermediate values are possible between two adjacent values (ex: # of children can be 0, 1, 2 but not .87)

87
New cards

continuous variable

quantitative variables where intermediate values are possible between two adjacent values (ex: blood alcohol level can be 4.1 and 4.2 but also 4.1232354723568)

88
New cards

probability sampling

a method for selecting a survey sample where each member of the population has a chance of being selected into the sample and the probability of being selected can be specified

typically preferred in surveys

simple random, stratified random, cluster (single & multistage)

89
New cards

nonprobability sampling

a method for selecting a survey sample where each member of the population either does not have a chance of being selected into the sample, the probability of being selected cannot be determined, or both

scientifically useful

convenience, quota, self-selected, purposive (expert & snowball)

90
New cards

simple random sampling

a type of probability sampling where each member of the sampling frame has an equal probability of being chosen at random to participate in the survey (ex: sample size of 1,000 from a sampling frame of 33,000 = each student's probability of selection is 1/33)
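
A minimal Python sketch of the 1,000-from-33,000 example, assuming the sampling frame is just a list of student IDs:

import random

sampling_frame = list(range(1, 33001))          # 33,000 hypothetical student IDs
sample = random.sample(sampling_frame, k=1000)  # every student has the same 1/33 chance of selection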

91
New cards

stratified random sampling

a type of probability sampling where a sampling frame is divided into groups (called strata; singular = stratum), and then within each group random sampling is used to select the members of the sample
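
A minimal Python sketch (hypothetical strata and sizes) where the frame is divided into strata and a random sample is drawn within each one, here with proportional allocation:

import random

# Hypothetical sampling frame divided into strata by class year
frame = {
    "first-year": [f"FY{i}" for i in range(8000)],
    "sophomore": [f"SO{i}" for i in range(8000)],
    "junior": [f"JR{i}" for i in range(8500)],
    "senior": [f"SR{i}" for i in range(8500)],
}

sample = []
for stratum, members in frame.items():
    # sample ~3% within each stratum so the sample mirrors the population proportions
    sample.extend(random.sample(members, k=round(0.03 * len(members))))

print(len(sample))  # ~990 across the four strata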

92
New cards

cluster sampling

a type of probability sampling where units (e.g., geographic regions, schools, departments) that contain members of the population are identified. These units—called "clusters"—are then randomly sampled

single or multistage

93
New cards

single cluster sampling

a type of cluster sampling where all the participants in the randomly selected clusters are chosen to participate in the survey (only one stage)

94
New cards

multistage sampling

a type of cluster sampling where there are two or more stages to select progressively smaller samples.

95
New cards

convenience/haphazard sampling

a type of nonprobability sampling where members of a population are selected nonrandomly for inclusion in a sample, on the basis of convenience

potential biases may also occur when sampling this way

96
New cards

quota sampling

a type of nonprobability sampling where a sample is nonrandomly selected to match the proportion of one or more key characteristics of the population

97
New cards

self-selected sampling

a type of nonprobability sampling where participants place themselves in a sample, rather than being selected for inclusion by a researcher

98
New cards

purposive sampling

a type of nonprobability sampling where researchers select a sample according to a specific goal or purpose of the study, rather than at random

expert or snowball

99
New cards

expert sampling

a type of purposive sampling where researchers identify experts on a topic and ask them to participate

100
New cards

snowball sampling

a type of purposive sampling where people contacted to participate in a survey are asked to recruit or to provide contact information (names, locations) for other people who meet the criteria for survey inclusion.