PSY301 Exam 2 Saponjic

Last updated 8:24 PM on 3/23/26

102 Terms

1
New cards

reliable really just means..

consistency

2
New cards

3 main reliability measures

test-retest, interrater, and internal

3
New cards

test-retest reliability (4)

reliability measure where the same test is administered on two occasions to determine consistency, "scores at T1 should = T2", can be displayed with a scatterplot, good = r of .70 or greater

4
New cards

interrater reliability (4)

reliability measure where consistent scores are obtained no matter who measures/observes, can be displayed with a scatterplot, good = r of .70+ AND percent agreement of 85%+

5
New cards

how can you increase interrater reliability? (3)

through practice, training, and clear instructions

6
New cards

percent agreement vs correlation in interrater reliability

percent agreement- the exact percentage of identical ratings between raters, for categorical/qualitative measurements (nominal)

correlation- how correlated the raters' patterns are and how interchangeable different raters are overall, for scaled/quantitative measurements (ordinal, interval, continuous)

7
New cards

In interrater reliability, percent agreement is to ___ and ___, as correlation is to ___ and ___

categorical and qualitative, scaled and quantitative

8
New cards

internal reliability/consistency (3)

reliability measure that determines how consistently different items on a test measure the same construct, related to construct validity, good = Cronbach's alpha of .70+

9
New cards

internal reliability/consistency example

Are ALL questions on a self-esteem scale actually related to self-esteem?

10
New cards

What other concept is internal reliability/consistency related to?

Content validity

11
New cards

Cronbach's alpha, 4 dif levels

.9 = excellent, .8 = good, .7 = acceptable, .5 and under = unacceptable
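The alpha statistic behind these cutoffs can be computed directly. A minimal sketch using the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores), on an invented 5-respondent, 4-item dataset:

```python
# Cronbach's alpha sketch. Rows = respondents, columns = items on one scale.
# The score matrix is made-up illustration data.
from statistics import variance

scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]

k = len(scores[0])                                  # number of items
item_vars = [variance(col) for col in zip(*scores)] # variance of each item
total_var = variance([sum(row) for row in scores])  # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # .9 excellent / .8 good / .7 acceptable
```

Because every item rises and falls together across respondents here, alpha lands in the "excellent" band.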

12
New cards

validity

extent to which something actually measures what it's supposed to measure, essentially accuracy

13
New cards

2 subjective ways to assess validity

face validity and content validity

14
New cards

face validity

does it look like what you're trying to measure?

15
New cards

content validity (2Qs)

does the measure contain all the parts your theory says it should contain? does it cover all aspects of the construct?

16
New cards

How does Content Validity differ from internal reliability/consistency?

Content validity is a matter of item RELEVANCE, while Internal reliability is a matter of item CONSISTENCY

17
New cards

3 empirical measures of validity

Criterion, convergent, and divergent

18
New cards

Criterion validity (3)

do ppls scores correlate w/ other key behaviors/variables we would expect them to correlate w/?, measure should predict behavioral outcome, known groups paradigm

19
New cards

2 types of Criterion-related validity

concurrent and predictive

20
New cards

Concurrent validity (3)

type of criterion-related validity that compares a new test's results with an established "gold-standard" test's results

21
New cards

Predictive Validity

type of Criterion-related validity, measures how well a test predicts a future later-measured outcome (that it should predict well)

22
New cards

Predictive Validity example

SAT scores predicting future college GPA

23
New cards

Criterion validity example

a depression inventory's scores should positively correlate with depression diagnoses

24
New cards

Known Groups Paradigm

method for establishing criterion validity by comparing scores with distinct groups already known to differ on the variable

25
New cards

known-groups paradigm example

new depression inventory is administered to a group of diagnosed ppl and group of non-diagnosed ppl

26
New cards

Convergent Validity

scores on 2 different measures, that measure the same thing, are consistent

27
New cards

Divergent validity (3)

aka discriminant validity, ensures a test intended to measure 1 thing doesn't accidentally measure another unrelated thing, scores shouldn't correlate with an unrelated concept

28
New cards

How can a measure be reliable but not valid?

When it consistently produces the SAME INCORRECT results, bc it's measuring the wrong construct or has systematic errors

29
New cards

Reliable but invalid measure example

A scale that consistently reads 150 over and over but the person actually weighs 200

30
New cards

How can a measure be valid but not reliable?

when it accurately hits the target construct ON AVERAGE but scores are inconsistent across trials

31
New cards

valid but unreliable measure example

A scale that reads 200, 195, 205 when the person is really 200

32
New cards

surveys vs polls

surveys typically involve multiple questions, while polls are usually only 1 question (aiming to gain frequency info about support on an issue)

33
New cards

Open-ended questions pros and cons

pros- rich source of info and better for qualitative research

cons- very broad and hard to code+analyze

34
New cards

Forced-choice questions pros and cons

pros- easy to code+analyze

cons- least info (nominal) and no opportunity for elaboration/detail

35
New cards

Likert scale (4)

the type of forced-choice Q we use on our lab survey, can have between 3-10 points, preferably 5-7, use anchors

36
New cards

6 main things to consider when writing well-worded questions

simplicity, leading questions?, double barreled questions?, negations?, floor+ceiling effects?, question order effects?

37
New cards

double barreled questions

Qs that ask about 2+ things but only allow for 1 answer, AVOID

38
New cards

negations (3)

Qs that use negative words/phrasing, reverses statements meaning which can be confusing, words like "no, never, doesn't"

39
New cards

floor+ ceiling effects

considering the effects that the response range can have on accuracy, a poorly chosen range can skew data, e.g. with only 3 options responses may cluster at the max/min even tho respondents feel more in-between

40
New cards

question order effects

the context of the prior questions can influence responses, Ex: domestic violence Q before spanking kids Q results versus opposite order

41
New cards

How can you control for question order effects?

by making different versions of the survey w/dif orders to see if the results differ and proceed accordingly

42
New cards

response sets

tendency for participants to answer Qs in specific consistent patterns, disrupts data reducing survey validity

43
New cards

response set examples (3)

Acquiescence (agreeing with everything), fence-sitting (middle ground), extremity, etc

44
New cards

The tendency for response-sets, like acquiescence and fence-sitting, to occur in data does what?

weakens construct validity

45
New cards

Faking good

Giving answers to inflate how "good" they are, aka socially desirable responding

46
New cards

3 main threats to response accuracy

response sets, faking good, and faking bad

47
New cards

3 main problems with behavioral observations

observer bias, participant reaction bias, and experimenter bias

48
New cards

observer bias

tendency of observers to see what they expect to see, confirmation bias influences data recording

49
New cards

Participant reaction bias (3)

aka reactivity, ppl act differently when they know they're being observed, 3 main aspects: participant expectancies, participant reactance, and evaluation apprehension

50
New cards

participant expectancies (4)

when participants behave in the way they feel they're expected to, demand characteristics, most common + problematic

51
New cards

demand characteristics

cues in an experiment that tell the participant what behavior is expected, think weapons effect

52
New cards

weapons effect

the tendency for aggression to increase because of the mere presence of weapons (even pics of them)

53
New cards

participant reactance

participant acts the opposite of how they think the experimenter wants them to react

54
New cards

evaluation apprehension

ppl feel apprehensive about being evaluated/don't want to be judged as bad, think social desirability

55
New cards

experimenter bias

researcher expectations skew the results of the study, often by making biased observations+treating subjects differently

56
New cards

observer bias + experimenter bias (3)

the same idea, but experimenter bias is specific to experiments, both involve biased observations but only experimenters also treat subjects differently

57
New cards

4 ways of reducing these biases

double-blind procedure, anonymity, cover story, and unobtrusive measures

58
New cards

double-blind procedure (2)

experimental procedure to reduce bias, where neither the experimenter nor the subject knows which group the subject is in

59
New cards

anonymity

reduces bias by making responses untraceable to a specific person or condition

60
New cards

cover story (deception)

A false description of the purpose of a study given to participants, used to maintain psychological realism and protect from reactivity

61
New cards

unobtrusive measures (3)

ways of observing people so they do not know they are being studied, ensuring natural behavior is observed, like naturalistic observation

62
New cards

Is observation ethical?

it depends on the specific situation

63
New cards

When do we not need consent to observe ppl?

if observed in public place and anonymity is protected

64
New cards

random sampling vs random assignment

Random sampling- how participants are selected as a sample of their population

Random assignment- participants are randomly assigned to dif groups

65
New cards

random sampling affects ___ validity, while random assignment affects ___ validity

external, internal

66
New cards

3 types of probabilistic samples

simple random, stratified random, and cluster

67
New cards

simple random sampling (def+3)

every member of the population has equal probability of being selected, sampling frame = list of everyone in pop, techniques = systematic + random number table

68
New cards

stratified random sampling

Population divided into subgroups (strata) and random samples taken from each stratum

69
New cards

stratified random sampling example

A researcher wants to study GPA of SDSU students, Separates students into majors, Randomly selects from the majors
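The SDSU-majors example above can be sketched in code. A minimal illustration of stratified random sampling, with invented majors and group sizes (the `stratified_sample` helper is hypothetical, not from the course):

```python
# Stratified random sampling sketch: split the population into strata,
# then draw randomly from EACH stratum. All data below is invented.
import random

population = (
    [("psych", i) for i in range(100)]
    + [("bio", i) for i in range(60)]
    + [("math", i) for i in range(40)]
)

def stratified_sample(pop, strata_key, n_per_stratum, seed=0):
    """Randomly draw n members from each stratum (subgroup)."""
    rng = random.Random(seed)
    strata = {}
    for member in pop:
        strata.setdefault(strata_key(member), []).append(member)
    return [m for group in strata.values()
            for m in rng.sample(group, n_per_stratum)]

sample = stratified_sample(population, strata_key=lambda m: m[0],
                           n_per_stratum=10)
print(len(sample))  # 30: ten students from each of the three majors
```

Drawing a fixed number per stratum guarantees every subgroup is represented, which a simple random sample of 30 would not.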

70
New cards

cluster sampling

when random sampling isn't possible, divide pop into clusters and randomly select clusters

71
New cards

cluster sampling example

A researcher wants to survey math performance of students, She divides the entire population into clusters by school district, then selects entire school districts randomly for her research.

72
New cards

systematic sampling technique

type of simple random, every nth individual on pop list is selected

73
New cards

systematic sampling technique example

every 5th person who walks into the grocery store
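The grocery-store example is just list slicing. A minimal sketch of the systematic technique, with an invented 20-person arrival list:

```python
# Systematic sampling sketch: select every nth individual on the list.
# The "shoppers" list is made-up illustration data.
shoppers = [f"person_{i}" for i in range(1, 21)]   # 20 people in arrival order

n = 5
selected = shoppers[n - 1::n]   # every 5th person
print(selected)                 # ['person_5', 'person_10', 'person_15', 'person_20']
```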

74
New cards

multistage sampling technique

type of cluster sampling where you select sub-clusters within clusters

75
New cards

multistage sampling technique example

randomly select five hospitals from county, then randomly select 50 health care workers from each of the 5 hospitals

76
New cards

oversampling

A form of probability sampling, type of stratified random sampling in which the researcher intentionally overrepresents one or more groups.

77
New cards

Oversampling example

10% of sample is prisoners when they're only 2% of population

78
New cards

When working with a probability sample, you ___ ___ how much ___ ___ is in sample data

can estimate, sampling error

79
New cards

sampling error

samples aren't going to match the pop exactly, this difference = the margin of error (E), which is used to create confidence intervals

80
New cards

margin of error

accounts for the percentage difference in accuracy that is due to sampling error, confidence intervals

81
New cards

margin of error equation

ME = 2√((s²/n)((N−n)/N))

(really 1.96, not 2)

82
New cards

Confidence interval

shows 95% probability that the average ___ is x +/- ME (sample average plus/minus margin of error)

[x-ME, x+ME] = confidence interval
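The two cards above combine into a short calculation. A minimal sketch assuming the margin-of-error formula as written (both factors under the radical) and invented sample numbers:

```python
# Margin of error + 95% confidence interval sketch.
# s2 = sample variance, n = sample size, N = population size (all invented).
from math import sqrt

x_bar = 50.0    # sample mean
s2 = 100.0      # sample variance
n = 400         # sample size
N = 10_000      # population size

# ME = 1.96 * sqrt((s^2 / n) * ((N - n) / N))   (the "2" is really 1.96)
me = 1.96 * sqrt((s2 / n) * ((N - n) / N))
ci = (x_bar - me, x_bar + me)
print(f"ME = {me:.3f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The (N − n)/N factor shrinks the margin of error when the sample is a large share of the population; with n much smaller than N it is close to 1.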

83
New cards

When are non-probability samples used?

when it's impossible, impractical, or unnecessary to obtain a probability sample

84
New cards

non-probability sample limitations (3)

researchers have no way of knowing the probability of a specific case being sampled, how representative the sample is, or the margin of error

85
New cards

non-probabilistic sampling example

animal research- animals raised for research so not representative, also most research conducted on college campuses

86
New cards

non-probabilistic sample decreases ___ validity (___)

external, generalizability

87
New cards

non-probability sampling types

convenience, quota, purposive (snowball), and self-selection

88
New cards

convenience sampling (3)

non-prob type, Researcher uses whatever Ss are readily available, ex: class survey

89
New cards

quota sampling (3)

non-prob convenience subtype, Researcher takes steps to ensure that certain kinds of Ss are obtained in particular proportions, ex: 50 men + 50 women

90
New cards

purposive sampling

non-prob type, researcher uses judgement to decide which respondents to include in the sample, aka snowball sampling

91
New cards

purposive sampling example

interviewing only expert wine tasters for product feedback

92
New cards

self-selection sampling

sampling only those who volunteer

93
New cards

Random assignment is only used when?

w/ experimental designs

94
New cards

bivariate correlation

associations that involve exactly two variables

95
New cards

strong correlations allow us to...

predict behavior

96
New cards

What makes a study correlational?

having 2 measured variables, none manipulated

97
New cards

How do we quantitatively describe associations?

correlational strength/coefficient= r

98
New cards

levels of correlational strength quantified

small= 0.0-0.3

medium= 0.31-0.7

large= 0.71-1.0

ALL +/-
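The cutoffs in this card are easy to express as a function. A minimal sketch (the `strength` helper is hypothetical, not from the course) that applies them to the absolute value of r, since they hold for both + and − correlations:

```python
# Correlation-strength sketch using the cutoffs from the card:
# small 0.0-0.3, medium 0.31-0.7, large 0.71-1.0 (all +/-).
def strength(r):
    size = abs(r)            # cutoffs apply to both + and - correlations
    if size <= 0.30:
        return "small"
    elif size <= 0.70:
        return "medium"
    else:
        return "large"

print(strength(0.25), strength(-0.5), strength(0.85))  # small medium large
```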

99
New cards

associations between 2 quantitative variables (describing and graphing)

scatterplot and correlation coef (r)

100
New cards

associations when 1 variable is categorical (describing & graphing)

scatterplot or bar graph, t-test (average group difference)