Advanced Research Methods

85 Terms

1
New cards

What is the main goal of psychology as a science?

To understand the mind by studying observable behavior scientifically.

2
New cards

What makes psychology a science?

It relies on systematic observation, measurement, hypothesis testing, and statistical analysis.

3
New cards

What is the difference between basic and applied research?

Basic research expands theoretical knowledge; applied research solves practical problems.

4
New cards

Define a hypothetical construct.

An unobservable mental concept (e.g., intelligence, motivation, memory).

5
New cards

Define an operational definition.

A specific, measurable way of defining a construct (e.g., motivation measured by persistence time).

6
New cards

What three components must align in measurement?

Hypothetical construct + Operational definition + Measurement tool.

7
New cards

What is a latent variable?

A hidden psychological trait inferred from observable indicators.

8
New cards

What is an experimental method?

Manipulates an independent variable to observe its causal effect on a dependent variable.

9
New cards

What is a survey method?

Collects self-reported data through questionnaires to identify relationships (correlational).

10
New cards

What is an observational method?

Systematically watching and recording behavior in natural or lab settings.

11
New cards

What is an interview method?

Direct questioning to gather qualitative data (structured, semi-structured, or unstructured).

12
New cards

Which two methods cannot occur together?

Experimental and survey (cannot manipulate variables in a survey).

13
New cards

Define independent variable (IV).

The factor the researcher manipulates to examine its effect.

14
New cards

Define dependent variable (DV).

The measured outcome affected by the IV.

15
New cards

What are the scales of measurement (lowest → highest)?

Nominal → Ordinal → Interval → Ratio.

16
New cards

Give an example of each scale.

Nominal: gender

Ordinal: satisfaction rank

Interval: temperature (°C)

Ratio: height, reaction time

17
New cards

Why is the distinction between interval & ratio important?

Ratio scales have a true zero → allow ratio statements (“twice as fast”).

18
New cards

Difference between descriptive and inferential statistics?

Descriptive → summarize sample data.

Inferential → use sample data to infer about population.

19
New cards

What are measures of central tendency?

Mean, median, mode.

20
New cards

When should you use the median instead of the mean?

When data are skewed or have outliers.

21
New cards

What are measures of variability?

Range, variance, standard deviation, quartile deviation.

22
New cards

Formula for variance?

Var = Σ(x – M)² / N

23
New cards

Formula for standard deviation?

SD = √Var
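
The formulas on the two cards above can be sketched in a few lines of Python; the scores are invented for illustration.

    # Population variance and standard deviation, following Var = Σ(x – M)² / N and SD = √Var.
    scores = [4, 7, 6, 5, 8]                      # hypothetical scores
    N = len(scores)
    M = sum(scores) / N                           # mean
    var = sum((x - M) ** 2 for x in scores) / N   # variance
    sd = var ** 0.5                               # standard deviation
    print(f"M = {M:.2f}, Var = {var:.2f}, SD = {sd:.2f}")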

24
New cards

What percentage of data fall within ±1 SD in a normal distribution?

~68% (~95% within ±2 SD; ~99.7% within ±3 SD).
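
These percentages can be checked against the standard normal distribution; a quick sketch, assuming SciPy is available:

    from scipy.stats import norm

    # Proportion of a normal distribution falling within ±1, ±2, ±3 SD of the mean.
    for k in (1, 2, 3):
        p = norm.cdf(k) - norm.cdf(-k)
        print(f"within ±{k} SD: {p:.1%}")   # ~68.3%, ~95.4%, ~99.7%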

25
New cards

Formula for z-score?

z = (x – M) / SD

26
New cards

Why are z-scores useful?

They standardize data → compare scores across different measures.
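
A short sketch of the idea, with invented test scores, showing how z-scores put two different scales on a common footing:

    def z_score(x, mean, sd):
        """z = (x – M) / SD"""
        return (x - mean) / sd

    # Hypothetical comparison: which performance is more exceptional?
    exam_z   = z_score(85, mean=70, sd=10)   # 1.5 SD above the exam mean
    memory_z = z_score(32, mean=25, sd=5)    # 1.4 SD above the memory-task mean
    print(exam_z, memory_z)                  # 1.5 vs 1.4 → the exam score is relatively higher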

27
New cards

What is correlation?

A statistical relationship in which two variables vary together: as one increases, the other tends to increase (positive) or decrease (negative).
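
Correlation is usually quantified with Pearson's r; a minimal sketch with invented data, assuming NumPy:

    import numpy as np

    # Hypothetical data: hours studied and exam score for six students.
    hours  = np.array([1, 2, 3, 4, 5, 6])
    scores = np.array([52, 55, 61, 64, 70, 75])
    r = np.corrcoef(hours, scores)[0, 1]   # Pearson correlation coefficient
    print(f"r = {r:.2f}")                  # close to +1 → strong positive correlation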

28
New cards

What is causation?

When one variable directly influences another.

29
New cards

List Mill’s three criteria for causality.

(1) Temporal precedence, (2) Covariation, (3) Elimination of other causes.

30
New cards

What is a spurious correlation?

An apparent relationship caused by an unseen third variable.

31
New cards

Example of spurious correlation?

Ice cream sales and drowning rates (both due to summer heat).

32
New cards

Define reliability.

Consistency of measurement results across time or items.

33
New cards

Define validity.

Accuracy — whether the tool measures what it claims to measure.

34
New cards

Relationship between reliability & validity?

A test can be reliable without being valid, but it cannot be valid unless it is reliable.

35
New cards

What is the experimenter effect?

Researcher expectations unintentionally influence results.

36
New cards

How can experimenter effects be reduced?

Use automation or double-blind design.

37
New cards

What is the double-blind procedure?

Neither participant nor experimenter knows who’s in which condition.

38
New cards

What is informed consent?

Participants are told the study’s nature and their rights before participating.

39
New cards

When is deception allowed in research?

Only when necessary and justified → must include debriefing.

40
New cards

What is debriefing?

Explaining the true purpose of the study to participants after participation.

41
New cards

Ethical principles (APA):

Beneficence and Nonmaleficence, Fidelity and Responsibility, Integrity, Justice, and Respect for People's Rights and Dignity.

42
New cards

What are language limitations in research?

When participants cannot communicate (e.g., infants, animals) or language causes bias.

43
New cards

What is social desirability bias?

Participants answer in ways they think are socially acceptable.

44
New cards

How to reduce language or response bias?

Use nonverbal measures or anonymous responses.

45
New cards

When do you use a bar chart vs a histogram?

Bar chart → qualitative (categorical) data; Histogram → continuous numeric data.
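
In plotting terms the distinction maps onto two different calls; a sketch with invented data, assuming Matplotlib:

    import matplotlib.pyplot as plt

    # Bar chart: counts per category (qualitative data).
    majors = ["Psych", "Bio", "CS"]
    counts = [12, 8, 5]
    plt.bar(majors, counts)
    plt.show()

    # Histogram: distribution of a continuous variable.
    reaction_times = [0.42, 0.51, 0.47, 0.55, 0.60, 0.48, 0.52, 0.44]
    plt.hist(reaction_times, bins=4)
    plt.show()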

46
New cards

What’s the difference between a frequency table and relative frequency table?

Frequency shows counts; relative frequency shows proportions/percentages.
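
A sketch of both tables built from the same invented categorical responses:

    from collections import Counter

    responses = ["yes", "no", "yes", "yes", "no", "maybe", "yes"]
    freq = Counter(responses)                                     # frequency table: raw counts
    rel_freq = {k: v / len(responses) for k, v in freq.items()}   # relative frequency: proportions
    print(freq)        # Counter({'yes': 4, 'no': 2, 'maybe': 1})
    print(rel_freq)    # {'yes': 0.57, 'no': 0.29, 'maybe': 0.14} (approx.)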

47
New cards

Difference between inductive and deductive reasoning?

Inductive: data → theory (bottom-up).

Deductive: theory → hypothesis → test (top-down).

48
New cards

Why is theory important in research?

It organizes findings, generates predictions, and guides data interpretation.

49
New cards

How do you report descriptive stats in APA?

“M = …, SD = …”; italicize the statistical symbols (M and SD).
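
A small formatting sketch with invented values; the symbols M and SD would be italicized in the manuscript itself:

    mean, sd = 7.42, 1.318
    report = f"(M = {mean:.2f}, SD = {sd:.2f})"
    print(report)   # (M = 7.42, SD = 1.32)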

50
New cards

What is required when using AI tools like ChatGPT in writing?

It must be cited properly (e.g., an APA 7 reference for ChatGPT with OpenAI as author, 2025).

51
New cards

What is a between-subjects design?

Different participants experience different conditions (e.g., Group A = music, Group B = silence).

52
New cards

What is a within-subjects design?

The same participants experience all conditions → controls for individual differences.

53
New cards

Main advantage of within-subjects?

Reduces variability due to individual differences → higher statistical power.

54
New cards

Main disadvantage of within-subjects?

Order or carry-over effects (fatigue, practice, memory).

55
New cards

How do you control order effects?

Use counterbalancing (systematically vary the order of conditions across participants).
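
A sketch of counterbalancing two conditions across a hypothetical participant list, alternating the order systematically:

    from itertools import cycle, permutations

    conditions = ["music", "silence"]
    orders = cycle(permutations(conditions))   # (music, silence), (silence, music), repeat

    participants = ["P1", "P2", "P3", "P4"]
    assignment = {p: next(orders) for p in participants}
    print(assignment)   # half get music first, half get silence first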

56
New cards

Define random assignment.

Each participant has equal chance of being in any condition → minimizes confounding variables.
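
A minimal sketch of random assignment to two conditions (participant IDs invented):

    import random

    participants = [f"P{i}" for i in range(1, 21)]
    random.shuffle(participants)      # every participant equally likely to land in either group
    group_a = participants[:10]       # e.g., music condition
    group_b = participants[10:]       # e.g., silence condition
    print(group_a, group_b)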

57
New cards

What are confounding variables?

Uncontrolled factors that co-vary with IV and can produce false results.

58
New cards

Difference between random sampling and random assignment?

Sampling = who gets selected; Assignment = who goes into which group.

59
New cards

Define population.

Entire group of interest (e.g., all university students).

60
New cards

Define sample.

Subset of population used in the study.

61
New cards

Why sample instead of testing whole population?

Practical limits (time, cost) — samples approximate population behavior.

62
New cards

What is random sampling?

Every member of population has equal chance of inclusion.

63
New cards

What is stratified sampling?

Population divided into strata (e.g., gender, major) → random sample drawn from each.
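
A sketch of the idea with an invented mini-population: group by the stratum variable, then sample randomly within each stratum.

    import random
    from collections import defaultdict

    population = [("Ana", "Psych"), ("Ben", "Psych"), ("Cai", "Bio"),
                  ("Dia", "Bio"), ("Eli", "CS"), ("Fay", "CS")]

    strata = defaultdict(list)
    for name, major in population:
        strata[major].append(name)    # divide the population into strata by major

    sample = {major: random.sample(names, k=1) for major, names in strata.items()}
    print(sample)                     # one randomly chosen student per major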

64
New cards

What is convenience sampling?

Participants chosen based on easy access (e.g., students in class).

65
New cards

What bias can convenience sampling create?

Low generalizability → limits external validity.

66
New cards

Define internal validity.

The degree to which the study truly shows a cause-effect relationship (free from confounds).

67
New cards

Define external validity.

Extent to which results generalize to other people, settings, or times.

68
New cards

Define construct validity.

Whether the test actually measures the theoretical construct intended.

69
New cards

Define ecological validity.

Whether the setting and task represent real-life situations.

70
New cards

How to improve internal validity?

Use random assignment, control extraneous variables, keep procedures consistent.

71
New cards

Types of reliability?

Test-retest, inter-rater, internal consistency (Cronbach’s α).
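
Internal consistency is often computed as Cronbach's α = k/(k−1) · (1 − Σ item variances / variance of total score); a sketch with invented item ratings, assuming NumPy:

    import numpy as np

    # Rows = participants, columns = items on the same scale (hypothetical ratings).
    items = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [5, 5, 4],
        [3, 4, 3],
    ])
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of participants' total scores
    alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")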

72
New cards

What is test-retest reliability?

Consistency of scores across time.

73
New cards

What is inter-rater reliability?

Agreement between different observers or judges.

74
New cards

What is internal consistency?

Extent to which items on a scale measure the same construct.

75
New cards

Can something be reliable but not valid?

Yes — a bathroom scale that is always 5 kg off is consistent (reliable) but inaccurate (not valid).

76
New cards

Can something be valid but unreliable?

No — lack of consistency prevents true accuracy.

77
New cards

Example: “Students take a memory test in silence and with music, one week apart.” → Design?

Within-subjects design (counterbalance order).

78
New cards

Example: “Group A studies with music; Group B in silence.” → Design?

Between-subjects design.

79
New cards

If participants are all psychology majors, what’s the limitation?

Low external validity due to convenience sampling.

80
New cards

If participants guess the purpose of the study, what threat occurs?

Demand characteristics → threat to internal validity.

81
New cards

What is risk–benefit analysis?

Weigh potential harm against expected scientific or social benefit.

82
New cards

Role of IRB (Institutional Review Board)?

To evaluate and approve research proposals for ethical compliance.

83
New cards

When must deception be justified?

When truth would alter behavior and no alternative exists; must debrief later.

84
New cards

Why does correlation not equal causation?

Third variables and reverse causality can explain the relationship.

85
New cards

Why use both inductive and deductive reasoning in science?

Induction builds theory from data; deduction tests predictions → iterative refinement.