Research Methods Exam #1


Last updated 3:03 AM on 2/10/26

59 Terms

1
New cards

Four ways of knowing

  • Intuition

  • Authority

  • Rationalism

  • Empiricism

2
New cards

Wilhelm Wundt

  • Set up the first lab to study conscious experience

    • Essentially, measuring reaction time

    • Wanted to show that the conscious experience can be studied/measured

  • Loved to muse about consciousness and what it means to exist

3
New cards

William James (functionalism)

  • Believed consciousness is an ever-changing flow of images and sensations

  • Concerned with how the mind functions to adapt to the environment

    • Admired Darwin and his theory of natural selection

4
New cards

Watson and Skinner (behaviorism)

  • Psychology MUST study observable behavior objectively

  • Watson – Little Albert

  • Skinner – Animals

5
New cards

Uncritical acceptance

Tendency to believe positive or flattering descriptions of yourself

6
New cards

Dunning-Kruger Effect

People with low ability at a task overestimate their ability (don’t know enough to know how bad they are)

7
New cards

Confirmation bias

Tendency to search for, interpret, and remember information that confirms one's preconceptions

8
New cards

Where do research questions come from?

9
New cards

Theory

  • Layperson definition – a hunch about how something works or why something happens

  • Scientific definition – a coherent explanation or interpretation of one or more phenomena

10
New cards

Hypothesis

Testable hunch or educated guess about behavior

11
New cards

Null Hypothesis

  • Predicts no effect or no difference between groups/conditions.

  • Acts as the “default” assumption to be tested

12
New cards

Alternative Hypothesis

  • Predicts there is an effect or a difference.

  • What the researcher expects or is testing for.

13
New cards

Operational Definition

  • States the exact procedures used to represent a concept.

  • Allows abstract ideas to be tested in real-world terms.

  • This is a working definition of a variable (capable of being objectively measured) and is vital when building hypotheses

14
New cards

Conceptual Definition

The definition of a construct at the abstract, theoretical level – what the concept means – as opposed to the operational definition, which specifies how it is measured

15
New cards

Independent Variable

  • The variable that the researcher thinks will influence or predict the outcome

    • Sometimes it is manipulated (e.g., type of therapy: CBT vs. no treatment).

    • Sometimes it is a pre-existing difference (e.g., sex, smoker vs. nonsmoker).

16
New cards

Dependent Variable

  • The variable that is measured to see if it changes because of the IV.

    • Think of it as the effect or outcome.

    • Example: Do exam scores (DV) differ depending on whether students had coffee (IV) that morning?

17
New cards

Confounds/Extraneous Variables

  • An unmeasured third variable that influences, or “confounds,” the relationship between an independent and a dependent variable by suggesting the presence of a spurious correlation

  • Simple definition: hidden third variables

18
New cards

Basic research

  • A research question that focuses on understanding fundamental principles and theories without immediate practical application

    • Example: How many items (digits) can be stored in our short-term memory?

19
New cards

Applied research

  • A research question that aims to solve specific, practical problems using psychological principles and theories.

    • Example: Which intervention best slows memory deterioration in aging populations?

20
New cards

Correlational research

  • Measure two variables, see if they relate

  • Correlation ≠ causation
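
The logic of measuring two variables and seeing if they relate can be sketched in a few lines of Python. The data and variable names below are hypothetical, and this is only a minimal Pearson-correlation sketch, not a substitute for a statistics package:

```python
# Pearson correlation between two measured variables (hypothetical data).
# A strong r shows association only -- it cannot establish causation.
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

hours_slept = [5, 6, 7, 8, 9]        # measured variable 1 (hypothetical)
exam_scores = [60, 65, 70, 80, 85]   # measured variable 2 (hypothetical)
r = pearson_r(hours_slept, exam_scores)  # close to +1: strong positive relation
```

Even a near-perfect r here would not show that sleep causes higher scores; a third variable (e.g., overall health) could drive both.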

21
New cards

Random assignment

  • Each participant has an equal chance of being assigned to any condition

  • Randomization spreads extraneous variables and confounds (known or unknown) evenly across groups
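
Random assignment can be sketched as shuffle-then-deal. The participant IDs and condition names below are hypothetical:

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    so each person has an equal chance of landing in any condition."""
    rng = random.Random(seed)          # seed only for reproducible demos
    pool = list(participants)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(range(20), ["treatment", "placebo"], seed=1)
# Two groups of 10; any pre-existing differences are spread by chance.
```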

22
New cards

Single-blind experiments

Participants don’t know which group they belong to (treatment VS placebo)

23
New cards

Double-blind experiments

Participants don’t know which group they belong to (treatment VS placebo)

Experimenters also don’t know which group they are treating (treatment VS placebo)

24
New cards

Lab study

  • High internal validity

  • Low external validity

25
New cards

Field study

  • High external validity

  • Low internal validity

26
New cards

Belmont Report Principles

Justice

  • Fair distribution of risks & benefits

  • Equitable burdens

  • Equal access to research findings

Beneficence

  • Do good, minimize harm

  • Maximize benefits to individuals & society

  • Avoid unnecessary risks

Respect for Persons

  • Informed consent & confidentiality

  • Respect for autonomy

  • Cultural sensitivity

27
New cards

Informed consent

A process where participants are fully informed before agreeing.

  • Requires clear, understandable language (no jargon).

  • Participation must be voluntary and free from coercion.

28
New cards

Anonymity

Participants’ identities are never collected, so responses cannot be linked to any individual

29
New cards

Confidentiality

The researcher knows participants’ identities but protects their sensitive information from unauthorized access or disclosure

30
New cards

Institutional Review Board (IRB) Composition

  • Every IRB must have at least 5 members, from diverse backgrounds.

  • Must include scientists, non-scientists, and a community member unaffiliated with the institution

31
New cards

IRB Levels of Research Risk - Exempt

Low risk, standard measures, existing data

32
New cards

IRB Levels of Research Risk - Expedited

Minimal risk (cognitive testing, memory testing, blood draws)

33
New cards

IRB Levels of Research Risk - At-Risk

  • Greater than minimal risk

  • Reviewed by full board of IRB members

  • Used for at-risk populations (e.g. children) or invasive methods

34
New cards

Conflicting interests between groups

An unavoidable ethical conflict

35
New cards

Use of deception

An unavoidable ethical conflict - some deception is necessary for valid results

36
New cards

Wakefield Study Issues

  • Sample size: only 12 children

  • Invasive procedures

  • Conflict of interest

  • Manipulated patient records

37
New cards

Science is probabilistic

  • Physics: laws (apples fall)

  • Psychology: tendencies (not everyone fits)

  • Findings are probabilistic, not absolute

  • This is why replication is essential

38
New cards

Red flags in science

  • Overclaiming ('proves')

  • Unfalsifiable ideas

  • Cherry-picked data

  • No replication / single-study claims

39
New cards

Constructs

Abstract psychological ideas that require careful definitions to measure

40
New cards

Psychometrics

This is the science of measuring psychological constructs

  • How do we make a good scale?

  • How do we test reliability and validity?

  • How do we model constructs?

41
New cards

Levels of measurement

  • Nominal

  • Ordinal

  • Interval

  • Ratio

42
New cards

Reliability

  • Reliability refers to the consistency of a measure

43
New cards

Test-Retest Reliability

  • Tests how consistent scores on a measure are across time

  • Examples: IQ tests, Big Five personality tests

44
New cards

Internal Consistency

  • refers to the consistency of people’s responses across items on a multiple-item measure (e.g., there is more than one question to respond to)

  • no unrelated items in the scale

  • Cronbach's Alpha: measure of internal consistency

    • assesses reliability
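
Cronbach’s alpha follows a simple formula: α = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch with hypothetical scale data (real analyses would use a stats library):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """items: one list per scale item; each list holds one score
    per respondent. Returns Cronbach's alpha for the scale."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item scale answered by 4 respondents (one list per item):
items = [[3, 4, 3, 5],
         [2, 4, 3, 5],
         [3, 5, 4, 5]]
alpha = cronbach_alpha(items)  # high alpha: items respond consistently
```

Values near 1 indicate that respondents answer the items consistently; a common rule of thumb treats ≥ .70 as acceptable.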

45
New cards

Interrater Reliability

  • Inter-rater reliability is the extent to which different observers are consistent in their judgments

46
New cards

Validity

  • Validity is the extent to which the scores from a measure represent the variable they are intended to

47
New cards

Face Validity

the extent to which a measurement method appears “on its face” to measure the construct of interest

48
New cards

Content Validity

The extent to which a measure covers the entire construct of interest

49
New cards

Criterion Validity

The extent to which people’s scores on a measure are correlated with an outcome (known as criteria) that one would expect them to be correlated with

50
New cards

Concurrent Validity

The extent to which a measure correlates with other measures or outcomes assessed at the same time.

51
New cards

Predictive Validity

The extent to which a measure predicts future outcomes or behaviors

52
New cards

Discriminant Validity

the extent to which a measure does not correlate with measures of conceptually distinct variables

53
New cards

Convergent Validity

The extent to which a measure correlates with other measures of the same construct

54
New cards

Demand Characteristics

Cues in an experimental setting that influence participants' behavior by suggesting the experimenter's expectations, potentially biasing the results.

(participants guess purpose)

55
New cards

Experimenter Expectancy Effect

How a researcher's expectations can inadvertently influence the behavior of participants and the outcomes of an experiment.

Subtle influence

56
New cards

Type 1 Error

  • Rejecting the null hypothesis when it is actually true (finding an effect that isn’t there)

  • false positive

57
New cards

Type 2 Error

  • Failing to reject the null hypothesis when it is actually false (missing a real effect)

  • false negative
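
The two error types reduce to a small decision table: compare reality (is the null hypothesis true?) against the decision (did we reject it?). A sketch with hypothetical function names:

```python
def classify_decision(null_is_true, rejected_null):
    """Map a statistical decision onto the Type I / Type II taxonomy."""
    if null_is_true and rejected_null:
        return "Type I error (false positive)"   # saw an effect that isn't there
    if not null_is_true and not rejected_null:
        return "Type II error (false negative)"  # missed a real effect
    return "correct decision"
```

For example, `classify_decision(True, True)` names a Type I error: the null was true, but the test rejected it anyway.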

58
New cards

Explain the difference between basic and applied research. Provide a personally relevant example.

Basic: A research question that focuses on understanding fundamental principles and theories without immediate practical application
Example: How does social media impact interpersonal relationships among teenagers?

Applied research: A research question that aims to solve specific, practical problems using psychological principles and theories.
Example: What strategies can be implemented to improve student engagement in online courses?

59
New cards

Explain the difference between reliability and validity. Can you have one without the other? Provide an example.

Reliability: refers to the consistency of a measure
Validity: the extent to which the scores from a measure represent the variable they are intended to

You can have one without the other. For example, a scale can consistently read one weight and be wrong (invalid).