Chapter 2 – Psychological Research (OpenStax)

Why Is Research Important?

  • Importance of research in psychology: informs understanding of behavior and mental processes; guides decision-making in education, policy, and practice.

  • Use of research information: evidence-based conclusions, better public policies, and improved interventions.

  • The Process of Scientific Research: knowledge is advanced through a cycle of proposing hypotheses, conducting studies, analyzing results, and creating or modifying theories based on findings; this often involves iterative refinement and replication to establish reliability.

Approaches to Research

  • Clinical or Case Studies: involve detailed study of one person or a small group; provide rich, in-depth information but have limited generalizability.

  • Naturalistic Observations: observing subjects in their natural environment without intervention; example: Jane Goodall’s chimpanzee studies; highlights real-world behavior but less control over variables.

  • Surveys: collect self-reported data from a sample; useful for broad patterns but rely on honest reporting; issues include sampling and wording effects.

  • Archival Research: examining existing records or data; often the least demanding approach because the data already exist, though the researcher has no control over what was originally recorded.

  • Longitudinal vs. Cross-sectional Research:

    • Longitudinal: follows the same group over years or more; strengths include observing development and change over time; weaknesses include attrition/drop-out and time requirements.

    • Cross-sectional: compares different age groups at one point in time; faster and cheaper but may be confounded by cohort effects.

The Scientific Method and Hypotheses

  • Scientific method in psychology: proposing hypotheses, conducting empirical investigations, and building or revising theories based on results.

  • A good hypothesis should be:

    • testable and falsifiable (able to be proven wrong)

    • often stated as an if-then statement

    • Freud’s concepts of the id, ego, and superego are not falsifiable, illustrating why falsifiability matters.

  • Example of a testable hypothesis in psychology: H: \text{If } X \text{ is manipulated, then } Y \text{ will change in a specified direction.}

Inductive and Deductive Reasoning in Research

  • Inductive reasoning: derives general conclusions from specific observations.

  • Deductive reasoning: tests a theory by deriving specific predictions from it.

  • Circular relationship: inductive and deductive processes reinforce each other in scientific inquiry; theories guide observations, which in turn refine theories.

  • Figure references: a cycle showing how hypotheses lead to research, which leads to results that inform and modify theories.

Examples and Figures Mentioned

  • Figure 2.2 Trephination: historical practice illustrating early approaches to understanding behavior and brain function.

  • Figure 2.3 D.A.R.E. program: popular in schools but research suggests it is ineffective; prompts discussion on translating findings into public policy.

  • Figure 2.4 Psychological research relies on both inductive and deductive reasoning (circular relationship).

  • Figure 2.5 Scientific knowledge is advanced by the scientific method (proposing hypotheses, conducting research, modifying theories).

Hypotheses and Concepts of Testing

  • What is a good hypothesis? Should be testable and falsifiable; Freud’s id/ego/superego example shows non-falsifiability.

  • Hypotheses are often formulated as if-then statements to specify expected relations.

Approaches to Research: Detailed Designs

  • Clinical or Case Studies: provide detailed information about a single case; not generalizable but can generate hypotheses.

  • Naturalistic Observations: observe natural behavior (Jane Goodall example); high ecological validity but less control over variables.

  • Surveys:

    • Random Sampling: participants should be randomly sampled from the target population to ensure representativeness; large samples improve generalizability.

    • Wording Effects: subtle changes in wording can shift responses (e.g., asking whether the government should "forbid" an activity versus "not allow" it).

    • Dewey Defeats Truman (1948): historical example of how a nonrepresentative sample can produce misleading poll results.

  • Archival Research: examining existing records; requires little new data collection, but the researcher cannot control how or what data were originally gathered.

Longitudinal vs Cross-Sectional Studies: Pros and Cons

  • Longitudinal studies: track the same individuals over time; strengths include analyzing development and change; drawbacks include attrition and longer durations.

  • Cross-sectional studies: compare different ages/groups at one time; quicker and cheaper but susceptible to cohort effects.

Analyzing Findings: Correlation and Causation

  • Correlational Research: examines how two variables vary together; correlation does not imply causation.

  • Illusory Correlations: perceiving a relationship where none exists, often reinforced by confirmation bias.

  • Causality in research requires experimental manipulation and control of confounding variables.

  • The Experimental Hypothesis: a specific testable prediction about how manipulating one or more factors will affect behavior or mental processes.

  • Variables in experiments:

    • Independent Variable (IV): the factor that is manipulated; the presumed cause.

    • Dependent Variable (DV): the factor that is measured; the presumed effect.

    • Confounding Variable: another factor that could influence the DV, potentially biasing results.

  • Selecting and Assigning Experimental Participants: ensuring groups are comparable and that assignment minimizes preexisting differences.

  • Interpreting Experimental Findings: use statistical analysis to determine whether group differences are meaningful beyond chance.

  • Reliability and Validity: reliability = consistency of results; validity = whether the measurement actually measures what it intends to measure. Cultural differences may affect validity.

  • Relationship: Validity implies reliability; however, reliability does not necessarily imply validity. (In other words, a measure can be consistently wrong.)

Correlation and Its Visualization

  • Correlation Coefficient r: measures the strength and direction of the relationship between two variables.

    • Range: r \in [-1.00, +1.00]

    • r = 0 indicates no linear relationship; r = +1.00 is a perfect positive relationship; r = -1.00 is a perfect negative relationship.

  • Scatterplots: graphical representation of the strength and direction of correlations; data points closer to a straight line indicate stronger relationships.

    • Examples:

    • Positive correlation: weight and height (as one increases, the other tends to increase).

    • Negative correlation: tiredness and hours of sleep (more sleep, less tired).

    • No correlation: shoe size and hours of sleep.

  • Correlation ≠ Causation: association does not reveal which variable causes the other.
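The correlation coefficient described above can be computed directly from paired observations. A minimal sketch in Python, using only the standard library; the height/weight numbers below are invented purely to illustrate a positive correlation:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient r for paired observations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical height (inches) and weight (pounds) data:
# as height increases, weight tends to increase.
heights = [60, 62, 65, 68, 71, 74]
weights = [115, 120, 140, 155, 170, 190]
print(pearson_r(heights, weights))  # strong positive correlation, near +1
```

Plotting each (height, weight) pair as a point would give the scatterplot described above; the tighter the points hug a straight line, the closer |r| is to 1.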

Illusory Correlations and Confirmation Bias

  • Illusory correlations: perceived relationships that do not exist; common with selective attention or biased interpretation.

  • Confirmation bias: tendency to search for, interpret, and recall information that confirms preconceptions while ignoring contrary evidence.

Conducting and Interpreting Experiments

  • Experiment: researchers vary one or more factors (IV) to observe effects on behavior or mental processes (DV); aim to control other factors by random assignment.

  • Manipulating factors of interest and holding others constant helps isolate effects of the IV.

  • Example focus: determining which child behaviors increase after exposure to violent television content.

What Hypothesis Are You Going to Test? Experimental Formulation

  • Hypotheses can come from careful observation or a literature review.

  • Example: Are children more likely to display violent behaviors after viewing violent TV programming?

  • Experimental Hypothesis: a specific, testable prediction about the effect of the IV on the DV.

Experimental Design: Groups and Definitions

  • Experimental Group: receives the treatment (e.g., views violent TV program).

  • Control Group: does not receive the treatment (e.g., views nonviolent TV program).

  • Operational Definitions: explicit description of how variables are measured and manipulated (e.g., what counts as "violent behaviors": punching, toy guns, kicking).

Controlling Bias and Ensuring Rigor

  • Experimenter Bias: researchers’ expectations may influence results.

  • How to mitigate bias: single-blind or double-blind procedures.

  • Double-Blind Procedure: both participants and experimenters are unaware of who receives the treatment or placebo.

  • Placebo: an inactive substance or condition given to control group.

  • Placebo Effect: observed effects due to participants’ expectations rather than the treatment.

Experimental Variables: Definitions and Design Considerations

  • Independent Variable: factor manipulated to observe its effect.

  • Dependent Variable: measured outcome.

  • Confounding Variable: extraneous variable that could influence results.

  • In an experiment, manipulations of the IV are expected to cause changes in the DV.

Practice and Application

  • Example practice item: Can people taste the difference between red and yellow Starburst during a blind taste test when their nose is closed? (Illustrates hypothesis formation, groups, and variables.)

Selecting and Assigning Experimental Participants

  • Participants are the subjects of psychological research.

  • Population: all individuals to be studied (e.g., all FSCJ students).

  • Sample: a subset of the population; must be randomly selected and large enough to generalize results.

  • Population vs Sample distinction is crucial for generalizability and external validity.

  • Random Sampling: each person in the population has an equal chance of being selected.

  • Random Assignment: after sampling, participants are assigned to experimental and control groups by chance to minimize preexisting differences.
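The distinction between random sampling (who gets into the study) and random assignment (which group each participant joins) can be sketched in a few lines of Python; the population of student IDs here is hypothetical:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical population: 1,000 student IDs (names are illustrative only).
population = [f"student_{i}" for i in range(1000)]

# Random sampling: every member of the population has an equal
# chance of being selected into the sample.
sample = random.sample(population, 50)

# Random assignment: shuffle the sample, then split it by chance
# into experimental and control groups to minimize preexisting differences.
random.shuffle(sample)
experimental_group = sample[:25]
control_group = sample[25:]

print(len(experimental_group), len(control_group))  # 25 25
```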

Variability in Sampling and Group Assignment

  • Large populations may require practical sampling strategies; samples should still be representative.

  • Figure 2.18 summarizes that researchers may work with a large population or a sample group that is a subset of the population.

  • Quasi-experimental designs: when random assignment is not possible or ethical (e.g., comparing smokers vs non-smokers); these designs have limitations for causal inference.

Analyzing Experimental Findings: Statistics and Inference

  • Statistical Analysis: determines whether differences between groups are meaningful and not due to random chance.

  • Confounds and limitations must be acknowledged; replication strengthens reliability of findings.
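One common way to ask whether a group difference exceeds chance is a two-sample t statistic. A minimal sketch using Welch's t (which does not assume equal group variances); the aggression scores below are invented for illustration, echoing the violent-TV example:

```python
import math
import statistics

def welch_t(group_a, group_b):
    """Welch's t statistic: mean difference scaled by its standard error."""
    m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
    v1, v2 = statistics.variance(group_a), statistics.variance(group_b)
    se = math.sqrt(v1 / len(group_a) + v2 / len(group_b))
    return (m1 - m2) / se

# Hypothetical aggression scores after violent vs. nonviolent TV exposure.
experimental = [7, 9, 8, 10, 9, 8]
control = [5, 6, 4, 6, 5, 5]
print(welch_t(experimental, control))  # large |t| suggests a real difference
```

In practice the t statistic is compared against a reference distribution to obtain a p-value; this sketch only shows the core idea of scaling a mean difference by the variability within groups.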

Reporting Research

  • Peer-reviewed journal articles: other experts in the field critique the work before publication, helping ensure quality; readers may also conduct related research to verify the findings.

  • Replication: essential for establishing reliability and generalizability of results; each replication adds evidence for the original findings.

  • APA Style (7th edition, 2019): recommended for psychology majors; provides guidelines for writing and attribution; the 7th edition also endorses the singular "they" as a gender-neutral pronoun.

  • References: proper citation format (example shown):

    • AuthorLastName, FirstInitial., & AuthorLastName, FirstInitial. (Year). Title of article. Title of Journal, Volume(Issue), Page Number(s). https://doi.org/number

Ethical Considerations in Psychological Research

  • Ethics: research must be ethical throughout design, conduct, and review; distinguishes human participants and animal subjects.

  • IRB: Institutional Review Board; committee of administrators, scientists, and community members that reviews proposals for research involving human participants.

  • Informed Consent: informs participants about what to expect, potential risks, and implications; participation must be voluntary; data handling is confidential.

  • Deception: may be used when knowing the study’s true purpose would change participants’ behavior; it must be followed by a full debriefing at the study’s conclusion, where participants receive complete and truthful information about the study.

  • Debriefing: post-study explanation addressing deception, study purpose, and participants’ questions.

  • Research involving animals: protected by IACUC (Institutional Animal Care and Use Committee); ensures humane treatment and welfare of animals.

Reliability, Validity, and Threats to Integrity

  • Reliability: consistency of results under similar conditions; one form is inter-rater reliability, the degree of agreement among independent observers.

  • Validity: accuracy of the instrument in measuring what it is intended to measure; cultural differences can affect validity.

  • A measure that is valid is also reliable; however, a reliable measure need not be valid.

Special Topics and Real-World Relevance

  • Vaccinations and public health: Figure 2.19 notes that some people still believe vaccines cause autism even though the research claiming that link was retracted over undisclosed financial conflicts of interest; this emphasizes ethical and scientific standards in publishing and public policy.

Ethics in Animal Research and Human Research: Summary

  • IRB oversees human research ethics; informed consent and risk disclosure are core components.

  • Deception requires debriefing; participant welfare is prioritized.

  • IACUC protects animal welfare in research settings.

Quick Reference: Key Terms and Concepts

  • Population vs. Sample

  • Random Sampling vs. Random Assignment

  • Independent Variable (IV) vs. Dependent Variable (DV)

  • Confounding Variable

  • Operational Definition

  • Reliability vs. Validity

  • Inter-rater Reliability

  • Correlation vs. Causation

  • Illusory Correlations and Confirmation Bias

  • Quasi-experimental Design

  • Statistical Significance and Replication

  • Peer Review and APA Style

  • IRB and Informed Consent

  • Debriefing and Deception

  • IACUC and Animal Research

Notes on Figures and Examples from the Text

  • Figure 2.12: Scatterplots illustrate strength and direction of correlations (positive, negative, none).

  • Figure 2.17: In an experiment, IV manipulations are expected to yield DV changes.

  • Figure 2.18: Illustration of population vs. sample in research.

  • Figure 2.19: Vaccination-autism discussion and issues surrounding publication and retraction.

Formulas and Notation

  • Correlation coefficient: r \in [-1.00, +1.00] with interpretation:

    • r = +1.00: perfect positive correlation

    • r = -1.00: perfect negative correlation

    • r = 0.00: no linear relationship

  • Hypothesis form (example): H: \text{If } X \text{ is manipulated, then } Y \text{ will change (direction specified).}

  • Key relationships:

    • IV manipulation: the IV is varied to observe its effect on the DV

    • Control variables: held constant to isolate the effect of the IV