Chapter 2: Psychological Research – Study Notes

Why research matters

  • To test claims through a systematic process of empirical verification, rather than relying on intuition, anecdotal evidence, or unsupported assumptions. Research provides objective evidence to support or refute ideas, building a more reliable body of knowledge.

Reasoning in research

  • Deductive reasoning (top-down): Starts with a general theory or premise and predicts specific results. It moves from general principles to specific conclusions, often to test hypotheses (e.g., "All living things need water; a plant is a living thing; therefore, this plant needs water"). Deduction proceeds by logical analysis.

  • Inductive reasoning (bottom-up): Draws general conclusions from specific observations. It moves from specific pieces of data to broad generalizations, often to formulate new theories (e.g., "Every raven I've seen is black; therefore, all ravens are black"). Induction is grounded in empirical observation.

  • Science typically uses both, cycling from an initial theory to deductive hypotheses, gathering empirical data, and then using inductive reasoning to refine or develop new theories based on observations. This iterative process is fundamental to the scientific method.

Key terms

  • Theory: A well-developed set of ideas that proposes an explanation for observed phenomena. A good theory is well-supported by evidence, is falsifiable, and can generate testable hypotheses. It organizes observations and predicts future events. Theories are not mere guesses; they are robust frameworks built on extensive data.

  • Hypothesis: A tentative, testable, and falsifiable statement about the relationship between two or more variables, often expressed as an "if-then" statement. It is a specific prediction about how the world will behave if the underlying theory is correct (e.g., "If students study an additional hour per day, then their test scores will improve").

Types of Research

  • Not all research is experimental; "empirical" simply means that data was collected by researchers through direct observation or experimentation, regardless of the research design.

Case studies

  • Focus on one individual, group, or unique event to gain rich, in-depth detail and understanding. They are particularly useful for studying rare phenomena or providing insights into complex psychological processes. However, conclusions drawn from case studies have limited generalizability to the wider population due to the unique nature of the subject.

Naturalistic observation

  • Involves observing behavior in its natural setting without intervention to capture genuine, unfiltered actions. This method yields high ecological validity, as behavior is seen in a real-world context. A significant challenge is observer bias, where researchers' expectations influence their perceptions of what they observe. To mitigate this, multiple observers can be used (inter-rater reliability), and strict operational definitions for behaviors are established.

Surveys

  • Gather large amounts of data from a representative sample of a population by asking questions, typically through questionnaires or interviews. They are efficient for collecting information on attitudes, beliefs, and behaviors across many individuals. However, responses can be influenced by social desirability bias (people answering in a way they think is socially acceptable) or simply by people lying. The depth of information per person is typically limited compared to case studies.

Archival research

  • Uses existing records or data sets (e.g., historical documents, medical records, public statistics) to answer research questions. It is cost-effective and often allows for the study of phenomena over long periods. A key limitation is that researchers have no control over the original data collection methods, and the content cannot be modified or tailored to specific research needs.

Timing in research

  • Cross-sectional: Compares multiple groups of individuals (e.g., different age groups) at a single point in time. This method is relatively fast but can be susceptible to cohort effects, where differences between groups are due to their distinct generational experiences rather than the variable being studied (e.g., age).

  • Longitudinal: Measures the same group of individuals repeatedly over an extended period. This method allows researchers to observe changes and developments within individuals over time. A significant risk is attrition, where participants drop out of the study, potentially biasing the results if those who leave differ systematically from those who remain.

Correlations

  • Describes the relationship between two or more variables, indicating how they change together. This relationship is measured by a correlation coefficient, denoted r, which ranges from -1 to +1 (a short computation sketch appears at the end of this section).

    • An r value close to +1 indicates a strong positive correlation: as one variable increases, the other also increases (e.g., height and weight often increase together).

    • An r value close to -1 indicates a strong negative correlation: as one variable increases, the other decreases (e.g., the amount of time spent watching TV might negatively correlate with academic grades).

    • An r value close to 0 indicates a weak or no linear relationship.

  • Important: Correlation does not imply causation. Even if two variables are strongly related, it does not mean one causes the other. Confounding variables (or third variables) can create the false impression of a direct causal link (e.g., ice cream sales and drownings are positively correlated, but a third variable—summer heat—causes both).

  • Issues: People often fall prey to illusory correlations (seeing relationships that do not exist, often due to an overemphasis on two co-occurring unusual events) and confirmation bias (selectively noticing and remembering information that aligns with one's existing beliefs while ignoring contradictory evidence).
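
  • A minimal Python sketch of the correlation coefficient described above, using invented study-hours and exam-score data (the numbers are illustrative only, not findings from the text):

      # Minimal sketch: computing a Pearson correlation coefficient r by hand.
      # The data below are invented for illustration.
      from math import sqrt

      study_hours = [1, 2, 3, 4, 5, 6]        # hypothetical variable X
      exam_scores = [55, 60, 62, 70, 75, 80]  # hypothetical variable Y

      def pearson_r(x, y):
          n = len(x)
          mean_x = sum(x) / n
          mean_y = sum(y) / n
          # Covariance of X and Y divided by the product of their spreads.
          cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
          spread_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
          spread_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
          return cov / (spread_x * spread_y)

      print(f"r = {pearson_r(study_hours, exam_scores):.2f}")  # about 0.99: a strong positive correlation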

Cause-and-effect

  • Established only through carefully designed experiments, which involve directly manipulating one variable and measuring its impact on another.

    • An Independent Variable (IV) is the factor that researchers actively manipulate or control (e.g., amount of a drug, type of therapy).

    • A Dependent Variable (DV) is the outcome or effect that is measured; it is hypothesized to change in response to the IV manipulation (e.g., symptom reduction, test performance).

  • Experimental group: The group of participants who receive the experimental manipulation or treatment for the IV.

  • Control group: The group that does not receive the experimental manipulation, serving as a baseline for comparison to assess the effect of the IV.

  • To reduce bias from both experimenters and participants, researchers use blinding techniques:

    • Single-blind study: Participants are unaware of whether they are in the experimental or control group, helping to control for demand characteristics and placebo effects.

    • Double-blind study: Both participants and the researchers who interact with them are unaware of group assignments, further reducing bias.

  • Operational definitions: Precise, measurable descriptions of how abstract concepts (e.g., "aggression," "intelligence") will be measured and manipulated in a study. For example, "aggression" might be operationally defined as the number of times a child hits a doll within a 10-minute period.

Selecting participants

  • Random sampling: The gold standard for recruiting participants, where every member of the target population has an equal chance of being selected for the study. This makes the sample likely to be representative of the larger population, allowing findings to be generalized. A population is the entire group of individuals the researcher is interested in, while a sample is the subset of the population actually studied.

  • Random assignment (not to be confused with random sampling) is crucial in experimental designs, where participants are randomly assigned to either the experimental or control group to ensure groups are equivalent at the start of the study, thus minimizing pre-existing differences.
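
  • A minimal Python sketch of the difference between the two, using the standard library's random module (population size, sample size, and group split are hypothetical):

      # Minimal sketch: random sampling draws participants from the population;
      # random assignment then splits those participants into groups.
      # Sizes and labels are hypothetical, for illustration only.
      import random

      population = [f"person_{i}" for i in range(1000)]  # everyone of interest
      sample = random.sample(population, 40)             # random sampling: pick 40 participants

      random.shuffle(sample)                             # random assignment:
      experimental_group = sample[:20]                   # half receive the manipulation
      control_group = sample[20:]                        # half serve as the baseline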

Results and reporting

  • Statistical significance (p < 0.05): A statistical criterion indicating that the observed results are unlikely to have occurred by chance alone. A p-value below 0.05 means that, if the null hypothesis were true, results at least as extreme as those observed would occur less than 5% of the time, so researchers reject the null hypothesis (a short sketch after this list illustrates the decision rule).

  • Findings are subjected to peer review—a rigorous process where other experts in the field evaluate the research methodology, results, and conclusions for quality, validity, and accuracy before publication in scientific journals. This process ensures scientific rigor and credibility.
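
  • A minimal sketch of the p < 0.05 decision rule described above, assuming SciPy is available and using invented scores for an experimental and a control group:

      # Minimal sketch: independent-samples t-test comparing two groups' DV scores.
      # All numbers are invented for illustration.
      from scipy import stats

      experimental = [78, 82, 85, 80, 79, 84, 88, 81]  # scores after the treatment
      control = [72, 75, 70, 74, 73, 76, 71, 77]       # baseline group, no treatment

      t_stat, p_value = stats.ttest_ind(experimental, control)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

      # Conventional decision rule: reject the null hypothesis when p < 0.05.
      if p_value < 0.05:
          print("Statistically significant: unlikely if the null hypothesis were true.")
      else:
          print("Not statistically significant at the 0.05 level.")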

Recognizing good science

  • Reliability: Refers to the consistency and reproducibility of a measure over time or across different situations, yielding similar results each time. Types include:

    • Test-retest reliability: Consistent results when the same test is given to the same individuals on different occasions (one way to estimate it is sketched at the end of this section).

    • Inter-rater reliability: Consistent results when different observers or raters score the same behavior.

    • Internal consistency: Consistency of results across items within a single test.

  • Validity: Refers to the extent to which a measure accurately assesses what it is intended to measure. Types include:

    • Construct validity: The extent to which a test measures the theoretical construct it purports to measure.

    • Internal validity: The extent to which an experiment establishes a cause-and-effect relationship between the IV and DV, free from confounding variables.

    • External validity: The extent to which research findings can be generalized to other situations, people, and contexts (i.e., real-world applicability).

  • A valid measure is always reliable, but a reliable measure is not necessarily valid (e.g., a broken scale consistently reads 5 pounds too low; it's reliable but not valid).
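
  • A minimal sketch of one common way to estimate test-retest reliability: correlate the same participants' scores from two administrations of the same test (scores are invented, and SciPy is assumed to be available):

      # Minimal sketch: test-retest reliability as the correlation between
      # two administrations of the same test. Scores are invented for illustration.
      from scipy import stats

      time_1 = [12, 15, 9, 20, 18, 14, 11, 17]   # each participant's score at time 1
      time_2 = [13, 14, 10, 19, 18, 15, 11, 16]  # same participants, retested later

      r, _ = stats.pearsonr(time_1, time_2)
      print(f"test-retest reliability r = {r:.2f}")  # values near +1 indicate consistency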

Ethics in research

  • Human subjects: All research involving human participants is governed by IRBs (Institutional Review Boards). IRBs review research proposals to ensure the protection of participants' rights and well-being. Key principles include:

    • Informed consent: Participants must be fully informed about the study's purpose, procedures, potential risks, and benefits before agreeing to participate, and they must explicitly consent.

    • Voluntary participation: Participants must be free to decline participation or withdraw at any time without penalty.

    • Risk/benefit analysis: Potential risks (physical, psychological, social) must be minimized and clearly outweighed by the potential benefits of the research.

    • Confidentiality/Anonymity: Participant data must be kept private, either through confidentiality (identifying information is not shared) or anonymity (no identifying information is collected).

    • Debriefing: If deception is used, participants must be fully informed about the true nature of the study afterward.

  • Animal subjects: Research involving animals is overseen by IACUC (Institutional Animal Care and Use Committee) to ensure humane treatment. Guidelines are based on the "3 Rs":

    • Replacement: Using non-animal alternatives whenever possible.

    • Reduction: Minimizing the number of animals used.

    • Refinement: Modifying procedures to minimize pain and distress.