CH10 Simple Experiments Cards

Page 1: Title

Chapter 10: Simple Experiments

Page 2: Today's Plan

  • 2 Variables Refresher: An overview of the key concepts of independent and dependent variables in experimental research.

  • Revisiting Causality with a Focus on Internal Validity: Exploring the relationship between causation and various forms of validity in experiments.

  • Design Types and Considering Pretest/Posttest, Posttest Only: Discussion on the different experimental designs, including their strengths and weaknesses, and how pretesting can influence the results.

  • Interrogating Causal Claims: Methods to critically analyze causal claims made in psychological research.

Page 3: Learning Objectives

  • Establish causation through experiments: Understanding how to apply three primary criteria: covariance (the extent to which variables change together), temporal precedence (the order of cause and effect), and internal validity (ensuring no alternative explanations for results).

  • Identify variables: Clearly delineate between independent variables (IVs), dependent variables (DVs), and control variables in experimental designs.

  • Classify experiment designs: Recognize the differences between independent-groups designs (different subjects for different conditions) and within-groups designs (same subjects across all conditions).

  • Evaluate threats to internal validity: Factors such as design confounds (uncontrolled variables that might influence results), selection effects (biases in sample selection), and order effects (the impact of sequence of conditions) will be analyzed.

  • Interrogate experimental design using four validities: Focus on construct, external, statistical, and internal validity to assess the quality of the research.

Page 4: Variables Review

Page 5: Experimental Variables

  • Experiment: A study in which the researcher manipulates at least one variable, measures another, and uses random assignment to place participants into conditions.

  • Manipulated Variable: The researcher assigns participants to different levels of this variable (e.g., comparing notetaking methods, such as laptop versus longhand).

  • Measured Variable: This variable records the outcomes of the manipulation (e.g., number of anagrams solved by participants).

  • Independent Variable (IV): The variable that is purposely manipulated in the experiment.

  • Dependent Variable (DV): The variable that is measured to assess the effects of the IV.
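Random assignment to conditions (mentioned in the Experiment bullet above) can be sketched in a few lines of Python. This is an illustrative sketch, not from the slides; the participant IDs and condition names are hypothetical.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants, then deal them into conditions round-robin,
    so every participant has an equal chance of any condition and the
    groups end up (nearly) equal in size."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# Hypothetical notetaking experiment: 40 participants, two IV levels.
groups = randomly_assign(list(range(40)), ["laptop", "longhand"], seed=1)
```

Because assignment depends only on chance, any pre-existing participant differences are spread across conditions rather than piling up in one group.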

Page 6: Control Variables

  • Control Variable: A variable that is held constant throughout the experiment to eliminate its potential impact on the DV.

Clarification:

  • Variable: Something that is manipulated or measured in the study and takes on at least two levels.

  • Constant: Something that could have varied but is held at a single level for all participants throughout the study.

Page 7: Simple Experiment Example 1: Taking Notes

  • Study by Pam Mueller & Daniel Oppenheimer (2014): This influential study compared the effectiveness of taking notes on laptops versus longhand in a simulated lecture setting.

  • Method: Participants watched TED Talks, took notes using their assigned method, engaged in a filler activity, and were subsequently quizzed on the material.

Page 8: Example Q&A

  • How many independent variables?: Identify and distinguish the independent and dependent variables in the context of this study.

  • Causal claims analysis: Critically analyze the causal claims stemming from the experiment's findings.

Page 9: Simple Experiment Example 2: Motivating Babies

  • Study with 100 babies aged 13-18 months: Investigated the influence of motivation on behavior through two conditions: "effort" (where babies had to press a button to access toys) versus "no-effort" (toys readily available).

  • Methodology: Behavior was recorded based on the time spent with toys and the number of attempts made to press the button.

Page 10: Example Q&A

  • Identifying independent variables: Questions aimed at discerning how the independent variable and its levels are established in this experiment.

  • Causal claims evaluation: Further scrutiny of the causal relationships presented in the findings.

Page 11: Revisiting Causality

  • A detailed examination of internal validity and its significance in establishing true causation in experimental designs.

Page 12: Why Experiments Support Causal Claims

  • Establishing three crucial criteria: Covariance (the IV and DV vary together), temporal precedence (the IV is manipulated before the DV is measured, fixing the causal direction), and internal validity (no alternative, third-variable explanations for the result).

  • Careful evaluation of potential confounds is essential.

Page 13: Experiments Establish Covariance

  • The necessity of comparison groups, which include both control groups (no treatment) and treatment groups (receiving the IV).

Page 14: Experiments Establish Temporal Precedence

  • Importance of demonstrating that changes in the DV occur after manipulations of the IV, solidifying the causal direction.

Page 15: Internal Validity

  • The critical need for ruling out third-variable explanations (e.g., alternative explanations for the outcomes in the notetaking study that don't involve the IV directly).

Page 16: Design Confounds

  • Definition and impact of systematic variability that can lead to alternative explanations for the results obtained in the study.

Page 17: Selection Effects

  • An exploration of how participants systematically differ across levels of the IV, potentially skewing results.

Page 18: Avoiding Selection Effects

  • Use random assignment to conditions; with small sample sizes, use matched groups, pairing participants on a relevant characteristic and then randomly assigning within each pair.
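A matched-groups procedure can be sketched as follows: rank participants on a matching variable, pair adjacent participants, and randomly split each pair across the two conditions. This is an illustrative sketch, not from the slides; the matching variable (a prior quiz score) and participant IDs are hypothetical.

```python
import random

def matched_assignment(scores, seed=None):
    """Matched-groups design: sort participants by a matching variable,
    take adjacent pairs, and randomly send one member of each pair to
    each condition. Assumes an even number of participants."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)  # participant IDs ordered by score
    group_a, group_b = [], []
    for i in range(0, len(ranked), 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # chance decides which matched member goes where
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical matching variable: prior quiz score for six participants.
scores = {"p1": 55, "p2": 90, "p3": 72, "p4": 88, "p5": 60, "p6": 75}
a, b = matched_assignment(scores, seed=2)
```

Matching guarantees the two groups are comparable on the matched variable even when the sample is too small for random assignment alone to balance them.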

Page 19: Notetaking Study Internal Validity

  • A comprehensive breakdown of the study's methodology to highlight strengths and weaknesses related to internal validity.

Page 20: Internal Validity Considerations

  • Consideration of participant demographics and preferences that may influence the results and their generalizability.

Page 21: Design Types

  • Distinction between between-subjects designs (independent groups) and within-subjects designs (repeated measures).

Page 22: Design Types Explained

  • Independent-groups Design: Different participants are assigned to various levels of the IV.

  • Within-groups Design: The same participants experience all levels of the IV, which controls for individual differences and increases statistical power.

Page 23: Posttest-Only Design

  • An overview of designs that only test participants after they have been exposed to the IV.

Page 24: Pretest/Posttest Design

  • Discusses the framework involving random assignment to groups tested both before and after the intervention.

Page 25: Which Design Is Better?

  • Emphasizes the context-dependent nature of deciding which experimental design to adopt based on the research question at hand.

Page 26: Within-Groups Designs

  • Types include repeated-measures (the same participants respond to each condition in separate trials) and concurrent-measures (all levels of the IV presented at roughly the same time, with a single response recorded).

Page 27: Advantages of Within-Groups Designs

  • Notable for controlling individual differences as each participant acts as their own control.

Page 28: Between vs. Within Comparison

  • A detailed overview of the advantages and disadvantages inherent in both types of designs to inform methodological choice.

Page 29: Evaluating Internal Validity in Within-Groups Designs

  • Analysis of order effects and methods to mitigate their influence on the results.

Page 30: Counterbalancing for Order Effects

  • Detailed discussion of full versus partial counterbalancing techniques to manage presentation order variations.
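Full counterbalancing means every possible ordering of the conditions is used, so with k conditions there are k! orders. A minimal sketch (not from the slides, with hypothetical condition labels):

```python
from itertools import permutations

# Hypothetical within-groups conditions.
conditions = ["A", "B", "C"]

# Full counterbalancing: enumerate every possible presentation order
# (k! orders; here 3! = 6), so each participant group gets a different order.
all_orders = list(permutations(conditions))
```

Because k! grows quickly (4 conditions already need 24 orders), full counterbalancing becomes impractical with many conditions, which is what motivates partial schemes such as the Latin square.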

Page 31: Latin Square Design

  • Examination of how a Latin square implements partial counterbalancing, ensuring each condition appears in each ordinal position exactly once.

Page 32: Disadvantages of Within-Groups Designs

  • Considerations of potential order effects and demand characteristics that may arise as participants become aware of the experimental manipulation.

Page 33: Pretest/Posttest Design Clarification

  • Distinguishes the pretest/posttest design from a within-groups design: participants are measured twice, but each participant is still exposed to only one level of the IV.

Page 34: Interrogating Causal Claims

  • Validity considerations are outlined, highlighting the critical nature of assessing construct, external, statistical, and internal validity in research findings.

Page 35: Construct Validity Explained

  • Involves the quality of variable measurement and the effectiveness of manipulation within the experiment.

Page 36: External Validity Analysis

  • Generalizability to other populations and settings is discussed, highlighting factors that influence applicability.

Page 37: Statistical Validity Considerations

  • Discussion on the concepts of statistical significance and effect sizes, emphasizing the importance of rigorous statistical analysis in experimental research.

Page 38: Effect Size Measurement

  • Understanding and calculating effect sizes through Pearson's r and Cohen's d, providing insight into the practical significance of research findings.
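Cohen's d is the difference between the two group means expressed in pooled standard-deviation units. A minimal sketch of the calculation (not from the slides; the quiz scores below are hypothetical):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: difference between group means divided by the
    pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Hypothetical quiz scores for two notetaking conditions.
d = cohens_d([12, 14, 11, 13, 15], [9, 10, 8, 11, 10])
```

By convention, d near 0.2 is a small effect, 0.5 medium, and 0.8 large; unlike a p-value, d conveys how big the difference is, not just whether it is statistically detectable.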

Page 39: Internal Validity Questions

  • Critical considerations for ensuring internal validity throughout the experimental process are outlined, emphasizing the rigor required in designing experiments.

Page 40: Practice Questions

Pages 41-49: Clicker Questions

  • Interactive questions designed to reinforce learning and assess understanding of experimental concepts.

Page 50: Discussion Questions on Simple Experiments

  • Clarification of misconceptions related to control groups, the necessity of pretests, and the difference between random samples versus random assignment.
