Research Methods: Foundations and Applications - Notes

Lecturer Information

  • Nicole Traynor

  • Associate Lecturer, School of Psychology

  • Western Sydney University

  • Email: n.traynor@westernsydney.edu.au

Acknowledgment of Country

  • Recognition of Aboriginal cultural protocol and traditional lands.

  • Western Sydney University acknowledges:

    • Darug peoples

    • Eora peoples

    • Dharawal (Tharawal) peoples

    • Wiradjuri peoples

  • Appreciation for support on traditional lands in Greater Western Sydney and beyond.

  • Mrs Janice Bruny - Tree of Knowledge.

MODULE OUTLINE

  1. Part 1:

    • Types of Research Methods in Psychology

    • The Scientific Method

  2. Part 2:

    • Variables in Psychology

    • Reliability and Validity

  3. Part 3:

    • Ethics in Psychological Research

    • Cultural Considerations in Psychological Research

Variables in Psychology

Defining and Measuring Variables
  • Variable:

    • Definition: Any factor, event, situation, behavior, or characteristic that can vary.

  • Operational Definition:

    • Definition: The set of procedures used to measure or define a variable.

    • Example: Abstract constructs such as aggression or pain can be operationally defined through specific measurement procedures (e.g., defining aggression as the number of hostile acts observed during a session).

Types of Variables
  • Independent Variable (IV):

    • Definition: The variable that is varied or manipulated by the researcher.

  • Dependent Variable (DV):

    • Definition: The response being measured in an experiment.

  • Confounding Variable:

    • Definition: An uncontrolled variable whose effects are mixed (confounded) with those of the independent variable, making it unclear whether the independent variable or the confound caused changes in the dependent variable.

Reliability and Validity

Reliability
  • Reliability:

    • Definition: Refers to the consistency and stability of a measure or research method over time. A reliable measure produces the same results under consistent conditions.

  • Types of Reliability:

    • Test-retest reliability: Consistency of results when the same test is administered to the same people at different points in time.

    • Inter-rater reliability: The level of agreement among different raters or observers measuring the same phenomenon.

    • Internal consistency: The extent to which items within a test or measure are consistent in assessing what they are intended to measure.

Validity
  • Validity:

    • Definition: Refers to the degree to which a test accurately measures the construct it aims to assess, or a study effectively addresses the hypothesis it intends to evaluate.

  • Types of Validity:

    • Construct validity: Whether a test measures the theoretical construct it claims to measure.

    • External validity: The extent to which results from a study can be generalized to, or have relevance for, settings, people, times, or measures other than those used in the study.

    • Internal validity: The degree to which a study can confidently attribute changes in the dependent variable to the independent variable, i.e., establish a cause-and-effect relationship.

Internal Validity Threats
  • Threats to Internal Validity:

    • Confounding Variables: Uncontrolled factors that vary along with the independent variable and may influence the dependent variable, potentially skewing results.

    • Placebo Effects: Changes in outcomes that occur because of participants' expectations rather than the treatment itself.

    • Experimenter Expectancy Effects: Occur when a researcher's expectations about the outcome of the study unintentionally influence participants' responses or behaviors.

    • Demand Characteristics: Cues in an experiment that suggest to participants how they should behave, potentially affecting the validity of the outcome measurement.

Part 2 Summary

  • Overview of Key Concepts:

    • What a variable is.

    • Independent variable (IV).

    • Dependent variable (DV).

    • Confounding variables.

    • Importance of reliability and validity in research.

  • Introduction to Part 3: Ethical and Cultural Considerations in Psychological Research.