Evaluating Research

Key Learning Goals

  • Articulate the importance of replication in research.

  • Recognise common flaws in the design and execution of research.

2.6 Evaluating Research

  • Not all published research is error-free; scientists can make mistakes.

  • Replication: Repeating a study to verify results.

    • Helps identify and eliminate inaccurate findings (Pashler & Harris, 2012; Simons, 2014).

  • Replication attempts sometimes produce contradictory results, and resolving those contradictions drives scientific advancement.

  • The replication crisis in psychology refers to the finding that a substantial proportion of published studies cannot be replicated:

    • Only 36% of replication attempts confirmed the original findings in one large-scale project (Open Science Collaboration, 2015).

    • Another study found 39% replication success (Bohannon, 2015).

  • Causes of the replication crisis include poor study design and implementation.

  • Awareness of methodological issues enhances research evaluation skills.

2.6.1 Bias in Research

  • Bias: Systematic error affecting measurement in scientific investigations (Krishna et al., 2010; Sica, 2006).

    • Researchers must control for bias at the design stage so that relationships between variables can be interpreted clearly.

  • Types of bias affecting research validity:

    • Selection Bias: Occurs when the participants studied do not represent the target population; outcomes are distorted by pre-existing differences between the compared groups.

    • Research reports should therefore state the selection and exclusion criteria applied to participants (Kazerooni, 2001).

Sampling Bias

  • A subset of selection bias; occurs when certain participants are under/over-represented in the sample (McCready, 2006).

    • Example: Online invitations may skew towards socioeconomically privileged individuals.

  • Affects accuracy and representation of collected data, impacting scientific merit of research.

    • Selection bias produces non-randomly formed comparison groups; sampling bias arises when the sample itself is not drawn randomly from the population.

    • Sampling bias chiefly limits external validity (generalisability to the population), while selection bias chiefly threatens internal validity.
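The effect of sampling bias described above can be illustrated with a short simulation. The population, income figures, and recruitment-weighting scheme below are invented for illustration (not taken from the cited studies): when the probability of inclusion correlates with the variable being measured, the sample systematically misestimates the population.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 people with a (simulated) income attribute.
population = [random.gauss(50_000, 15_000) for _ in range(10_000)]

# Unbiased approach: a simple random sample.
random_sample = random.sample(population, 500)

# Sampling bias: inclusion probability grows with income, mimicking
# (as an assumption) online recruitment skewed towards privileged groups.
weights = [max(income, 1) for income in population]
biased_sample = random.choices(population, weights=weights, k=500)

def mean(values):
    return sum(values) / len(values)

print(f"Population mean:    {mean(population):,.0f}")
print(f"Random sample mean: {mean(random_sample):,.0f}")
print(f"Biased sample mean: {mean(biased_sample):,.0f}")  # typically noticeably higher
```

The random sample's mean stays close to the population mean, while the income-weighted sample overestimates it, which is exactly why non-random recruitment undermines the representativeness of collected data.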

2.6.2 Measurement Bias

  • Measurement Bias: Systematic errors during data collection (Krishna et al., 2010).

    • Controlling it is essential if an empirical study is to have internal validity (i.e., accurate measurement).

  • Common measurement biases include:

    • Instrument Bias: Faulty instruments yield incorrect data (e.g., communication barriers, calibration issues).

      • Example: Poorly designed survey questions yielding irrelevant data.

    • Insensitive Measure Bias: Instruments fail to detect significant variables (Hsu et al., 2008).

    • Experimenter Bias: Bias due to expectations influencing results.

      • Researchers may unconsciously sway participant responses in favour of their hypothesis.

      • Mitigate through blinding or non-disclosure of hypotheses to researchers.
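The blinding strategy above can be sketched as follows; the participant IDs, session codes, and group sizes are invented for illustration. The key idea is that someone other than the experimenter holds the condition assignments, so the experimenter interacts only with opaque codes.

```python
import random

random.seed(7)

# Hypothetical participant pool (IDs are illustrative).
participants = [f"P{n:03d}" for n in range(1, 21)]

# A colleague, not the experimenter, randomly assigns conditions
# and keeps this key hidden until data collection is complete.
conditions = ["treatment", "control"] * (len(participants) // 2)
random.shuffle(conditions)
assignment = dict(zip(participants, conditions))

# The experimenter only ever sees opaque session codes, so their
# expectations about the hypothesis cannot sway how sessions are run.
code_numbers = random.sample(range(100, 999), len(participants))
codes = {p: f"S{c}" for p, c in zip(participants, code_numbers)}

# After the data are collected, the key is merged back in for analysis.
unblinded = {codes[p]: assignment[p] for p in participants}
```

Because the session codes carry no information about condition, any unconscious influence the experimenter exerts cannot systematically favour one group over the other.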

2.6.3 Distortions in Self-Report Data

  • Self-report measures (interviews and questionnaires) collect data on beliefs and behaviours but depend on participants responding honestly.

  • Social desirability bias may occur when participants feel pressured to respond acceptably, especially on sensitive topics.

    • Minimise bias by forming rapport with participants and ensuring anonymity for honest responses.
