Module 3B Internal and External Validity
Introduction to Validity
- Validity in research design concerns both the accuracy of a study's results and the extent to which those results can be generalized to other populations.
- It is not only about the testing methods used, but also about the overall experimental and research design.
Types of Validity
- Two main types:
- Internal validity
- External validity
Internal Validity
- Definition: Whether the findings are valid within the study itself.
- Key Question: Was the experimental treatment the true cause of performance change, or were there other factors?
- Example: Comparing two training forms and determining if the performance improvement was due to the training difference or some other factor within the research design.
Threats to Internal Validity
History
- Definition: Events occurring to participants outside of the experimental treatment, either before, during, or after the study.
- Examples: Participant injuries, changes in diet (e.g., during the course of the research study).
Maturation
- Definition: The natural aging or developmental changes in participants during the study.
- Special Concern: Youth research due to puberty-related changes.
- Puberty can cause large changes in performance independent of the experimental treatment.
Testing
- Definition: Repeated exposure to a test can alter performance.
- Buckner et al. paper: Repeatedly performing a test improves performance through skill acquisition, not only through increased force-generating capacity.
- Example: Back squat testing and training.
- If the test (one-rep max back squat) is the same as the training exercise (repeated back squats), improvements may be due to both increased force generation and improved motor performance.
Instrumentation
- Definition: Changes in measurement tools or processes during the study.
- Example: Switching from one type of caliper to another or using different methods to measure body composition, like switching to BOD POD.
Statistical Regression
- Definition: Extreme scorers tend to move toward the group mean on retesting; very high performers tend to score lower and very low performers higher, independent of any treatment effect.
- Impact: Apparent improvements in initially low performers (or declines in initially high performers) may reflect regression toward the mean rather than the treatment, so this must be considered when interpreting results.
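Regression toward the mean falls out of measurement noise alone. A minimal simulation sketch (hypothetical numbers: true ability and test noise both normally distributed) shows that participants selected for extreme scores on a first test score closer to the group mean on a second test, with no intervention at all:

```python
import random

random.seed(1)

# Each observed score = stable "true ability" + independent measurement noise.
n = 10_000
true_ability = [random.gauss(100, 10) for _ in range(n)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]

# Select the top 5% on test 1 (extreme high performers) and compare their
# mean score on test 1 with their mean on test 2 (no treatment applied).
cutoff = sorted(test1)[int(0.95 * n)]
top = [i for i in range(n) if test1[i] >= cutoff]
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)

print(f"Top group, test 1: {mean1:.1f}")  # far above the mean of 100
print(f"Top group, test 2: {mean2:.1f}")  # still above 100, but closer to it
```

The top group's second-test mean sits between the first-test mean and the population mean, which is exactly what an uncontrolled study could mistake for a treatment effect.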
Selection
- Definition: How participants are selected can threaten internal validity.
- Bias: Selecting a group known to respond favorably can skew results.
- Issue: Cannot determine if results are due to treatment or the specific population's response.
Experimental Mortality
- Definition: Dropout rate of participants during the study.
- Impact: High dropout rates reduce the precision and generalizability of results.
- Mitigation: Researchers often recruit more participants than the minimum required sample size to account for potential dropouts.
- Reasons for Dropout: Injury, work commitments, changes in competitive schedule (especially in longitudinal studies).
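The over-recruitment point above reduces to simple arithmetic: if a fraction d of participants is expected to drop out, recruit n / (1 - d) to end with roughly n completers. A small sketch (the function name and the example numbers are illustrative, not from the source):

```python
import math

def recruits_needed(required_n: int, expected_dropout_rate: float) -> int:
    """Inflate a required sample size to offset anticipated dropout.

    Assumes dropout is roughly random at the given rate, so recruiting
    required_n / (1 - rate) participants leaves about required_n completers.
    """
    if not 0 <= expected_dropout_rate < 1:
        raise ValueError("dropout rate must be in [0, 1)")
    return math.ceil(required_n / (1 - expected_dropout_rate))

# e.g. a power analysis calls for 40 completers and ~20% dropout is expected
print(recruits_needed(40, 0.20))  # 50
```

Rounding up with `ceil` errs on the side of retaining at least the required number of completers.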
Controlling Threats
- For a study to be truly experimental, these threats must be controlled.
- However, over-controlling internal validity can negatively impact external validity.
External Validity
- Definition: The extent to which the results of a study can be generalized to other populations or settings.
- Key Question: Are the findings unique to the sample studied, or can they apply to other groups?
- Example: Can findings from rugby union players be applied to Australian rules football, soccer, or other field-based team sports?
Ecological Validity
- Definition: Research conducted in settings that closely mimic real-world applied practice.
- Challenge: Much research, especially in strength and conditioning, lacks ecological validity.
- Example: Velocity Based Training (VBT).
- Many VBT studies use Smith machines, whereas real-world training typically uses free weights, which transfer better to sport performance.
Threats to External Validity
Interaction Effect of Testing
- Definition: Participants interact with the test, gain knowledge, and then try to perform better during post-testing.
- Impact: This can reduce our ability to generalize that the experimental treatment caused any change in performance because participants have specific knowledge of the test.
Interaction Effect of Selection Bias and Experimental Treatment
- Definition: A biased sample selected to react in a certain way will not respond like a general population.
- Example: Highly skilled athletes may not respond to a sports performance intervention the same way as lower-skilled athletes.
Reactive Effects of Experimental Setting
- Definition: The experimental setting impacts performance outcomes.
- Example: Testing in a very hot environment may not be generalizable to populations not exposed to that climate.
- Consideration: Factors like heat acclimation affect result transferability.
Multiple Treatment Interference
- Definition: Repeated experiments on the same group of participants can affect subsequent experiments.
- Impact: Prior experiments alter participant responses in later experiments.
- Example: Comparing plyometric training vs. high-intensity interval training (HIIT) and following up with cycling vs. running based training using the same participants.