Design of Experiments and Analysis of Variance
Elements of a Designed Experiment
Completely Randomized Design: Single Factor
Definition: A design in which experimental units are randomly assigned to treatments, so that responses are independent and subject-to-subject differences are spread evenly across treatments.
Characteristics:
- One factor or independent variable considered.
- Analyzed using one-way Analysis of Variance (ANOVA).
Example: Study assessing consumer preference for bottled water brands.
- Factor: Bottled water brand, with treatments (levels) Brand A, Brand B, and Brand C.
- Dependent Variable: Taste preference rated on a scale of 1-10.
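A sketch of how this design's data might be laid out in Python (the ratings themselves are hypothetical, since the example gives no numbers):

```python
# Hypothetical taste-preference ratings (1-10) for a completely
# randomized design with one factor (brand) at three levels
ratings = {
    "Brand A": [7, 8, 6, 9, 7],
    "Brand B": [5, 6, 5, 7, 6],
    "Brand C": [8, 9, 7, 8, 9],
}

# Each subject is randomly assigned to exactly one brand,
# so the three samples are independent
for brand, scores in ratings.items():
    print(f"{brand}: n = {len(scores)}, mean = {sum(scores)/len(scores):.1f}")
```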
ANOVA F-Test
Purpose: Tests the equality of means across two or more populations.
- Involves one nominal independent variable and one interval/ratio scaled dependent variable.
Partitioning Total Variation:
- Total variation is divided into:
- Between Groups Variation (due to treatment)
- Within Groups Variation (error).
Test Statistic: F = MST / MSE
- MST: Mean Square for Treatment
- MSE: Mean Square for Error.
Degrees of Freedom:
- Numerator: ν1 = k - 1
- Denominator: ν2 = n - k
- Where k = number of groups and n = total sample size.
ANOVA Summary Table:
- Structure includes sources of variation, degrees of freedom, sum of squares, mean square, and the F statistic for treatment and error.
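As a sketch, the partitioning, mean squares, and F statistic can be computed directly from data; the taste-preference scores below are hypothetical:

```python
import numpy as np

# Hypothetical taste-preference scores (1-10) for three bottled water brands
groups = [
    np.array([7, 8, 6, 9, 7]),   # Brand A
    np.array([5, 6, 5, 7, 6]),   # Brand B
    np.array([8, 9, 7, 8, 9]),   # Brand C
]

k = len(groups)                      # number of treatments
n = sum(len(g) for g in groups)      # total sample size
grand_mean = np.concatenate(groups).mean()

# Between-groups (treatment) and within-groups (error) sums of squares
sst = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

mst = sst / (k - 1)   # Mean Square for Treatment
mse = sse / (n - k)   # Mean Square for Error
f_stat = mst / mse    # Test statistic F = MST / MSE

# ANOVA summary table
print("Source      df   SS       MS       F")
print(f"Treatment   {k-1:<4d} {sst:<8.3f} {mst:<8.3f} {f_stat:.3f}")
print(f"Error       {n-k:<4d} {sse:<8.3f} {mse:<8.3f}")
print(f"Total       {n-1:<4d} {sst+sse:<8.3f}")
```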
Critical Value for F-Test
- Determine critical value using F-distribution based on degrees of freedom
- If computed F > critical value, reject the null hypothesis (H0).
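A minimal sketch of this decision rule with SciPy; the alpha level, degrees of freedom, and computed F value are illustrative choices:

```python
from scipy import stats

alpha = 0.05
k, n = 3, 15                 # illustrative: 3 groups, 15 observations total
df1, df2 = k - 1, n - k      # numerator and denominator degrees of freedom

f_crit = stats.f.ppf(1 - alpha, df1, df2)   # upper-tail critical value
f_computed = 8.30                            # e.g., from the ANOVA table

print(f"Critical value F({df1}, {df2}) at alpha={alpha}: {f_crit:.3f}")
if f_computed > f_crit:
    print("Reject H0: at least two treatment means differ.")
else:
    print("Fail to reject H0.")
```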
Conditions for Valid ANOVA F-test
- Samples must be randomly selected from k treatment populations.
- All k populations must approximately follow a normal distribution.
- All k population variances must be equal.
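These conditions can be checked informally with SciPy's Shapiro-Wilk test (normality within each group) and Levene's test (equal variances); the data here are hypothetical:

```python
import numpy as np
from scipy import stats

groups = [
    np.array([7, 8, 6, 9, 7]),
    np.array([5, 6, 5, 7, 6]),
    np.array([8, 9, 7, 8, 9]),
]

# Normality: Shapiro-Wilk test on each treatment group
for i, g in enumerate(groups, start=1):
    stat, p_normal = stats.shapiro(g)
    print(f"Group {i}: Shapiro-Wilk p = {p_normal:.3f}")

# Equal variances: Levene's test across all groups
stat, p_levene = stats.levene(*groups)
print(f"Levene's test p = {p_levene:.3f}")
# Small p-values (e.g., < 0.05) would cast doubt on the assumptions
```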
Multiple Comparisons of Means Methods
Purpose: To compare treatment means when ANOVA indicates treatment differences.
Tukey Method: Requires a balanced design (equal sample sizes); performs all pairwise comparisons of means.
Bonferroni Method: Works for balanced or unbalanced designs; performs pairwise comparisons.
Scheffé Method: Allows general contrasts of means; suitable for balanced or unbalanced designs.
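The Tukey method is available directly in SciPy (version 1.8 or later); a sketch using hypothetical balanced data:

```python
import numpy as np
from scipy.stats import tukey_hsd

# Hypothetical balanced data: equal sample sizes per brand
brand_a = np.array([7, 8, 6, 9, 7])
brand_b = np.array([5, 6, 5, 7, 6])
brand_c = np.array([8, 9, 7, 8, 9])

result = tukey_hsd(brand_a, brand_b, brand_c)
print(result)  # pairwise mean differences with confidence intervals

# result.pvalue[i, j] is the adjusted p-value for comparing group i vs j
print(f"A vs B adjusted p = {result.pvalue[0, 1]:.4f}")
```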
Experimentwise Error Rate (EER)
- Represents the probability of making at least one Type I error across all comparisons when multiple means are compared after an ANOVA.
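With m independent comparisons each at level alpha, the EER grows quickly; a quick sketch of the arithmetic (the values are illustrative):

```python
from math import comb

alpha = 0.05
k = 3                      # number of treatment means
m = comb(k, 2)             # number of pairwise comparisons

# If each comparison were an independent test at level alpha:
eer = 1 - (1 - alpha) ** m
print(f"{m} comparisons at alpha={alpha}: EER = {eer:.4f}")

# Bonferroni guards the EER by testing each comparison at alpha / m
alpha_per_test = alpha / m
print(f"Bonferroni per-comparison level: {alpha_per_test:.4f}")
```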
Practical Steps for Conducting an ANOVA
- Confirm the design is completely randomized.
- Check assumptions of normality and equal variances.
- Create an ANOVA summary table detailing variability due to treatments and error.
- If treatment means differ (based on F-test), conduct multiple comparisons as needed.
- Assess results carefully, considering potential Type II errors if the null hypothesis is not rejected.
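The steps above can be sketched end to end with SciPy's one-way ANOVA; the data and alpha level are illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical taste scores from a completely randomized design
groups = [
    np.array([7, 8, 6, 9, 7]),
    np.array([5, 6, 5, 7, 6]),
    np.array([8, 9, 7, 8, 9]),
]
alpha = 0.05

# Check assumptions (Levene's test for equal variances)
_, p_levene = stats.levene(*groups)
print(f"Levene's test p = {p_levene:.3f}")

# One-way ANOVA F-test
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

if p_value < alpha:
    # Follow up with multiple comparisons (e.g., Tukey HSD)
    print("Treatment means differ; proceed to multiple comparisons.")
else:
    # Non-rejection may reflect a Type II error (low power)
    print("No significant difference detected; consider Type II error risk.")
```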