One-Way Analysis of Variance (ANOVA):
The total variation present in a set of observations may, under certain circumstances, be partitioned into a number of disjoint components associated with the classification of the data. This systematic methodology for partitioning the causes of variation into several components is called analysis of variance.
Let us consider an example of the yield of paddy. Suppose the crop is grown using three kinds of seeds. The variation in yield then arises partly from the variation in seed and partly from random error (e.g., whether the position of a seed was suitable for germination). This is a classic example of a one-way layout of ANOVA.
The assumptions of the analysis are:
- The observations recorded are independent.
- The parent populations from which the observations are drawn are normally distributed.
- Homogeneity of variances in the different treatment groups, i.e., the variances of all the treatment groups are equal.
One Way ANOVA:
One-way ANOVA is the theory underlying the completely randomized design (CRD).
Let us consider the following example.
A sports analyst wanted to know whether the physical weight of players differs due to the different training strategies of 5 different clubs. For this purpose, he gathered a group of players from each of the 5 clubs.
Here the weight of a player is influenced by a single treatment/factor A (Club), and the factor has 5 levels.
A (Factor): the club a professional football player plays for.
Level 1 (Cowboys): players from the Dallas Cowboys
Level 2 (Packers): players from the Green Bay Packers
Level 3 (Broncos): players from the Denver Broncos
Level 4 (Dolphins): players from the Miami Dolphins
Level 5 (Niners): players from the San Francisco Forty Niners
So, there are 5 treatments, and the ith treatment (where i = 1, 2, 3, 4, 5) is replicated ri = 17 times. That is, the setup can be looked upon as a one-way fixed effect ANOVA model, where a single factor has 5 levels and each level consists of ri = 17 observations.
For the ith level, let there be ni observations. We represent the observations in the following array:
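The array has one row per level of A, which is the standard one-way layout:

```latex
\[
\begin{array}{c|cccc}
\text{Level of } A & \multicolumn{4}{c}{\text{Observations}} \\
\hline
1 & y_{11} & y_{12} & \cdots & y_{1 n_1} \\
2 & y_{21} & y_{22} & \cdots & y_{2 n_2} \\
\vdots & \vdots & \vdots & & \vdots \\
5 & y_{51} & y_{52} & \cdots & y_{5 n_5}
\end{array}
\]
```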
The one-way fixed effect model is given by:
yij = µi + eij = µ + αi + eij,  j = 1, …, ni; i = 1, …, 5
where
yij = response corresponding to the jth observation of the ith level of A
µi = effect due to the ith level of A
µ = general effect
αi = µi − µ = additional effect due to the ith level of A, subject to Σ ni αi = 0
eij = random error in the model
The re-parameterization µi = µ + αi essentially leads to separation of the general effect from the exact effect due to the factor A.
Generally, the two hypotheses are considered as:
H0: α1 = α2 = α3 = α4 = α5 = 0 vs. H1: at least one αi ≠ 0
These hypotheses can be described as:
H0: there is no difference between the mean weights of players from different clubs
H1: at least one club's mean weight differs from the others
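For reference, H0 is tested by partitioning the total variation; a standard sketch, with k = 5 levels and N = Σ ni observations in total:

```latex
\[
SS_{T} \;=\; \sum_{i=1}^{k}\sum_{j=1}^{n_i}\bigl(y_{ij}-\bar{y}_{..}\bigr)^2
\;=\; \underbrace{\sum_{i=1}^{k} n_i\bigl(\bar{y}_{i.}-\bar{y}_{..}\bigr)^2}_{SS_{A}\ (\text{between clubs})}
\;+\; \underbrace{\sum_{i=1}^{k}\sum_{j=1}^{n_i}\bigl(y_{ij}-\bar{y}_{i.}\bigr)^2}_{SS_{E}\ (\text{within clubs})}
\]
\[
F \;=\; \frac{SS_{A}/(k-1)}{SS_{E}/(N-k)} \;\sim\; F_{k-1,\;N-k} \quad \text{under } H_0
\]
```

H0 is rejected at the 5% level when the corresponding p-value Pr(>F) is below 0.05.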
One-way ANOVA using R:
Do not choose the Excel format for this case, as it does not read the levels of a factor.
Consider the following code and output for illustration:
> library(readxl)
> player_weight <- read_excel("E:/mathematicacity/player weight.xlsx",
+   col_types = c("numeric", "text"))
> names(player_weight)
Output: "Weight" "Club"
Note that R cannot read the levels of Club here: levels(player_weight$Club) gives NULL, since read_excel() imports the column as plain text.
# Choosing the dataset (if the dataset is in .csv format)
my_data <- read.csv(file.choose())
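Since the original spreadsheet is not reproduced here, the sketch below builds a simulated stand-in with the same column names (Weight and Club; all numeric values are made up purely for illustration) and shows the factor conversion that recent versions of R require:

```r
# Simulated stand-in for the player weight dataset (illustrative values only)
set.seed(1)
my_data <- data.frame(
  Weight = round(rnorm(85, mean = 110, sd = 10), 1),
  Club   = rep(c("Broncos", "Cowboys", "Dolphins", "Niners", "Packers"),
               each = 17)
)

# In R >= 4.0, read.csv() imports Club as plain character by default;
# convert it to a factor explicitly so aov() treats it as 5 treatment levels
my_data$Club <- factor(my_data$Club)
levels(my_data$Club)
```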
Preview of the data: the first rows of the two columns, Weight and Club, are shown, with 71 more rows in the dataset.
# Show the column names and the levels of Club
names(my_data)
levels(factor(my_data$Club))
Output: "Weight" "Club"
"Broncos" "Cowboys" "Dolphins" "Niners" "Packers"
# Estimation of the model
model1 <- aov(Weight ~ Club, data = my_data)
summary(model1)
Output (table columns):
Df   Sum Sq   Mean Sq   F value   Pr(>F)
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The output includes the columns F value and Pr(>F), the latter being the p-value of the test. Since the p-value exceeds 0.05, we cannot reject the null hypothesis at the 5% level of significance.
So, we can conclude that at the 5% level of significance the physical weight of players does not differ significantly due to the different training strategies of the 5 clubs.
But if the null hypothesis were rejected, then we could perform pairwise comparisons with the help of Tukey's HSD test and find out which clubs' training strategies differ significantly. The R code for this is:
TukeyHSD(model1, conf.level = 0.95)
We will consider an example in the next portion, on two-way ANOVA, where the Tukey test is actually performed.
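As a preview of what TukeyHSD() returns, here is a minimal sketch run on simulated data (the club names match the example, but the weights are made up for illustration):

```r
# Simulated one-way layout: 5 clubs, 17 players each (illustrative values only)
set.seed(2)
sim <- data.frame(
  Weight = rnorm(85, mean = 110, sd = 10),
  Club   = factor(rep(c("Broncos", "Cowboys", "Dolphins", "Niners", "Packers"),
                      each = 17))
)
model_sim <- aov(Weight ~ Club, data = sim)

# All 10 pairwise club comparisons with 95% family-wise confidence intervals
tk <- TukeyHSD(model_sim, conf.level = 0.95)
tk$Club   # columns: diff (mean difference), lwr, upr, p adj (adjusted p-value)
```

Pairs whose adjusted p-value is below 0.05 (equivalently, whose interval excludes 0) would be the clubs that differ significantly.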
## Checking the ANOVA assumptions
# 1. Homogeneity of variances
library(car)
leveneTest(Weight ~ Club, data = my_data)
Output:
Levene's Test for Homogeneity of Variance (center = median)
       Df  F value  Pr(>F)
group   4   0.0956  0.9836
From the output above we can see that the p-value (0.9836) is greater than the significance level of 0.05. This means there is no evidence that the variance differs significantly across groups. Therefore, we can assume homogeneity of variances in the different treatment groups.
# 2. Normality
# Extract the residuals
aov_residuals <- resid(model1)
# Run the Shapiro-Wilk test
shapiro.test(x = aov_residuals)
Shapiro-Wilk normality test
data: aov_residuals
W = 0.94462, p-value = 0.001161
Since the p-value (0.001161) is below 0.05, the Shapiro-Wilk test rejects the hypothesis of normality: the residuals show some departure from the normality assumption, so the ANOVA results should be interpreted with caution.
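The Shapiro-Wilk test is usually paired with a Q-Q plot of the residuals; a minimal sketch on simulated data (illustrative values, standing in for the real dataset):

```r
# Simulated stand-in for the real dataset (illustrative values only)
set.seed(3)
sim <- data.frame(
  Weight = rnorm(85, mean = 110, sd = 10),
  Club   = factor(rep(c("Broncos", "Cowboys", "Dolphins", "Niners", "Packers"),
                      each = 17))
)
model_sim <- aov(Weight ~ Club, data = sim)
aov_res <- resid(model_sim)

# Q-Q plot of the residuals: points close to the reference line
# support the normality assumption
qqnorm(aov_res)
qqline(aov_res)

# Shapiro-Wilk test on the same residuals
shapiro.test(aov_res)
```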