# Difference between revisions of "ANOVA"


## Revision as of 06:25, 29 February 2008

**ANOVA**

**DESCRIPTION**

Some definitions:

• The t-test is a powerful statistical test that can be used to test differences between two means.

• The null hypothesis claims that there is no difference between the groups we are testing.

• The objective of our testing is to either reject or fail to reject the null hypothesis.

• The p-value is the probability of obtaining a result at least as extreme as the one observed, under the null hypothesis.

• A Type I Error occurs when we reject a null hypothesis that is in fact true.

**WHAT IS ANOVA?**

ANOVA tests hypotheses about differences between two or more means. If independent estimates of variance can be obtained from the data, ANOVA compares the means of different groups by analyzing comparisons of variance estimates. There are two models for ANOVA: the fixed-effects model and the random-effects model (in the latter, the treatments are not fixed).

ANOVA makes some assumptions fundamental to the theory:

• Cases are independent

• Distributions are normal

• Variance of the data within groups is homogeneous

The one way ANOVA test compares several groups of observations, all of which are independent but possibly with different group means. Two way ANOVA studies the effects of two factors separately (their main effect) and together (their interaction effect).
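A one-way ANOVA like the one described above can be sketched in a few lines with SciPy's `f_oneway`; the data here are made up for illustration, with one group whose mean clearly differs from the other two.

```python
# Minimal one-way ANOVA sketch (illustrative data, not from the article).
from scipy.stats import f_oneway

# Three independent groups; group C's mean clearly differs from A and B.
group_a = [23, 25, 21, 24, 22]
group_b = [24, 26, 22, 25, 23]
group_c = [35, 37, 33, 36, 34]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value leads us to reject the null hypothesis
# that all group means are equal.
```

Note that rejecting the null only tells us that at least one group mean differs; it does not say which one.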

**HISTORY**

ANOVA was initially suggested by the British statistician Sir Ronald Aylmer Fisher in the 1920s. Fisher was educated at Harrow and Cambridge and had a strong interest in genetics. ANOVA uses Fisher's F-distribution as part of the test of statistical significance. Some of his famous papers include "On the mathematical foundations of theoretical statistics", published in the Philosophical Transactions of the Royal Society in 1922, and "Applications of Student's distribution", published in 1925.

**PRINCIPAL USE**

It is possible to use the t-test to compare more than two means, but this method raises the rate of type I errors. ANOVA (Analysis of variance) is used to test differences among multiple means without increasing the Type I error rate.

As the number of groups increases, the number of pairwise comparisons grows substantially and the calculations become overwhelming very quickly. Moreover, if we test enough pairs, some comparisons will appear significant purely by chance. ANOVA combines all the data into a single F statistic and gives us one p-value for testing the null hypothesis.
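The inflation of the Type I error rate is easy to quantify: with m independent comparisons each run at level alpha, the probability of at least one false rejection (the familywise error rate) is 1 − (1 − alpha)^m. A short sketch:

```python
# Sketch of why repeated t-tests inflate the Type I error rate.
# With m independent tests at level alpha, the familywise error rate
# (probability of at least one false rejection) is 1 - (1 - alpha)^m.

def familywise_error(m, alpha=0.05):
    """Probability of at least one Type I error across m independent tests."""
    return 1 - (1 - alpha) ** m

# Comparing k groups pairwise requires k*(k-1)/2 t-tests.
for k in (3, 5, 10):
    m = k * (k - 1) // 2
    print(f"{k} groups -> {m} t-tests -> familywise error ~ {familywise_error(m):.3f}")
```

With 10 groups the 45 pairwise t-tests push the familywise error rate to roughly 0.9 at alpha = 0.05, which is exactly the problem ANOVA's single F test avoids.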

**ADVANTAGES**

■ robust design

■ increases statistical power

In addition, a two-way ANOVA:

■ looks at interaction between factors

■ reduces random variability

■ can examine the effect of the second factor after controlling for the first factor

**SHORTCOMINGS**

■ if the null hypothesis is rejected, we know at least one group differs from the others, but with a one-way ANOVA and multiple groups, it may be difficult to determine which group is different

■ the assumptions (independence, normality, homogeneity of variance) must be satisfied
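One common follow-up to the first shortcoming is a post-hoc analysis. The sketch below uses pairwise t-tests with a Bonferroni correction (dividing alpha by the number of comparisons) on made-up data; Tukey's HSD is another standard post-hoc choice.

```python
# Post-hoc sketch: after ANOVA rejects the null, pairwise t-tests with a
# Bonferroni correction help locate which group differs (illustrative data).
from itertools import combinations
from scipy.stats import ttest_ind

groups = {
    "A": [23, 25, 21, 24, 22],
    "B": [24, 26, 22, 25, 23],
    "C": [35, 37, 33, 36, 34],
}

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)  # Bonferroni: divide alpha by number of tests

for name1, name2 in pairs:
    t_stat, p = ttest_ind(groups[name1], groups[name2])
    verdict = "differs" if p < alpha_corrected else "no evidence of difference"
    print(f"{name1} vs {name2}: p = {p:.4f} -> {verdict}")
```

Here only the comparisons involving group C survive the corrected threshold, telling us C is the group that differs.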

**EXAMPLES IN INFORMATICS**

Rennie CA, Hannan S, Maycock N, Kang C. Age-related macular degeneration: what do patients find on the internet? J R Soc Med. 2007 Oct;100(10):473-7.

Internet sites were scored for technical information, quality, and readability (SMOG, the Simple Measure of Gobbledygook), and the scores were compared using one-way ANOVA tests.

Petrovecki M, Rahelic D, Bilic-Zulle L, Jelec V. Factors influencing medical informatics examination grade--can biorhythm, astrological sign, seasonal aspect, or bad statistics predict outcome? Croat Med J. 2003 Feb;44(1):69-74.

This is an interesting study (though probably one with limited academic value). It looked at how "pseudoscientific variables" such as zodiac sign or biorhythm cycles affected a medical informatics exam grade.

In the period from the 1996/97 to the 2000/01 academic year, 382 second-year undergraduate students at the Rijeka University School of Medicine were asked to fill out an anonymous questionnaire about their attitude toward learning medical informatics after taking a Medical Informatics exam.

The answer: general learning capacity and computer habits correlated with exam grades, but there was no correlation between grades and zodiac signs, biorhythms, students' sex, or the time of year the exam was taken (so I guess my zodiac sign, and the fact that I once lived in Finchley, London, the same place where R.A. Fisher was born, had nothing to do with my selection of this study). However, the authors also came up with this masterfully understated statement: "Inadequate statistical analysis can always confirm false conclusions".

**REFERENCES**

http://digital.library.adelaide.edu.au/coll/special//fisher/18pt1.pdf

http://digital.library.adelaide.edu.au/coll/special/fisher/43.pdf