Statistical Significance: Accounting for Errors in Measurement
By Daniel Palazzolo, Ph.D.
We need to determine the statistical significance of a relationship whenever we use sample data, because sample data contain error. We therefore develop a null hypothesis, H0, which states that there is no relationship between the variables, i.e. the differences between the means or proportions of the two variables in the population are non-existent; they equal zero. (Perhaps the differences we see in the sample means or proportions are just a result of sampling error.) To substantiate the research hypothesis, we try to reject the null hypothesis, so we ask: how likely is it that we would observe differences this large in the sample if the true difference were zero?
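Sampling error is easy to see in a simulation. The sketch below (hypothetical data, standard library only) draws two samples from the same population, so the true difference is zero, yet the sample means still differ by chance, which is exactly the situation H0 describes.

```python
# Illustrative sketch of sampling error: two samples from the SAME
# population (true difference = 0) still produce different sample means.
import random
import statistics

random.seed(42)  # reproducible draws

# One hypothetical population with mean 50 and standard deviation 10
population = [random.gauss(50, 10) for _ in range(100_000)]

sample_a = random.sample(population, 100)
sample_b = random.sample(population, 100)

diff = statistics.mean(sample_a) - statistics.mean(sample_b)
print(f"Difference between sample means: {diff:.2f}")  # nonzero by chance alone
```

Re-running without the fixed seed gives a different nonzero difference each time; significance testing asks whether an observed difference is larger than these chance fluctuations.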
To test the null hypothesis, we construct a 95% confidence interval around the difference observed in the sample. This allows us to judge whether the null hypothesis is plausible. If the difference is more than two standard errors (2SE) from zero, or equivalently, if zero is not contained within the confidence interval, then we can say (with 95% confidence) that the difference is real, i.e. nonzero, for the population as a whole, and we reject the null hypothesis. Equivalently, using the probability value (p-value): if p is less than .05, we reject the null hypothesis. If, on the other hand, the difference is less than 2SE from zero, zero falls within the confidence interval, and the probability of getting the test statistic by chance is greater than .05, then we do not reject the null hypothesis.
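The test described above can be sketched in a few lines. The summary numbers here (means, standard deviations, sample sizes) are made up for illustration; the standard error formula for a difference of means is sqrt(s1²/n1 + s2²/n2).

```python
# Hedged sketch of the 95% confidence-interval test for a difference
# of two sample means, using hypothetical summary data.
import math

# Hypothetical sample summaries: mean, standard deviation, size per group
mean1, sd1, n1 = 54.0, 10.0, 100
mean2, sd2, n2 = 50.0, 9.0, 100

diff = mean1 - mean2                         # observed difference
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)    # standard error of the difference
test_statistic = diff / se                   # how many SEs from zero?

low, high = diff - 2 * se, diff + 2 * se     # ~95% confidence interval

print(f"difference = {diff:.2f}, SE = {se:.2f}, test statistic = {test_statistic:.2f}")
print(f"95% CI: ({low:.2f}, {high:.2f})")

if test_statistic > 2:        # equivalently: zero lies outside the CI
    print("Reject H0: the difference is statistically significant.")
else:
    print("Do not reject H0.")
```

Note that the two decision rules agree by construction: the test statistic exceeds 2 exactly when zero falls outside the 2SE interval.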
The types of statistics we use naturally depend on the level of measurement of the variables. For a relationship between two sample means, we use the difference-of-means test statistic or its p-value to test the null hypothesis; for relationships involving nominal or ordinal variables, we use chi-square or its p-value.
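For the nominal/ordinal case, chi-square can be computed by hand from a cross-tabulation of observed counts. The 2x2 table below is hypothetical; the .05 critical value for one degree of freedom is 3.84.

```python
# Minimal sketch of a chi-square test on a hypothetical 2x2 table
# of observed counts for two nominal variables.
observed = [[30, 20],   # rows: categories of one variable
            [15, 35]]   # columns: categories of the other

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under H0 (no relationship between the variables)
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(f"chi-square = {chi_square:.2f}")
if chi_square > 3.84:   # df = (2-1)*(2-1) = 1, .05 level
    print("Reject H0: the variables are related.")
else:
    print("Do not reject H0.")
```

As with the difference-of-means test, reporting software typically converts the chi-square statistic to a p-value, and the same decision rule applies: reject H0 when p < .05.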