We need to determine the statistical significance of a relationship when we use sample data because sample data contain error. We therefore develop a null hypothesis, H0, which states that there is no relationship between the variables, i.e. the differences between the means or proportions of the two variables in the sample reflect no real difference in the population; they equal zero. (Perhaps the differences we see in the sample means or proportions are just a result of sampling error.) To substantiate a relationship, we want to reject the null hypothesis, so we ask: what is the probability of observing differences as large as those in the sample if the true difference were zero?
To test the null hypothesis, we construct a 95% confidence interval. This allows us to judge whether the null hypothesis of no relationship, measured by the differences in the sample data, should be rejected. If the observed difference is more than 2SE from zero (a test statistic greater than about 2), or equivalently if zero is not contained within the confidence interval, then we can say (with 95% confidence) that the differences in the sample are real, i.e. greater than zero, for the population as a whole, and we reject the null hypothesis. Alternatively, using the probability value (p-value): if p is less than .05, we can also reject the null hypothesis. If, on the other hand, the test statistic is less than 2SE, zero falls within the confidence interval, and the probability of getting the test statistic by chance is greater than .05, then we do not reject the null hypothesis.
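The logic above can be sketched in Python. This is a minimal illustration, not a full statistical routine: it computes the difference between two sample means, its standard error, an approximate 95% confidence interval using the 2SE rule of thumb, and the t-ratio. The data are invented for illustration.

```python
import math
from statistics import mean, stdev

def two_sample_test(a, b):
    """Difference of means, its standard error, approximate 95% CI,
    and t-ratio for two independent samples.

    Uses the unpooled standard error of the difference; with large
    samples the t-ratio can be read against the normal curve, which
    is where the familiar 2SE rule of thumb comes from.
    """
    diff = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    ci = (diff - 2 * se, diff + 2 * se)   # approximate 95% interval
    t_ratio = diff / se
    reject = not (ci[0] <= 0 <= ci[1])    # zero outside the interval?
    return diff, se, ci, t_ratio, reject

# Hypothetical scores for two groups (made up for illustration)
group1 = [72, 75, 71, 78, 74, 77, 73, 76, 75, 74]
group2 = [68, 70, 67, 71, 69, 72, 66, 70, 69, 68]
diff, se, ci, t_ratio, reject = two_sample_test(group1, group2)
```

Here the difference in means is 5.5 and the t-ratio exceeds 2, so zero falls outside the confidence interval and we would reject the null hypothesis.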
The statistics we use depend, naturally, on the level of measurement of the variables. We can use the test statistic (t-ratio) or the p-value to test the null hypothesis for two sample means, and chi-square or its p-value to test the null hypothesis for relationships involving nominal/ordinal variables.
Data       | Mean or Proportion             | Distribution                        | Measure of Dispersion        | Test Statistic                          | Level of Significance
Population | Population mean                | Normal curve (fixed)                | Standard deviation           | Z score                                 | 95% (±2Z)
Sample     | One mean / one proportion      | t-distribution (d.f. depends on n)  | Standard error               | t-ratio (Z when n is large)             | 95% (±2SE)
Sample     | Two means                      | t-distribution (d.f. depends on n)  | Standard error of difference | t-ratio (Z when n is large) or p-value  | 95% (±2SE)
Sample     | Two proportions                | t-distribution (d.f. depends on n)  | Standard error of difference | t-ratio (Z when n is large) or p-value  | 95% (±2SE)
Sample     | Two ordinal/nominal variables  | Chi-square (d.f. = (r-1)(c-1))      |                              | Chi-square or p-value                   | 95% (critical value)
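The last row of the table can also be sketched in Python: chi-square compares the observed counts in a cross-tabulation with the counts expected under the null hypothesis of no relationship, with (r-1)(c-1) degrees of freedom. The 2x2 table of counts below is invented for illustration.

```python
def chi_square(table):
    """Chi-square statistic and degrees of freedom for an r x c
    table of observed counts.

    Expected counts assume no relationship (the null hypothesis):
    expected = row total * column total / grand total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = sum(
        (obs - rt * ct / n) ** 2 / (rt * ct / n)
        for row, rt in zip(table, row_totals)
        for obs, ct in zip(row, col_totals)
    )
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical 2 x 2 table of counts (made up for illustration):
# rows = two groups, columns = two responses
observed = [[30, 10],
            [15, 25]]
chi2, dof = chi_square(observed)
```

For this table the statistic is about 11.4 with 1 degree of freedom, well above the .05 critical value of 3.84, so we would reject the null hypothesis.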
Copyright 2010