Significance Test (1 of 2)
A significance test is performed to determine whether an observed value of a statistic differs enough from a hypothesized value of a parameter to draw the inference that the hypothesized value of the parameter is not the true value. The hypothesized value of the parameter is called the "null hypothesis." A significance test consists of calculating the probability of obtaining a statistic that differs from the value specified by the null hypothesis as much as or more than the statistic obtained in the sample, given that the null hypothesis is correct. If this probability is sufficiently low, then the difference between the parameter and the statistic is said to be "statistically significant."
Just how low is sufficiently low? The choice is somewhat arbitrary, but by convention levels of 0.05 and 0.01 are most commonly used.
For instance, an experimenter may hypothesize that the size of a food reward does not affect the speed at which a rat runs down an alley. One group of rats receives a large reward and another receives a small reward for running the alley. Suppose the mean running time for the large reward were 1.5 seconds and the mean running time for the small reward were 2.1 seconds.
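One way data like these might be tested is with a two-sample t test. The sketch below is only illustrative: the individual running times are made up (only the group means of 1.5 and 2.1 seconds come from the example above), and it assumes the scipy library is available.

```python
# A hedged sketch of a two-sample t test for the rat example.
# The per-rat times below are hypothetical, chosen so the group means
# match the 1.5 s and 2.1 s given in the text.
from scipy import stats

large_reward = [1.2, 1.4, 1.5, 1.6, 1.8]   # hypothetical times, mean = 1.5 s
small_reward = [1.8, 2.0, 2.1, 2.2, 2.4]   # hypothetical times, mean = 2.1 s

# Null hypothesis: reward size does not affect running speed, i.e. the two
# population means are equal. The p-value is the probability of observing a
# difference at least this large if the null hypothesis is correct.
t_stat, p_value = stats.ttest_ind(large_reward, small_reward)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A sufficiently small p-value (for example, below the conventional 0.05
# level) would lead us to call the 0.6-second difference statistically
# significant.
```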