Chapter 9

1. What is the purpose of hypothesis testing?

2. Why is the following explanation incorrect?

The probability value is the probability of obtaining a statistic as different from the parameter specified in the null hypothesis as the statistic obtained in the experiment. The probability value is computed assuming that the null hypothesis is true.

3. Why might an experimenter hypothesize something (the null hypothesis) that he or she does not believe is true?

4. State the null hypothesis for:
a. An experiment comparing the mean effectiveness of two methods of psychotherapy.
b. A correlational study on the relationship between exercise and cholesterol.
c. An investigation of whether a particular coin is a fair coin.
d. A study comparing a drug with a placebo on the amount of pain relief. (A one-tailed test was used.)

5. Assume the null hypothesis is that µ = 20 and that the graph shown below is the sampling distribution of the mean (M). Would a sample value of M = 24 be significant at the .05 level?



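Since the sampling distribution graph is not reproduced here, the Python sketch below only illustrates the mechanics of such a check under an assumed standard error; the value of se is an assumption made for illustration, not something given in the exercise.

```python
from scipy import stats

# Hypothetical values for illustration only. mu0 is the mean under the
# null hypothesis; se stands in for the standard error of the mean that
# would normally be read from the sampling distribution in the figure.
mu0 = 20
se = 2.0        # assumed, not taken from the exercise
m = 24          # observed sample mean

z = (m - mu0) / se
p_two_tailed = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, two-tailed p = {p_two_tailed:.4f}, "
      f"significant at .05: {p_two_tailed < 0.05}")
```
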
6. A researcher develops a new theory that predicts that introverts should differ from extraverts in their performance on a psychomotor task. An experiment is conducted, and the difference between introverts and extraverts is not significant at conventional levels; the probability value is 0.12. What should the experimenter conclude about the theory?

7. A researcher hypothesizes that the lowering of cholesterol associated with weight loss is really due to exercise. To test this, the researcher carefully controls for exercise while comparing the cholesterol levels of a group of subjects who lose weight by dieting with a control group that does not diet. The difference between groups in cholesterol is not significant. Can the researcher claim that weight loss has no effect? What statistical analysis could the researcher use to make his or her case more strongly?

8. A significance test is performed and p = .20. Why can't the experimenter claim that the probability that the null hypothesis is true is .20?

9. Why would it be wrong, according to the "Classic Neyman-Pearson" view of hypothesis testing, to write in a research report: "The effect was significant, p = .0082"?

10. For a drug to be approved by the FDA, the drug must be shown to be safe and effective. If the drug is significantly more effective than a placebo, then the drug is deemed effective. What do you know about the effectiveness of a drug once it has been approved by the FDA (assuming that there has not been a Type I error)?

11. What Greek letters are used to represent the Type I and Type II error rates?

12. What levels are conventionally used for significance testing?

13. When is it valid to use a one-tailed test? What is the advantage of a one-tailed test? Give an example of a null hypothesis that would be tested by a one-tailed test.

14. If the probability value obtained in a significance test of the null hypothesis that µ1 - µ2 = 0 is .033, what do you know about the 95% confidence interval on µ1 - µ2?
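
The following is a minimal Python sketch of the duality between a two-sided test and a confidence interval; the estimate and standard error are hypothetical numbers chosen only so that the two-tailed p-value comes out near .033, and are not part of the exercise.

```python
from scipy import stats

# Hypothetical numbers, for illustration only.
diff = 4.26      # assumed estimate of mu1 - mu2
se = 2.0         # assumed standard error of the difference

z = diff / se
p = 2 * stats.norm.sf(abs(z))          # two-tailed p-value

# 95% confidence interval on mu1 - mu2
crit = stats.norm.ppf(0.975)
ci = (diff - crit * se, diff + crit * se)
print(f"p = {p:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# With a two-tailed p below .05, the 95% interval does not contain 0.
```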

15. Distinguish between the probability value and the significance level.

16. Suppose a study were conducted on the effectiveness of a class on "How to take tests." The SAT scores of an experimental group and a control group were compared. (There were 100 subjects in each group.) The mean score of the experimental group was 503 and the mean score of the control group was 499. The difference between means was found to be significant, p = .037. What do you conclude about the effectiveness of the class?

17. Is it more conservative to use an alpha level of .01 or an alpha level of .05? Would beta be higher for an alpha of .05 or for an alpha of .01?
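
A minimal Python sketch of the alpha/beta trade-off for a two-tailed z test, assuming a hypothetical true effect expressed in standard-error units; the effect size is an assumption made purely for illustration.

```python
from scipy import stats

# For a fixed (assumed) true effect, compare beta, the Type II error rate,
# of a two-tailed z test at alpha = .05 and alpha = .01.
effect_in_se_units = 2.5   # hypothetical true effect, in standard-error units

for alpha in (0.05, 0.01):
    crit = stats.norm.ppf(1 - alpha / 2)              # two-tailed critical value
    power = stats.norm.sf(crit - effect_in_se_units)  # approximate; ignores the far tail
    beta = 1 - power
    print(f"alpha = {alpha}: beta ≈ {beta:.3f}")
```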

18. Why is "H0: M1 = M2" not a proper null hypothesis?

19. An experimenter expects an effect to come out in a certain direction. Is this sufficient basis for using a one-tailed test? Why or why not?

20. Some people claim that no independent variable is ever so weak as to have absolutely no effect and therefore the null hypothesis is never true. For the sake of argument, assume this is true and comment on the value of significance tests.

21. How do the Type I and Type II error rates of one-tailed and two-tailed tests differ?

22. A two-tailed probability is .03. What is the one-tailed probability if the effect were in the specified direction? What would it be if the effect were in the other direction?
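
As a check on the general relationship for a symmetric test statistic, here is a minimal Python sketch; it simply applies the usual splitting of a two-tailed p-value, so treat it as a verification aid rather than a substitute for reasoning through the question.

```python
# For a symmetric test statistic, the two-tailed p-value splits evenly
# across the two possible directions of the effect.
p_two_tailed = 0.03
p_if_effect_in_specified_direction = p_two_tailed / 2
p_if_effect_in_other_direction = 1 - p_two_tailed / 2
print(p_if_effect_in_specified_direction, p_if_effect_in_other_direction)
```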