Null Hypothesis Significance Testing: What you should know (Part 4)

Effect sizes in NHST

Typically, researchers report only p values and omit effect sizes altogether. Both p values and effect sizes are important in their own ways, and reporting one does not preclude the other: an effect size can be given alongside the p value. In fact, reporting both generally provides more information than either alone. When effect sizes are reported, they should be accompanied by confidence intervals to indicate the precision with which the effect has been estimated.
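As a rough illustration (not from the original article), the sketch below shows how a p value, an effect size, and a confidence interval can be reported together for two independent samples. The example data, the choice of Cohen's d as the effect size, and the equal-variance t test are assumptions made only for this demonstration.

import numpy as np
from scipy import stats

def report(group_a, group_b, alpha=0.05):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    # p value from an independent-samples t test (equal variances assumed)
    t_stat, p_value = stats.ttest_ind(a, b)
    # Cohen's d: mean difference divided by the pooled standard deviation
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    # Confidence interval for the raw mean difference (95% when alpha = 0.05)
    se = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n1 + n2 - 2)
    diff = a.mean() - b.mean()
    return {"p": p_value, "cohens_d": d,
            "mean_diff_ci": (diff - t_crit * se, diff + t_crit * se)}

# Hypothetical data: all three quantities reported together
print(report([5.1, 4.8, 5.6, 5.0, 4.9], [4.2, 4.5, 4.1, 4.7, 4.3]))

Reporting the interval alongside the p value makes clear not just whether an effect is 'statistically significant' but how precisely its size has been pinned down.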

In medical research, once a treatment has been determined to be superior, ethical considerations forbid researchers from continuing to use the inferior treatment(s) merely to estimate a precise effect size. This raises the question of how long an experiment should continue before we are 'sure enough' that one treatment is definitely superior. For any condition, only a finite number of patients will ever be treated. A small number receive the new treatment as part of a clinical trial, while the rest receive the 'standard of care' (the current 'best' or 'standard' treatment). If the trial is very small, the decision about which treatment is 'superior' is more likely to be wrong, and all subsequent patients will then receive the inferior treatment on the basis of that incorrect decision. Conversely, if the trial is continued for longer than necessary and the new treatment is indeed superior, then all the patients who receive the standard treatment in the meantime will have been needlessly given the poorer one. One suggestion is therefore to minimize the total number of patients receiving the poorer treatment, both during the trial and thereafter.
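This trade-off can be made concrete with a small simulation. The response rates, population size, and simple 'pick the arm with more cures' decision rule below are assumptions chosen purely for illustration; they do not come from the article.

import numpy as np

rng = np.random.default_rng(0)

def patients_on_poorer_treatment(n_per_arm, population=10_000,
                                 p_new=0.60, p_standard=0.50, n_sims=2_000):
    # Average number of patients (in the trial plus all who follow) who end
    # up on the inferior standard treatment, for a given per-arm trial size.
    totals = []
    for _ in range(n_sims):
        cures_new = rng.binomial(n_per_arm, p_new)
        cures_std = rng.binomial(n_per_arm, p_standard)
        poorer = n_per_arm  # the standard-care arm of the trial itself
        # If the trial picks the wrong winner, every later patient is mistreated
        if cures_std >= cures_new:
            poorer += population - 2 * n_per_arm
        totals.append(poorer)
    return np.mean(totals)

for n in (10, 50, 200, 1000):
    print(n, round(patients_on_poorer_treatment(n)))

Very small trials often pick the wrong treatment and so mistreat almost the whole population, while very large trials mistreat many patients inside the trial itself; the suggestion above amounts to choosing a trial size that minimizes the sum of the two.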

Effect sizes are most useful for research aimed at decisions with immediate practical consequences. When research is intended to test an existing theory, even small differences can increase one's confidence that the theory has some validity. Similarly, when developing a theory, the value of a result lies in stimulating thinking that can be tested by further research; here the direction of the effect matters more than its size. This is particularly true of research in which small variations are made and the direction of change is noted: if the change is an improvement, more changes of the same kind (or of greater magnitude) are made; otherwise, the direction of change is reversed.

Given the tendency of researchers to draw conclusions from a single study (as opposed to the serial experimentation Fisher advocated), spurious effects may be taken as true without further testing. It is also possible that reporting effect sizes may lead readers to overestimate the importance of results.
