Background: Researchers face many decisions when designing experiments. How many participants should be recruited? How long should the study last? Should exclusion criteria be applied to participants who fail to follow instructions? We provide an overview of bootstrap resampling and Monte Carlo simulation and demonstrate how these techniques can guide experimental design decisions.
Methods: Bootstrap resampling and Monte Carlo simulation are used to demonstrate how various aspects of experimental design – sample size, number of measurements, and exclusion criteria – influence the variability of experimental results. A toolbox for implementing resampling techniques is also provided.
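As a minimal sketch of the bootstrap idea described above (not the paper's toolbox itself): resampling a pilot dataset with replacement yields a distribution of a statistic, whose spread estimates sampling variability. The pilot data and all parameter values here are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot data: 30 reaction times (ms) from one condition.
pilot = rng.normal(loc=500, scale=50, size=30)

def bootstrap_means(data, n_boot=10_000, rng=rng):
    """Resample the data with replacement n_boot times and
    return the mean of each resample."""
    n = len(data)
    resamples = rng.choice(data, size=(n_boot, n), replace=True)
    return resamples.mean(axis=1)

boot = bootstrap_means(pilot)
# The spread of the bootstrap distribution estimates the sampling
# variability of the mean; a percentile interval summarizes it.
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

The same pattern applies to any statistic (medians, effect sizes, regression coefficients) by swapping the function applied to each resample.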
Results: Larger sample sizes decrease sampling variability, resulting in better statistical power via less variable effect sizes. In contrast, more measurements per experimental condition decrease measurement variability, resulting in better statistical power via larger effect sizes.
Finally, selecting appropriate exclusion criteria can also increase statistical power by increasing the average observed effect size.
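The sample-size result above can be illustrated with a small Monte Carlo sketch (a hypothetical two-condition design with an assumed true standardized effect of 0.5, not a result from the paper): simulating many experiments at each sample size shows that larger samples produce less variable observed effect sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_effect_sizes(n_participants, n_sims=2000, true_effect=0.5, rng=rng):
    """Monte Carlo simulation: draw two conditions whose means differ by
    true_effect (in SD units) and record the observed Cohen's d in each
    simulated experiment."""
    ds = np.empty(n_sims)
    for i in range(n_sims):
        a = rng.normal(0.0, 1.0, n_participants)
        b = rng.normal(true_effect, 1.0, n_participants)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        ds[i] = (b.mean() - a.mean()) / pooled_sd
    return ds

small = simulate_effect_sizes(n_participants=20)
large = simulate_effect_sizes(n_participants=100)
# Observed effect sizes are centered on the true effect in both cases,
# but are far less variable with the larger sample.
```

Extending the simulation to multiple measurements per condition, or to exclusion rules applied before analysis, follows the same recipe: build the generative model, apply the full analysis pipeline, and inspect the distribution of outcomes.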
Conclusions: When designing an experiment, it is important to consider not only sampling variability but also measurement variability, which is not accounted for when estimating statistical power from sample size alone. Researchers can use simulation techniques, which are flexible enough to be applied to any study design or analysis plan, to make more informed experimental design decisions.