Forty-four research articles published in the occupational therapy literature were examined to determine the effect of design quality on study outcome. All 44 studies used a parallel-groups comparison design. Twenty-two included random assignment of subjects to groups; the remaining 22 used some nonrandom method of subject allocation. A standardized metric (i.e., effect size) was used to quantify the effect of the independent variable in each of the 44 studies. The data analysis revealed no significant difference in effect-size values between studies that involved random assignment and those that did not. The argument is made that design characteristics such as random assignment should be examined as moderator variables in any attempt to synthesize findings from multiple studies. Such an approach would treat a study's design as one of many possible variables that could influence its outcome. It would also modify the a priori assumption that one research design is inherently superior to another regardless of the research question or context.
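The standardized effect-size metric referenced above is conventionally Cohen's d for parallel-groups designs, computed from each study's group means and a pooled standard deviation. A minimal sketch, assuming pooled-SD d and using hypothetical group summaries (the abstract does not report raw data, so all numbers below are illustrative):

```python
import statistics

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study summaries (treatment vs. comparison group)
d_randomized = cohens_d(52.0, 47.0, 10.0, 10.0, 20, 20)
d_nonrandomized = cohens_d(55.0, 49.0, 12.0, 12.0, 15, 15)

# Treating design (random vs. nonrandom assignment) as a moderator
# means comparing effect sizes pooled within each design category.
randomized_ds = [0.50, 0.32, 0.61]      # illustrative values only
nonrandomized_ds = [0.48, 0.40, 0.55]   # illustrative values only
print(statistics.mean(randomized_ds), statistics.mean(nonrandomized_ds))
```

In a full synthesis, the difference between these category means would be tested for significance (e.g., with a t test or a meta-analytic moderator analysis) rather than simply compared by eye.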