Drawing inferences about a causal relationship between a particular intervention and the observed outcome requires a clinical experiment that controls for study conditions and systematic errors (bias). This is best achieved by randomization, which distributes known and unknown biological risk factors equally among treatment arms. Trauma and orthopedic surgery, however, occupies an exceptional position in clinical medicine. Random allocation of subjects is often considered difficult because of the tight time frame between patient presentation and the urgent need for surgical treatment, and because operative results depend on technical skill. Evidence of a true treatment effect depends not only on design issues (i.e., randomized or non-randomized treatment assignment) but also on the prior probability of efficacy and the observed effect size. Even though our knowledge of the efficacy of osteosynthesis compared with, say, plaster immobilization or a (hypothetical) placebo therapy is hardly supported by randomized trials, the biologically plausible principle of stable operative fixation of fracture fragments has established itself as the scientific basis for recommending surgical rather than other treatment options. Thus, the efficacy of a medical intervention can be convincingly demonstrated without randomization. With regard to the ultimate goals of stabilization, pain relief, and mobilization, osteosynthesis of a pertrochanteric fracture meets these criteria in terms of an all-or-none effect (so-called level Ic evidence): without the intervention, these effects will not be observed. On the other hand, endpoints such as healing and infection rates or duration of rehabilitation may be strongly influenced by confounding factors (e.g., concomitant diseases, age, or gender). Under these circumstances, the goal of quantifying the treatment effects of different interventions (i.e., interlocking nails, plates, K-wires) and of distinguishing these effects from bias may be achieved more reliably by a randomized than by a non-randomized trial. Clearly, the need for randomization depends on the choice of the main endpoint of interest. The postulated overestimation of treatment effects by non-randomized trials has been demonstrated only for methodologically weak investigations. In contrast, high-quality studies have yielded comparable findings regardless of randomization. In conclusion, there are conceivable alternatives to randomized trial designs in trauma surgery, depending on the particular clinical question and objective. It must be emphasized that these designs require similarly rigorous planning (i.e., study protocols, ethical approval, sample size considerations) and analysis of the results.