Based on $X \sim N_d(\theta, \sigma_X^2 I_d)$, we study the efficiency of predictive densities under $\alpha$-divergence loss $L_\alpha$ for estimating the density of $Y \sim N_d(\theta, \sigma_Y^2 I_d)$. We identify a large number of cases where improvement on a plug-in density is obtainable by expanding the variance, thus extending earlier findings applicable to Kullback-Leibler loss. The results and proofs are unified with respect to the dimension $d$, the variances $\sigma_X^2$ and $\sigma_Y^2$, and the choice of loss $L_\alpha$ with $\alpha \in (-1, 1)$. The findings apply to a large number of plug-in densities, as well as to restricted parameter spaces with $\theta \in \Theta \subset \mathbb{R}^d$. The theoretical findings are accompanied by various observations, illustrations, and implications dealing, for instance, with robustness with respect to the model variances and with simultaneous dominance with respect to the loss.
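
For concreteness, here is a minimal sketch of the setup under one standard parametrization of the $\alpha$-divergence; the exact normalization, the notation $\hat{q}$, $\hat{\theta}$, and the expansion factor $c$ are illustrative assumptions rather than the paper's definitions.
\[
  L_\alpha(\theta, \hat{q})
  \;=\; \frac{4}{1-\alpha^2}
    \left( 1 - \int_{\mathbb{R}^d}
      \hat{q}(y)^{\frac{1+\alpha}{2}} \, p_\theta(y)^{\frac{1-\alpha}{2}} \, dy \right),
  \qquad \alpha \in (-1, 1),
\]
where $p_\theta$ is the $N_d(\theta, \sigma_Y^2 I_d)$ density of $Y$; under this convention the limit $\alpha \to -1$ recovers Kullback-Leibler loss. A plug-in density and a variance-expanded competitor then take the form
\[
  \hat{q}_{\mathrm{plug}}(\cdot \,;X) = N_d\bigl(\hat{\theta}(X), \sigma_Y^2 I_d\bigr),
  \qquad
  \hat{q}_{c}(\cdot \,;X) = N_d\bigl(\hat{\theta}(X), c\,\sigma_Y^2 I_d\bigr), \quad c > 1,
\]
which is the variance-expansion device referred to above, written here only as an illustration.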