The Powerball method, which incorporates a power coefficient into conventional optimization algorithms, has been used in recent years to accelerate stochastic optimization (SO) algorithms, giving rise to a family of powered stochastic optimization (PSO) algorithms. Although the Powerball technique is orthogonal to existing acceleration techniques for SO algorithms (e.g., learning-rate adjustment strategies), current PSO algorithms adopt nearly the same algorithmic framework as SO algorithms; as a direct consequence, they inherit the slow convergence and unstable performance of SO on practical problems. Motivated by this gap, this work develops a new class of PSO algorithms from the perspective of biased stochastic gradient estimation (BSGE). Specifically, we first study the theoretical properties and empirical behavior of vanilla powered stochastic gradient descent (P-SGD) with BSGE. Second, to further demonstrate the benefit of BSGE in enhancing P-SGD-type algorithms, we analyze P-SGD with momentum under BSGE, both theoretically and experimentally, paying particular attention to the effect of negative momentum in P-SGD, which has been little studied in PSO. Moreover, we prove that the overall complexity of the resulting algorithms matches that of advanced SO algorithms. Finally, extensive numerical experiments on benchmark datasets confirm that BSGE substantially improves PSO. This work clarifies the role of BSGE in PSO algorithms and extends the family of PSO algorithms.
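For readers unfamiliar with the Powerball transform, the following minimal sketch illustrates the generic update the abstract refers to: the stochastic gradient is replaced element-wise by sign(g)|g|^γ with γ ∈ (0, 1), optionally combined with a (possibly negative) heavy-ball-style momentum term. The values of γ, the step size, the momentum coefficient β, and all function names are illustrative assumptions; the specific biased gradient estimator (BSGE) proposed in this work is not reproduced here.

```python
import numpy as np

def powerball(g, gamma=0.5):
    """Element-wise Powerball transform: sign(g) * |g|**gamma, with gamma in (0, 1)."""
    return np.sign(g) * np.abs(g) ** gamma

def p_sgd_momentum(grad_fn, x0, lr=0.01, gamma=0.5, beta=-0.3, n_steps=1000):
    """Illustrative P-SGD with a (possibly negative) momentum coefficient beta.

    grad_fn(x) should return a stochastic gradient estimate at x; the biased
    estimator (BSGE) studied in the paper is not implemented here.
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    for _ in range(n_steps):
        g = powerball(grad_fn(x), gamma)   # powered stochastic gradient
        m = beta * m + g                   # heavy-ball-style momentum buffer
        x -= lr * m                        # parameter update
    return x
```

In this sketch, choosing beta < 0 corresponds to the negative-momentum variant mentioned above, while beta = 0 recovers vanilla P-SGD.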