This paper deals with different models of random walks with a reinforced memory of preferential attachment type. We consider extensions of the Elephant Random Walk introduced by Schütz and Trimper (Phys Rev E 70:045101(R), 2004) with stronger reinforcement mechanisms, where, roughly speaking, a step from the past is remembered with probability proportional to its weight and then repeated with probability p. With probability 1 - p, the random walk performs a step independent of the past. The weight of the remembered step is increased by an additive factor b >= 0, making it more likely that this step is repeated again in the future. A combination of techniques from the theory of urns, branching processes and α-stable processes enables us to discuss the limit behavior of reinforced versions of both the Elephant Random Walk and its α-stable counterpart, the so-called Shark Random Swim introduced by Businger (J Stat Phys 172(3):701-717, 2018). We establish phase transitions, separating subcritical from supercritical regimes.
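
To make the reinforcement mechanism concrete, here is a minimal Python sketch of one natural reading of the dynamics described above. The symmetric ±1 step distribution, the initial weight of 1 per step, and the name reinforced_erw are illustrative assumptions, not details taken from the paper.

```python
import random

def reinforced_erw(n_steps, p, b, rng=random):
    """Sketch of a weight-reinforced Elephant Random Walk.

    At each time, with probability p a past step is recalled with
    probability proportional to its weight and repeated, and the
    recalled step's weight grows by the additive factor b >= 0;
    with probability 1 - p an independent uniform +-1 step is taken.
    Every new step enters with weight 1 (an assumption made here).
    """
    steps = [rng.choice((-1, 1))]        # first step: uniform on {-1, +1}
    weights = [1.0]
    path = [steps[0]]
    for _ in range(n_steps - 1):
        if rng.random() < p:
            # remember a past step, chosen proportionally to its weight
            k = rng.choices(range(len(steps)), weights=weights)[0]
            step = steps[k]
            weights[k] += b              # reinforce the remembered step
        else:
            step = rng.choice((-1, 1))   # innovate, independent of the past
        steps.append(step)
        weights.append(1.0)
        path.append(path[-1] + step)
    return path
```

For b = 0 the weights stay constant, so a remembered step is drawn uniformly from the past and one recovers the dynamics of the classical Elephant Random Walk; increasing b strengthens the reinforcement in the sense of the abstract.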