Predicting extreme events is one of the major goals of streamflow forecasting, but models that are reliable under such conditions are hard to come by. This stems in part from the fact that, in many cases, calibration is based on recorded time series that do not comprise extreme events. The problem is particularly relevant for data-driven models, which are the focus of this work. Based on synthetic and real-world streamflow forecasting examples, two main research questions are addressed: 1) would/should the models chosen by established practice still be selected if extreme events were taken into account, and 2) how can established practice be improved in order to reduce the risks associated with poor forecasting of extreme events? Among the data-driven models employed in streamflow forecasting, Support Vector Regression (SVR) has earned researchers’ interest due to its good comparative performance. The present contribution builds upon the theory underlying this model in order to illustrate and discuss its tendency to predictably underestimate extreme flood peaks, raising awareness of the risks this entails. While focusing on SVR, the work highlights dangers potentially present in other non-linear regularized models. The results clearly show that, under certain conditions, established practices for validation and model choice may fail to identify the best models for predicting extreme streamflow events. The paper also puts forward practical recommendations that may help avoid potential problems, namely: establishing up to what return period the model maintains good performance; privileging small λ hyperparameters in Radial Basis Function (RBF) SVR models; preferring linear models when their validation performance is similar to that of non-linear models; and making use of predictions made by more than one type of data-driven model.
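The underestimation mechanism described above can be illustrated with a minimal synthetic sketch (not from the paper; the data, hyperparameter values, and the use of scikit-learn are assumptions made here for illustration). An RBF-kernel SVR is calibrated on a persistence-like flow relation whose record contains no extreme events; because RBF kernel contributions decay with distance from the training inputs, the model's prediction for an unprecedented flood peak relaxes toward a baseline and falls well short of the true magnitude, while a linear model extrapolates the underlying relation. Note that whether the paper's λ corresponds to scikit-learn's `gamma` depends on its parameterisation of the RBF kernel.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic calibration record: next-step flow tracks current flow
# (persistence), with flows between 0 and 100 only -- no extremes.
x_train = rng.uniform(0.0, 100.0, 200).reshape(-1, 1)
y_train = x_train.ravel() + rng.normal(0.0, 2.0, 200)

# RBF SVR (gamma and C are illustrative choices) vs. a linear model.
svr = SVR(kernel="rbf", C=100.0, gamma=0.001).fit(x_train, y_train)
lin = LinearRegression().fit(x_train, y_train)

# An "extreme event" larger than anything in the calibration record.
x_extreme = np.array([[150.0]])
svr_pred = svr.predict(x_extreme)[0]
lin_pred = lin.predict(x_extreme)[0]

# The RBF SVR prediction decays toward a baseline outside the
# calibration range and underestimates the peak; the linear model
# follows the persistence relation.
print(f"RBF SVR:  {svr_pred:.1f}")
print(f"Linear:   {lin_pred:.1f}  (true relation gives about 150)")
```

Within the calibration range both models perform comparably, which is exactly why standard validation can fail to flag the non-linear model's behaviour under extremes; this mirrors the abstract's recommendations to check performance up to a stated return period and to prefer linear models when validation scores are similar.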