This work compares two machine learning approaches that use the reject option, i.e., the possibility of abstaining from outputting a prediction when the model is not confident about it, thus rejecting the sample. Using data from the MIMIC-IV database, we demonstrate the usability of such models for predicting patients' mortality risk in the ICU while maintaining an adequate rate of accepted samples. Two strategies are compared. The first is a single-model classifier, trained to directly predict the mortality of an ICU patient, which rejects a sample if the model's confidence falls below a certain threshold. The second is a double-model boosted classifier, in which a preceding model determines whether a sample is predictable, i.e., whether it is likely to be classified correctly; only if it is does a second model output a mortality prediction, otherwise the sample is rejected, since its prediction would likely be random. The hypothesis is that the second strategy can give better results than the first under a trade-off between the error rate and the amount of rejected samples. We found that the two models are confident about two different classes: the Classifier-only Model is more confident in accepting and classifying ICU stays in which the patient deceases, whereas the Boosted Reject Option Model considers those cases more difficult to predict and thus rejects them.
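The two strategies can be sketched as follows on synthetic data. The data generator, the fixed logistic score standing in for a trained classifier, the confidence threshold, and the nearest-neighbour predictability gate are all illustrative assumptions, not the actual MIMIC-IV setup or the models used in this work.

```python
import math
import random

random.seed(0)

def make_data(n):
    # two overlapping 1-D Gaussian classes standing in for ICU stays
    data = []
    for _ in range(n):
        y = random.randint(0, 1)           # 1 = patient deceases (toy label)
        data.append((random.gauss(1.5 * y, 1.0), y))
    return data

def prob_deceased(x):
    # fixed logistic score as a stand-in for a trained classifier
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.5)))

def single_model_reject(x, threshold=0.75):
    # Strategy 1: one classifier; reject if its confidence is below threshold
    p = prob_deceased(x)
    if max(p, 1.0 - p) < threshold:
        return None                        # reject the sample
    return int(p >= 0.5)

def fit_predictability(train, k=25, min_frac=0.8):
    # Strategy 2, first stage: decide whether a sample will be predicted
    # correctly, here via the base classifier's accuracy on the k nearest
    # training samples (a deliberately simple stand-in for a learned model)
    labeled = [(x, int((prob_deceased(x) >= 0.5) == y)) for x, y in train]
    def predictable(x):
        nearest = sorted(labeled, key=lambda t: abs(t[0] - x))[:k]
        return sum(c for _, c in nearest) / k >= min_frac
    return predictable

def boosted_reject(x, predictable):
    # Strategy 2, second stage: predict only if the sample is predictable
    if not predictable(x):
        return None                        # reject: likely a random guess
    return int(prob_deceased(x) >= 0.5)

train, test = make_data(1000), make_data(500)
predictable = fit_predictability(train)
for name, predict in [("single-model", single_model_reject),
                      ("boosted", lambda x: boosted_reject(x, predictable))]:
    accepted = [(x, y) for x, y in test if predict(x) is not None]
    correct = sum(predict(x) == y for x, y in accepted)
    print("%s: accept rate %.2f, accuracy on accepted %.2f"
          % (name, len(accepted) / len(test), correct / len(accepted)))
```

Both strategies trade coverage for accuracy: raising the confidence threshold (or the required fraction of correctly classified neighbours) rejects more samples but makes the accepted predictions more reliable, which is the trade-off the comparison is built around.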