Outlier detection is a key issue in data-driven analytics. In this paper, we propose Bi-super DEA, a super-efficiency DEA based method that constructs both an efficient and an inefficient frontier for outlier detection. To evaluate its predictive performance, we develop a novel predictive DEA procedure, PDEA, which extends conventional DEA approaches, primarily used for in-sample efficiency estimation, to predicting outputs for out-of-sample observations. This enables us to compare the predictive performance of our approach against several popular outlier detection methods, including parametric robust regression from statistics and non-parametric k-means from data mining. We conduct comprehensive simulation experiments to examine the relative performance of these outlier detection methods under five factors: sample size, linearity of the production function, normality of the noise distribution, homogeneity of the data, and the level of random noise contaminating the data generating process (DGP). We find, somewhat surprisingly, that Bi-super CCR consistently outperforms Bi-super BCC in detecting outliers. When the DGP satisfies the linearity, normality, and homogeneity conditions, the parametric robust regression method works best; when it violates these conditions, Bi-super DEA emerges as the better choice owing to its distribution-free property. Our results shed light on the conditions under which each method excels or fails and provide practical guidelines for choosing an appropriate outlier detection method.
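To make the super-efficiency idea underlying Bi-super DEA concrete, the sketch below is a minimal, illustrative Python implementation of the standard input-oriented super-efficiency CCR model, in which each unit is scored against a frontier built from all other units so that efficient units can score above one and extreme (potentially outlying) units receive very large scores. It is not the paper's Bi-super DEA or PDEA procedure (the inefficient-frontier side is omitted), and the simulated data, the `super_efficiency_ccr` helper, and the 1.2 screen level are hypothetical choices made only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency_ccr(X, Y, o):
    """Input-oriented super-efficiency CCR score for unit `o`.

    X: (n, m) inputs, Y: (n, s) outputs. Unit `o` is excluded from its own
    reference set, so frontier units can score above 1. Returns np.inf if
    the LP is infeasible (possible for extreme units).
    """
    n, m = X.shape
    s = Y.shape[1]
    peers = [j for j in range(n) if j != o]

    # Decision variables: [theta, lambda_1, ..., lambda_{n-1}]; minimise theta.
    c = np.zeros(1 + len(peers))
    c[0] = 1.0

    # Inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X[peers].T])
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y[peers].T])
    b_out = -Y[o]

    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + len(peers)),
                  method="highs")
    return res.x[0] if res.success else np.inf


# Hypothetical usage: simulate a roughly linear DGP with noise and flag
# units whose super-efficiency score exceeds a user-chosen screen level.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(50, 2))                     # two inputs
Y = X @ np.array([[0.6], [0.4]]) + rng.normal(0, 0.3, size=(50, 1))
Y = np.clip(Y, 0.1, None)                                # keep outputs positive
scores = np.array([super_efficiency_ccr(X, Y, o) for o in range(50)])
print("candidate outliers:", np.where(scores > 1.2)[0])  # 1.2 is an assumed cutoff
```

The screen level is a tuning choice rather than a fixed rule; a bi-directional method in the spirit of Bi-super DEA would additionally score units against an inefficient frontier and flag extremes on both sides.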