Matrix factorizations and their multi-linear extensions, known as tensor factorizations, are widely used methods in data analysis and machine learning for feature extraction and dimensionality reduction. Recently, a new family of factorization models has emerged: tensor network (TN) factorizations. They reduce storage and computational complexity and aim to alleviate the curse of dimensionality when decomposing multi-way data. The tensor train (TT) is one of the most popular TN models, used in a wide range of areas such as quantum physics and chemistry. In this study, we improve TTs for classification tasks by combining the fundamental TT model with randomized decompositions and extending it to a distributed version following the MapReduce paradigm. As a result, the proposed approach is not only scalable but also much faster than competing algorithms, and it is able to perform large-scale dimensionality reduction, e.g., in classification tasks.
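
As a rough illustration of the core building block referred to above (a randomized low-rank factorization used inside a TT decomposition), the sketch below shows a plain TT-SVD sweep in NumPy in which each exact SVD is replaced by a randomized one. The function names `randomized_svd` and `tt_svd`, the oversampling and power-iteration parameters, and the test tensor are illustrative assumptions, not the algorithm proposed in this work, and the MapReduce-distributed part is omitted.

```python
# Illustrative sketch only: a basic TT-SVD sweep where each exact SVD is
# replaced by a randomized low-rank approximation (Halko-style range finder).
# Names and parameters are assumptions for illustration, not the paper's
# algorithm; the MapReduce distribution step is not shown.
import numpy as np


def randomized_svd(A, rank, oversample=10, n_power_iter=2, rng=None):
    """Approximate rank-`rank` SVD of A via a randomized range finder."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(rank + oversample, m, n)
    Y = A @ rng.standard_normal((n, k))          # sample the range of A
    for _ in range(n_power_iter):                # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]


def tt_svd(X, ranks, rng=None):
    """Decompose a d-way array X into TT cores with internal ranks r_1..r_{d-1}."""
    d, n = X.ndim, X.shape
    cores, r_prev = [], 1
    C = X.reshape(n[0], -1)                      # unfolding: (r_0 * n_1) x (n_2 ... n_d)
    for k in range(d - 1):
        r_k = min(ranks[k], *C.shape)
        U, s, Vt = randomized_svd(C, r_k, rng=rng)
        cores.append(U.reshape(r_prev, n[k], r_k))       # k-th TT core
        C = (s[:, None] * Vt).reshape(r_k * n[k + 1], -1)
        r_prev = r_k
    cores.append(C.reshape(r_prev, n[-1], 1))    # last core
    return cores


# Usage: decompose a random 4-way tensor and print the relative reconstruction error.
X = np.random.default_rng(0).standard_normal((8, 9, 10, 11))
cores = tt_svd(X, ranks=[5, 5, 5], rng=0)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=([-1], [0]))     # contract cores along TT ranks
Y = Y[0, ..., 0]                                 # drop boundary ranks r_0 = r_d = 1
print(np.linalg.norm(X - Y) / np.linalg.norm(X))
```

Replacing the exact SVD with a randomized one at every unfolding is what makes the sweep cheap enough to distribute, since each step only needs matrix products with a thin random test matrix rather than a full decomposition.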