Over the last few decades, technology has created many opportunities for development across diverse domains. In particular, the growth of Artificial Intelligence (AI) has the potential to change every aspect of human social interaction. In education, numerous AI systems have been built to provide novel teaching and learning solutions. Most of these applications are powered by Machine Learning (ML) algorithms, which require substantial amounts of data. Data reflects social reality, including its existing power structures and biases; to guarantee ethical applications, it is therefore important to analyse issues such as those addressed by research on Fairness, Accountability, Transparency, and Ethics (FATE). The development of ethical AI systems has been studied by many researchers. However, especially in the context of education, biased systems can threaten the learning experience, perpetuating and reinforcing existing discriminatory practices. If AI is to be used for social good, algorithms must be fair and transmit positive and respectful values to students as they develop their curricular competences and psychological well-being. There is an opportunity to develop socially sensitive and 'activist' educational applications that mitigate existing social biases and support the creation of a fairer and more respectful social world. This work focuses on how AI algorithms influence students' learning experience and the threats they may pose to it, specifically in the context of discriminatory data, which can perpetuate the biased representations of social groups present in that data.