The growing deployment of robots in societal contexts calls for a deeper exploration of the dynamics of trust between humans and robots, one that extends beyond traditional viewpoints emphasizing robot performance alone. In the burgeoning field of social robotics, fine-tuning a robot's personality traits is increasingly recognized as a crucial element in shaping users' experiences during human-robot interaction (HRI). Research in this field has produced trust scales that capture multiple dimensions of trust in HRI, spanning both performance-related and moral aspects. Our previous study revealed that moral trust violations by robots damage human trust more severely than performance trust violations, and that humans retaliate against robots in response to moral trust violations. In the present study, our main aim was to explore whether trust loss and retaliation tendencies following these different types of trust violations depend on whether the teammate is a human or a robot. Using multiple versions of an online search task, we examined these research questions and found that moral trust violations by robotic teammates cause significantly greater trust loss in humans than the same violations by human teammates. These findings highlight the importance of a robot's morality in shaping how humans assess its trustworthiness. For effective robot design, robots must meet moral and ethical standards that are higher than those expected of humans.