The rapid growth of autonomous technology has made it possible to develop intelligent systems that can think and act like humans and govern themselves. Such systems can make ethical decisions on behalf of humans by learning their ethical preferences. The main challenge in building autonomous systems that make ethical decisions on a human's behalf is reaching agreement on ethical principles, since each person holds their own ethical beliefs. To address this challenge, we propose a hybrid approach that combines human ethical principles with automated negotiation to resolve conflicts between autonomous systems and reach an agreement that satisfies the ethical beliefs of all parties involved.
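To illustrate the idea of negotiation over ethical preferences, the following is a minimal, hypothetical sketch (the agent names, utility values, and concession schedule are illustrative assumptions, not the paper's method): two agents, each holding its own ethical utility over candidate actions, exchange alternating offers and gradually concede until one side's offer satisfies the other.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    utility: dict      # ethical utility this agent assigns to each candidate action
    threshold: float   # minimum utility the agent will currently accept

    def propose(self):
        # Offer the action this agent's ethics value most highly.
        return max(self.utility, key=self.utility.get)

    def accepts(self, action):
        return self.utility.get(action, 0.0) >= self.threshold

def negotiate(a, b, rounds=10):
    """Alternating-offers protocol: each round, both sides offer their
    preferred action; if no offer is accepted, both concede slightly."""
    for _ in range(rounds):
        for proposer, responder in ((a, b), (b, a)):
            offer = proposer.propose()
            if responder.accepts(offer):
                return offer
        # No agreement this round: both agents lower their acceptance bar.
        a.threshold *= 0.9
        b.threshold *= 0.9
    return None  # no agreement within the round limit

alice = Agent("alice", {"brake": 0.9, "swerve": 0.4}, threshold=0.8)
bob = Agent("bob", {"brake": 0.6, "swerve": 0.9}, threshold=0.8)
print(negotiate(alice, bob))  # → brake
```

Here "brake" wins because bob concedes to a threshold below his utility for it before alice would accept "swerve"; richer protocols could instead search for compromise actions acceptable under every party's ethical principles.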