Peer assessment is a process in which students evaluate their peers' work, with benefits for both the assessor and the assessed. It actively engages students, increases motivation by giving them a sense of ownership of the assessment process, encourages autonomy and critical analysis, broadens understanding of the topic, and enhances problem-solving, self-assessment, and soft skills. Peer assessment also benefits teachers: it reduces the strain of repetitive grading, freeing resources for teaching and the development of course materials, especially in large courses with hundreds of students. Sometimes, peer assessment is the only viable option, as in MOOCs (Massive Open Online Courses). Peer assessment has long been studied at all levels of education and has gained traction in recent years, driven by MOOCs and even the COVID-19 pandemic, which spurred the development of digital competencies. However, peer assessment remains a demanding process to carry out, albeit one that can be assisted by modern Internet technology. The contribution of this paper is twofold. First, we present the peer assessment module of Edgar, our open-source automated programming assessment system, which has been in heavy use and development for the last six years; its peer assessment module has been in use for the last three years. Second, we present our methodology and a two-year case study of peer assessment of open-ended assignments in an undergraduate Databases course, in which 500+ students per academic year had to produce an entity-relationship model for a given domain and assess their peers' submissions. We discuss our grading methodology, provide an in-depth data analysis, and present the students' opinions of the process, collected through an anonymous questionnaire. We find that the process is demanding in terms of the design of assignments and assessment questionnaires, but rewarding in the assessment phase, where the students' grades proved to be of high quality.