Students taking computing courses develop skills by applying programming to a variety of problems. Over the past decade, courses have increasingly moved from manually graded assessments to automated ones. A common motivation for automated assessment is scalability, which is particularly important for high-enrollment courses; however, automation also offers reproducibility and rapid feedback. We have developed an automated assessment platform that combines static and dynamic analysis to evaluate student work. Our focus has been not on scaling but on serving student educational outcomes, both at the individual level (e.g., providing feedback in line with teaching best practices) and at the program level (e.g., ensuring that students across semesters are held to the same standard). In this paper, we report on the design, development, and introduction of an automated assessment tool to improve instruction. The tool was developed alongside a sophomore-level course on data structures & algorithms. We describe the tool's development and the techniques it uses, and we evaluate its impact on grade accuracy.
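To make the combination of static and dynamic analysis concrete, the following is a minimal sketch in Python under stated assumptions; the platform's actual checks are more extensive, and all names here (check_submission, BANNED_CALLS, the assumed Stack class in the student's stack.py) are hypothetical illustrations, not the platform's API. The static pass inspects the submission's syntax tree without executing it, while the dynamic pass imports the submission and exercises it against a small test.

```python
# Hypothetical sketch: grade a submission by combining a static pass
# (AST inspection, no execution) with a dynamic pass (running tests).
import ast
import importlib.util

BANNED_CALLS = {"eval", "exec"}  # example policy: forbid these built-ins


def static_check(path: str) -> list[str]:
    """Parse the file without running it and flag banned calls."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    issues = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                issues.append(
                    f"line {node.lineno}: call to {node.func.id}() is not allowed"
                )
    return issues


def dynamic_check(path: str) -> list[str]:
    """Import the submission and exercise it with a small test case."""
    spec = importlib.util.spec_from_file_location("submission", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs student code; sandbox in practice
    failures = []
    s = module.Stack()  # assumed assignment API: student implements Stack
    s.push(1)
    s.push(2)
    if s.pop() != 2:
        failures.append("pop() did not return the most recently pushed item")
    return failures


def check_submission(path: str) -> list[str]:
    """Combine both passes into one feedback report for the student."""
    return static_check(path) + dynamic_check(path)


if __name__ == "__main__":
    for msg in check_submission("stack.py"):
        print(msg)
```

In a real deployment, the dynamic pass would run inside a sandbox with resource limits, since it executes untrusted student code; the sketch above omits that for brevity.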