Introduction: Traditional vehicle safety assurance frameworks are challenged by Automated Driving Systems (ADSs), which enable dynamic driving tasks to be performed without the active involvement of a human driver. Further, an ADS's driving functionality can be changed during in-service operation through software updates developed using Machine Learning (ML). Lessons from real-world cases will be a key input to reforming current regulatory frameworks to assure ADS safety. However, ADSs are yet to be deployed in mass volumes, and limited data are available regarding their in-service safety performance.

Method: To overcome these limitations, a collective case study was undertaken, drawing upon three relevant real-world cases in which automated control systems were a causative factor in major transport safety incidents.

Results: A range of findings were identified, which informed recommendations for reform. The study found that some assurance processes, decisions and oversight were not commensurate with risk or safety integrity levels, including a lack of independence in reviews and approvals of safety-critical system components. Two cases were also affected by conflicts of interest or bias in regulatory approvals. Other commonalities included a lack of safeguards to ensure systems were not operated outside their design domain, and a lack of system redundancy to ensure safe operation if a system component fails. Further, the identification and validation of system responses to scenarios that could be encountered within design domain boundaries were lacking. For the two cases in which safety-critical functionality was developed using ML, it is concerning that no regulator reports provided detailed findings regarding the role of ML models, algorithms, or training data.