Machine learning (ML) is increasingly used across industries to automate decision-making. However, concerns about the ethical and legal compliance of ML models have arisen due to their lack of transparency, fairness, and accountability. Monitoring, particularly through logging, is a widely used technique in traditional software systems that could be leveraged to assist in auditing ML-based applications. Logs provide a record of an application's behavior, which can be used for continuous auditing, debugging, and analyzing both the behavior and performance of the application. In this study, we investigate how ML practitioners use logging to capture responsible ML-related information in ML applications. We analyzed 85 ML projects hosted on GitHub that use 20 responsible ML libraries spanning principles such as privacy, transparency & explainability, fairness, and security & safety. Our analysis revealed important differences in how responsible AI principles are implemented. For example, of the 5,733 function calls analyzed, privacy accounted for 89.3% (5,120 calls), while fairness represented only 2.1% (118 calls), highlighting the uneven emphasis on these principles across projects. Furthermore, our manual analysis of 44,877 issue discussions revealed that only 8.1% of the sampled issues addressed responsible AI principles, with transparency & explainability being the most frequently discussed principle (32.2% of all issues related to responsible AI principles). Additionally, a survey of ML practitioners provided direct insights into their perspectives and informed our exploration of ways to enhance logging practices for more effective responsible ML auditing. We discovered that while privacy, model interpretability & explainability, fairness, and security & safety are commonly considered, the metrics associated with these principles are rarely logged. Specifically, crucial fairness metrics such as group and individual fairness, privacy metrics such as epsilon and delta, and explainability metrics such as SHAP values are not captured by current logging practices. These findings highlight the need for ML practitioners and logging tool developers to adopt enhanced logging strategies that incorporate a broader range of responsible AI metrics. Such strategies would facilitate the development of auditable and ethically responsible ML applications that meet emerging regulatory and societal expectations, and they offer actionable guidance for improving the accountability and trustworthiness of ML systems.
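
To illustrate the kind of logging these findings point toward, the minimal Python sketch below records a group-fairness metric, differential-privacy parameters, and per-feature SHAP summaries alongside ordinary application logs. It is not taken from the study: the dataset, the protected attribute, and the hard-coded epsilon/delta values (which a differential-privacy training framework would normally supply) are illustrative placeholders, and only `fairlearn`, `shap`, and scikit-learn are assumed to be installed.

```python
import logging

import numpy as np
import shap
from fairlearn.metrics import demographic_parity_difference
from sklearn.ensemble import RandomForestClassifier

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("responsible_ml")

# Synthetic stand-in data: 500 samples, 4 features, a binary label, and a
# hypothetical binary protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
sensitive = rng.integers(0, 2, size=500)

model = RandomForestClassifier(random_state=0).fit(X, y)
y_pred = model.predict(X)

# Fairness: demographic parity difference, a group-fairness metric (fairlearn).
dpd = demographic_parity_difference(y, y_pred, sensitive_features=sensitive)
log.info("fairness.demographic_parity_difference=%.4f", dpd)

# Privacy: epsilon/delta would come from a DP training framework; the values
# logged here are placeholders only.
log.info("privacy.epsilon=%.2f privacy.delta=%.0e", 1.0, 1e-5)

# Explainability: mean absolute SHAP value per feature, using shap's
# model-agnostic Explainer on the prediction function.
explainer = shap.Explainer(model.predict, X[:100])
sv = explainer(X[:100])
log.info("explainability.mean_abs_shap=%s",
         np.round(np.abs(sv.values).mean(axis=0), 4).tolist())
```

In practice, such metric records would flow through a project's existing logging or experiment-tracking infrastructure rather than a standalone logger, so that auditors can correlate them with model versions and deployment events.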