Deep learning has revolutionized medical image processing, offering powerful tools for diagnosing critical conditions such as brain tumors, skin cancer, and pneumonia. However, the opaque nature of these models necessitates interpretability, especially in healthcare applications where transparency and trust are vital. This study addresses the challenge by integrating explainable artificial intelligence (XAI) techniques, specifically local interpretable model-agnostic explanations (LIME) and gradient-weighted class activation mapping (Grad-CAM), to enhance model transparency. A comparative analysis of five deep learning models (CNN, XceptionNet, EfficientNet, VGG, and ResNet) is conducted across three distinct medical image classification tasks. Performance is evaluated using accuracy, precision, recall, F1 score, specificity, and AUC, while LIME and Grad-CAM provide visual insight into the image regions that influence each model's predictions, supporting clinical validation. The findings underscore the importance of XAI in improving model transparency and offer valuable insight into the reliability and applicability of deep learning models in medical image analysis, ultimately aiding the development of more explainable and trustworthy AI systems for healthcare.
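
To make the Grad-CAM component referenced above concrete, the sketch below shows one common way to compute a class-activation heatmap for a convolutional classifier. It is a minimal illustration, not the study's actual pipeline: it assumes a TensorFlow/Keras model, and the choice of ResNet50 with the layer name `conv5_block3_out` in the usage comment is an illustrative assumption rather than a detail taken from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (H, W, C)."""
    # Model mapping the input to the last conv feature map and the predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the top predicted class
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature map,
    # averaged spatially to obtain one importance weight per channel.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum over channels, clipped at zero (ReLU) and normalized to [0, 1].
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)
    cam = cam / (tf.reduce_max(cam) + tf.keras.backend.epsilon())
    return cam.numpy()

# Hypothetical usage with a pretrained ResNet50 and a 224x224 RGB array `img`:
# model = tf.keras.applications.ResNet50(weights="imagenet")
# x = tf.keras.applications.resnet50.preprocess_input(img.astype("float32"))
# heatmap = grad_cam(model, x, last_conv_layer_name="conv5_block3_out")
```

The resulting heatmap can be upsampled to the input resolution and overlaid on the original image, which is how the visual explanations described in the study are typically presented for clinical review.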