In the evolving landscape of autonomous driving, the ability to accurately detect and localize objects in complex environments is paramount. This paper introduces an object detection algorithm designed to enhance the perception capabilities of autonomous vehicles. We propose a novel fusion framework that integrates LiDAR point clouds with monocular depth estimates through a Depth-Aware Transformer (DAT) architecture. The DAT is designed to capture spatial hierarchies and depth cues, making it well suited to interpreting three-dimensional scenes from two-dimensional images. Our approach exploits the complementary strengths of the two sensors: LiDAR provides precise depth measurements, while the monocular camera contributes rich texture and color information. An adaptive fusion strategy dynamically adjusts the weight assigned to each modality according to the real-time reliability and quality of its data, maintaining performance under varying environmental conditions. We validate our method on the KITTI dataset, a standard benchmark in autonomous driving research. Comparative experiments show that our algorithm outperforms state-of-the-art object detectors in both localization and classification accuracy, and ablation studies confirm the individual contributions of the DAT and adaptive fusion components. The enhanced depth perception provided by the DAT also yields improved robustness and generalization across diverse driving environments. The proposed fusion of LiDAR and monocular depth estimation using Depth-Aware Transformers thus represents a significant step forward for autonomous driving perception systems.
It not only advances the field of object detection but also paves the way for more sophisticated applications in autonomous navigation, where a deep understanding of the environment is crucial for safe and efficient operation.
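To make the adaptive fusion idea concrete, the following is a minimal sketch of reliability-weighted feature blending. The softmax weighting scheme, function name, and tensor shapes are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def adaptive_fusion(lidar_feat, cam_feat, lidar_conf, cam_conf):
    """Blend per-location features from two sensor modalities.

    Hypothetical scheme: a per-pixel softmax over reliability scores
    produces convex weights, so the fused feature leans toward whichever
    modality reports higher confidence at that location.

    lidar_feat, cam_feat: (H, W, C) feature maps
    lidar_conf, cam_conf: (H, W) reliability scores
    """
    scores = np.stack([lidar_conf, cam_conf], axis=0)          # (2, H, W)
    scores -= scores.max(axis=0, keepdims=True)                # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    # Broadcast the (H, W) weights over the channel dimension.
    return weights[0][..., None] * lidar_feat + weights[1][..., None] * cam_feat
```

When the two confidence maps are equal, the sketch reduces to a plain average; as one modality's confidence grows (e.g. LiDAR in low-light scenes where the camera degrades), its features dominate the fused output smoothly rather than via a hard switch.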