In autonomous driving systems, monocular 3D object detection is a crucial component, and the safety of autonomous vehicles depends heavily on a well-designed detection system. Developing a robust and efficient 3D object detection algorithm is therefore a major goal for research institutes and researchers. A sense of the 3D environment is essential for autonomous vehicles and robots, as it allows the system to understand its surroundings and react accordingly. Compared with stereo-based and LiDAR-based methods, monocular 3D object detection is challenging because it must recover complex 3D features from 2D information alone; in return, it is low-cost, less computationally intensive, and holds great potential. However, the performance of monocular methods suffers from the lack of depth information. In this paper, we propose a simple, effective, end-to-end network for monocular 3D object detection that requires no external training data. Our work is inspired by auxiliary learning: we use a robust feature extractor as the backbone and attach multiple regression heads that learn auxiliary knowledge. These auxiliary regression heads are discarded after training for improved inference efficiency, allowing us to take advantage of auxiliary learning while the model learns the critical information more effectively. The proposed method achieves 17.28% and 20.10% at the moderate level of the Car category on the KITTI benchmark test set and validation set, respectively, outperforming previous monocular 3D object detection approaches.
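
The auxiliary-head mechanism summarized above can be illustrated with a minimal, hypothetical PyTorch-style sketch (not the paper's actual architecture): a shared backbone feeds both the main 3D regression heads and a set of auxiliary heads, and the auxiliary heads are only evaluated during training, so they add no cost at inference time. All module names, head names, and output dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Mono3DDetector(nn.Module):
    """Sketch of a detector with training-only auxiliary regression heads."""

    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        # Stand-in feature extractor; the paper's backbone differs.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Main heads, kept at inference (illustrative output dimensions).
        self.main_heads = nn.ModuleDict({
            "heatmap": nn.Conv2d(feat_channels, 3, 1),  # per-class centers
            "depth":   nn.Conv2d(feat_channels, 1, 1),
            "dim":     nn.Conv2d(feat_channels, 3, 1),  # h, w, l
            "rot":     nn.Conv2d(feat_channels, 2, 1),  # sin/cos of yaw
        })
        # Auxiliary heads, supervised only during training and then discarded.
        self.aux_heads = nn.ModuleDict({
            "keypoints": nn.Conv2d(feat_channels, 16, 1),
            "offset2d":  nn.Conv2d(feat_channels, 2, 1),
        })

    def forward(self, x):
        feat = self.backbone(x)
        out = {name: head(feat) for name, head in self.main_heads.items()}
        if self.training:
            # Extra supervision signals; skipped at inference.
            out.update({name: head(feat) for name, head in self.aux_heads.items()})
        return out

# Usage: at inference, only the main-head outputs are produced.
model = Mono3DDetector()
model.eval()
with torch.no_grad():
    preds = model(torch.randn(1, 3, 128, 416))
print(sorted(preds.keys()))  # auxiliary outputs absent in eval mode
```

In this sketch the auxiliary branches participate only in the training losses; since they never contribute to the inference-time outputs, they can simply be dropped from the deployed model, which is the efficiency benefit the abstract refers to.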