With the development of computer science, automatic control, aircraft design, and other disciplines, artificial-intelligence-driven Unmanned Combat Aerial Vehicle (UCAV) air combat decision-making technology has brought revolutionary changes to air combat theory and modes of operation. To address the autonomous decision-making problem of six-degree-of-freedom (6-DOF) UCAV close-range air combat, this paper proposes a UCAV air combat decision-making method based on deep reinforcement learning. First, a close-range air combat environment model based on the 6-DOF UCAV model is developed. Second, an autonomous decision-making model for UCAV close-range air combat with multi-dimensional continuous state input and multi-dimensional continuous action output is established using a deep neural network; the model receives the combat situation information and outputs the UCAV's joystick displacement commands. Then, a reward function that accounts for the missile attack zone and air combat orientation is designed, comprising an angle reward, a distance reward, and a height reward. On this basis, the twin delayed deep deterministic policy gradient (TD3) algorithm is employed to train the autonomous air combat decision-making model. Finally, simulation experiments of the UCAV close-range air combat scenario are carried out, and the results show that the proposed intelligent air combat decision-making agent achieves a win rate 3.57 times higher than that of an expert system and an average situational reward 1.19 times higher than that of the enemy aircraft.
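To make the reward design concrete, the sketch below shows one plausible way to combine angle, distance, and height terms into a single shaping reward. The term shapes, range band, and weights here are illustrative assumptions; the paper's exact formulas are not reproduced in this abstract.

```python
import math

# Illustrative sketch of a composite air-combat shaping reward
# (angle + distance + height), as described qualitatively in the abstract.
# All functional forms, the range band, and the weights are assumptions.

def angle_reward(aspect_angle_rad: float) -> float:
    """Higher reward as the enemy moves toward the UCAV's nose (angle -> 0)."""
    return 1.0 - aspect_angle_rad / math.pi

def distance_reward(distance_m: float,
                    d_min: float = 1000.0, d_max: float = 3000.0) -> float:
    """Peak reward inside an assumed missile-attack-zone band [d_min, d_max]."""
    if d_min <= distance_m <= d_max:
        return 1.0
    if distance_m < d_min:
        return distance_m / d_min          # penalize overshooting the target
    return math.exp(-(distance_m - d_max) / d_max)  # decay beyond max range

def height_reward(own_alt_m: float, enemy_alt_m: float,
                  scale: float = 1000.0) -> float:
    """Mild bonus for a height advantage, squashed into (-1, 1)."""
    return math.tanh((own_alt_m - enemy_alt_m) / scale)

def situation_reward(aspect_angle_rad: float, distance_m: float,
                     own_alt_m: float, enemy_alt_m: float,
                     weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of the three shaping terms (weights are assumptions)."""
    wa, wd, wh = weights
    return (wa * angle_reward(aspect_angle_rad)
            + wd * distance_reward(distance_m)
            + wh * height_reward(own_alt_m, enemy_alt_m))
```

With the assumed weights, a head-on geometry inside the attack-zone band at equal altitude (`situation_reward(0.0, 2000.0, 5000.0, 5000.0)`) scores 0.8, the maximum short of a height advantage.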