Small unmanned aerial vehicles (UAVs) are developing rapidly and are being deployed across various industries to make people's lives easier. However, their use carries potential risks, such as unauthorized surveillance of critical infrastructure and the delivery of explosive devices, which pose a significant threat to public and national security. The acoustic method is a promising direction for addressing this problem: it analyzes the sound characteristics and Doppler shift signatures of UAVs using microphone arrays and machine learning techniques. The aim of this article is to develop an algorithm for effective detection and classification of drone audio signals using a deep convolutional neural network (CNN), to construct its architecture, and to evaluate its performance. Before the drone audio dataset is fed into the neural network, the quality of the recordings is improved through normalization, Wiener filtering, and segmentation. The audio is segmented into frames of 25 ms duration with 50% overlap, applying a Hamming window for better accuracy in the time domain, as temporal precision is crucial in audio signal processing. The resulting data is divided into training, validation, and test sets in a 60/20/20 ratio. Next, the data is represented by a compact set of features: mel-spectrograms are extracted from each frame of the processed audio signals to capture their temporal and spectral characteristics. The frequency range of the analysis matches the working limits of the microphone model (20 Hz to 20 kHz), with a frequency resolution of 50 Hz and 30 working mel frequency bands. Using the training data and the extracted audio features, a neural network architecture is developed to investigate the performance of the drone detection and classification algorithm. It consists of 10 repeated blocks of convolutional, ReLU activation, batch normalization, and max-pooling layers.
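The preprocessing and feature-extraction pipeline described above can be sketched in numpy/scipy. This is a minimal illustration, not the authors' implementation: the sampling rate (44.1 kHz, chosen so the 20 Hz to 20 kHz analysis range is representable) and the FFT size are assumptions not stated in the abstract; the frame length, overlap, window, and 30 mel bands follow the text.

```python
import numpy as np
from scipy.signal import wiener

FS = 44100        # assumed sampling rate (not given in the abstract)
FRAME_MS = 25     # 25 ms frames, from the text
OVERLAP = 0.5     # 50% overlap, from the text
N_MELS = 30       # 30 working mel bands, from the text

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def frame_signal(x, fs=FS, frame_ms=FRAME_MS, overlap=OVERLAP):
    """Split a 1-D signal into Hamming-windowed frames with 50% overlap."""
    frame_len = int(fs * frame_ms / 1000)    # 25 ms -> 1102 samples at 44.1 kHz
    hop = int(frame_len * (1.0 - overlap))   # 50% overlap -> half-frame hop
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    win = np.hamming(frame_len)
    return np.stack([x[i * hop:i * hop + frame_len] * win
                     for i in range(n_frames)])

def mel_filterbank(n_mels=N_MELS, n_fft=2048, fs=FS, fmin=20.0, fmax=20000.0):
    """Triangular mel filterbank over the 20 Hz - 20 kHz analysis range."""
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mel_spectrogram(x, n_fft=2048):
    """Normalize, Wiener-filter, frame, and project power spectra onto mel bands."""
    x = x / (np.max(np.abs(x)) + 1e-12)   # peak normalization
    x = wiener(x)                          # scipy's adaptive Wiener filter
    frames = frame_signal(x)
    spec = np.abs(np.fft.rfft(frames, n=n_fft, axis=1)) ** 2
    return spec @ mel_filterbank(n_fft=n_fft).T   # (n_frames, 30) mel energies
```

For a one-second recording at 44.1 kHz this yields a (79, 30) mel energy matrix, which would serve as one input image for the CNN described below.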
The number of these blocks is determined by the size of the pooling window along the time dimension. They are followed by flattening, dropout, fully connected, and Softmax layers; a classification layer normalizes the output and yields the final probabilities. The Adam optimizer is chosen for model training. Based on the dataset size, the initial learning rate is set to 0.001 and is reduced by a factor of 10 after 75% of the epochs to improve convergence. The recognition accuracy on the input data reaches 99%, and the F1 score of the trained model is 0.93, indicating a high level of overall architecture performance. The maximum distance at which the algorithm effectively detects drones is 200 m.
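Two of the design rules above can be made concrete with a short sketch. The first function shows how the network depth follows from the time-axis pooling: assuming a hypothetical pooling window of 2 (the abstract does not state the window size), roughly 1000 time frames collapse to a single step after exactly 10 pooling stages, consistent with the 10 blocks described. The second function encodes the stated learning-rate schedule for Adam.

```python
import math

def depth_for_time_dim(n_frames, pool=2):
    """Number of conv/pool blocks needed to collapse the time axis to 1.
    The pooling window `pool` is a hypothetical value; the abstract only
    says the depth is determined by the time-axis pooling window."""
    depth, t = 0, n_frames
    while t > 1:
        t = math.ceil(t / pool)   # each max-pooling stage halves the time axis
        depth += 1
    return depth

def lr_schedule(epoch, total_epochs, base_lr=1e-3, factor=10.0, milestone=0.75):
    """Piecewise-constant schedule from the text: start at 0.001 and
    drop by a factor of 10 after 75% of the training epochs."""
    if epoch >= int(milestone * total_epochs):
        return base_lr / factor
    return base_lr
```

With ~1000 frames per clip (on the order of what 25 ms frames at 50% overlap produce for a clip of several seconds), `depth_for_time_dim(1000)` returns 10, matching the reported architecture depth.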