Adversarial attacks aim to add small perturbations, imperceptible to humans, to a model's input so that the model produces incorrect outputs with high confidence. Current adversarial attacks mainly target image classification and object detection, and remain underexplored for visual tracking. Moreover, existing attack methods for object tracking are largely limited to Siamese networks, and other types of trackers are rarely targeted. To broaden the use of adversarial attacks in object tracking, we propose a model-free black-box framework that learns to generate universal and sparse adversarial examples (USAE) for the tracking task. Specifically, we first add a random noisy patch to an arbitrary interference image, and then apply standard projected gradient descent to optimize the generation of adversarial examples subject to a similarity constraint with the original images, driving their embedding features closer to that of the patched interference image in the ℓ2-norm. As a result, the adversarial images are visually indistinguishable from the original images, yet they cause tracking failure. Furthermore, our method attacks only 50 original images in each sequence rather than the entire dataset. Extensive experiments on the VOT2018, OTB2013, OTB2015, and GOT-10k datasets verify the effectiveness of the USAE attack: the number of tracking losses reaches 1180 on VOT2018, the precision on OTB2015 drops by 42.1%, and the success rate on GOT-10k is reduced to 1.8%, demonstrating a remarkable attack effect. Moreover, USAE transfers well across various trackers such as SiamRPN++, ATOM, DiMP, KYS, and ToMP. Note that the proposed method is black-box and thus applicable to most realistic scenarios.
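To make the generation procedure concrete, the following is a minimal sketch of a PGD-style feature-matching attack of the kind the abstract describes. It assumes access to a surrogate embedding network `feat_extractor` (the abstract does not specify how gradients are obtained in the black-box setting), and the names `usae_pgd`, `eps`, `alpha`, and `steps` with their values are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def usae_pgd(feat_extractor, x_orig, x_interf_patched,
             eps=8/255, alpha=1/255, steps=40):
    """Sketch: pull the embedding of the adversarial image toward the
    embedding of the patched interference image (L2 distance), while an
    L_inf ball of radius `eps` around the original image keeps the
    perturbation imperceptible. Values are illustrative only."""
    with torch.no_grad():
        target_feat = feat_extractor(x_interf_patched)  # embedding to move toward

    x_adv = x_orig.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # L2 distance between current embedding and the target embedding
        loss = torch.norm(feat_extractor(x_adv) - target_feat, p=2)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Gradient descent on the feature distance, then project back into
        # the eps-ball around the original image (similarity constraint).
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x_orig + torch.clamp(x_adv - x_orig, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv
```

In this sketch, the projection step enforces that the adversarial image stays visually close to the original, while the loss drives its feature representation toward the patched interference image, which is the mechanism the abstract attributes to tracking failure.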