Current methods for salient object detection in optical remote sensing images (RSI-SOD) adhere strictly to the conventional supervised train-test paradigm, where models remain fixed after training and are applied directly to test samples. However, this paradigm struggles to adapt to test-time images because of the inherent variability of remote sensing scenes. Salient objects differ considerably in size, type, and topology across RSIs, complicating accurate localization in unseen test images. Moreover, RSI acquisition is highly susceptible to atmospheric conditions, often degrading image quality and inducing a notable domain shift between the training and testing phases. In this work, we explore test-time model adaptation for RSI-SOD and introduce a novel multitask collaboration approach to tackle these challenges. Our approach integrates a self-supervised auxiliary task, image reconstruction, with the primary supervised task of saliency prediction to achieve collaborative learning. This is accomplished through an architecture that comprises a shared feature encoder and two task-specific decoders. Most importantly, the self-supervised image reconstruction task optimizes model parameters on unlabeled test-time images, allowing adaptation to the test distribution and enabling flexible, scene-dependent representation learning. In addition, we design a cross-task modulation module (CMM), positioned between the task-specific decoders, which fully exploits intertask correlations to enhance the adjustment of saliency representations. Extensive experiments confirm the superiority of our method on three widely used RSI-SOD benchmarks and validate the robustness of our proposed test-time adaptation strategy against diverse types of RSI corruption.
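The shared-encoder / dual-decoder adaptation scheme described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: linear maps stand in for the convolutional encoder and decoders, the dimensions and names are hypothetical, and the cross-task modulation module is omitted. It only shows the core mechanism, namely that gradient descent on a self-supervised reconstruction loss over an unlabeled test image updates the shared encoder, while the saliency decoder is reused unchanged on the adapted features.

```python
import numpy as np

# Toy stand-ins for the architecture: a flattened "image" of D pixels,
# a latent feature of size H. All names/shapes here are illustrative.
rng = np.random.default_rng(0)
D, H = 64, 16

W_enc = rng.normal(0.0, 0.1, (H, D))   # shared feature encoder
W_sal = rng.normal(0.0, 0.1, (1, H))   # saliency decoder (supervised primary task)
W_rec = rng.normal(0.0, 0.1, (D, H))   # reconstruction decoder (self-supervised auxiliary task)

def recon_loss(x, W_enc, W_rec):
    """MSE between the input and its encode-decode reconstruction."""
    x_hat = W_rec @ (W_enc @ x)
    return np.mean((x_hat - x) ** 2)

def tta_step(x, W_enc, W_rec, lr=0.1):
    """One test-time adaptation step on an unlabeled test image:
    gradient descent on the reconstruction loss, updating the shared
    encoder and the reconstruction decoder only."""
    z = W_enc @ x                                  # shared features
    err = (2.0 / D) * (W_rec @ z - x)              # d(loss)/d(x_hat)
    W_rec_new = W_rec - lr * np.outer(err, z)              # d(loss)/d(W_rec)
    W_enc_new = W_enc - lr * np.outer(W_rec.T @ err, x)    # d(loss)/d(W_enc)
    return W_enc_new, W_rec_new

x_test = rng.normal(size=D)                        # an unlabeled "test image"
loss_before = recon_loss(x_test, W_enc, W_rec)
for _ in range(10):
    W_enc, W_rec = tta_step(x_test, W_enc, W_rec)
loss_after = recon_loss(x_test, W_enc, W_rec)

# Saliency prediction reuses the adapted shared features; the saliency
# decoder itself required no test-time labels.
saliency = W_sal @ (W_enc @ x_test)
print(loss_before, loss_after)
```

The key design point mirrored here is that only the self-supervised branch drives parameter updates at test time, so no saliency annotations are needed for the adaptation itself.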