In machine learning classification problems, it is commonly assumed that training and test data are drawn from the same domain and essentially follow the same distribution for optimal performance. However, acquiring sufficient labeled data from the same domain is often impractical, and labeling is resource-intensive. Domain Adaptation (DA) offers a reliable solution, but manual feature and parameter selection for multi-dimensional image data is also resource-intensive and time-consuming. To address these issues collectively, we propose the "PSO-based Unified Framework for Unsupervised Domain Adaptation in Image Classification" (PSO-UDAIC), an unsupervised domain adaptation framework that minimizes the discrepancy between labeled source and unlabeled target data by learning a subspace through several objective functions, so that the two domains appear to follow the same distribution. Particle Swarm Optimization (PSO) is used to automate the selection of an optimal common feature subset and of the objective-function parameters, reducing manual effort and time. Extensive experiments on benchmark image datasets (Office+Caltech (SURF, DeCAF, VGG6), USPS+MNIST, COIL20, and VLCS), covering varied images and settings across diverse image classification tasks, achieved average accuracies of 58.18%, 93.71%, 91.64%, 100%, 81.76%, and 67.38%, respectively, outperforming the base method LDAPL by 12%, 2.12%, 2.55%, 0.28%, 2.76%, and 14.61%, and demonstrating PSO-UDAIC's superior performance over both primitive and domain adaptation methods. Statistical paired t-tests further confirm that PSO-UDAIC significantly outperforms the compared methods.
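For illustration, the following minimal Python sketch shows how a global-best PSO loop could jointly search a binary feature mask and a single objective-function weight in an unsupervised setting. It is not the paper's implementation: the 1-NN/3-fold source fitness, the mean-difference discrepancy proxy, and the parameter `lam` are illustrative assumptions standing in for the framework's actual subspace-learning objective.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(position, Xs, ys, Xt):
    """Score one particle: binary feature mask plus one trade-off weight.
    Uses only source labels, as required in the unsupervised setting."""
    mask = position[:-1] > 0.5                 # feature-selection bits
    lam = np.clip(position[-1], 0.0, 1.0)      # hypothetical discrepancy weight
    if mask.sum() == 0:
        return -np.inf
    # Source classification quality on the selected features (3-fold 1-NN).
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                          Xs[:, mask], ys, cv=3).mean()
    # Crude domain-discrepancy proxy: distance between the domain means.
    gap = np.linalg.norm(Xs[:, mask].mean(axis=0) - Xt[:, mask].mean(axis=0))
    return acc - lam * gap

def pso_search(Xs, ys, Xt, n_particles=20, n_iters=30, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over [0, 1]^d; the last dimension is the weight."""
    dim = Xs.shape[1] + 1
    pos = rng.random((n_particles, dim))
    vel = rng.normal(scale=0.1, size=(n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, Xs, ys, Xt) for p in pos])
    gbest, gbest_val = pbest[pbest_val.argmax()].copy(), pbest_val.max()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([fitness(p, Xs, ys, Xt) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if vals.max() > gbest_val:
            gbest, gbest_val = pos[vals.argmax()].copy(), vals.max()
    return gbest[:-1] > 0.5, float(np.clip(gbest[-1], 0.0, 1.0)), gbest_val
```

In practice, the fitness function would wrap the framework's actual subspace-learning objective, and the feature mask and weight decoded from the best particle would then be used to train the final classifier.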