In today’s digital age, consumers increasingly rely on online shopping for its convenience and accessibility. A significant drawback, however, is the inability to physically try on clothing before purchasing. This limitation creates uncertainty about fit and style, leading to post-purchase dissatisfaction and higher return rates: research indicates that items bought online are roughly three times more likely to be returned than those bought in-store, a trend that intensified during the pandemic. To address this challenge, we propose FITMI, a virtual try-on method based on an enhanced latent diffusion model with textual inversion. The proposed architecture aims to bridge the gap between traditional in-store try-ons and online shopping by offering users a realistic and interactive virtual try-on experience. Virtual try-on solutions already exist, but recent advances in artificial intelligence have made far more sophisticated and realistic experiences possible. Building on these advances, FITMI moves beyond conventional virtual try-on systems based on generative adversarial networks, whose outputs are often unrealistic; instead, it uses latent diffusion models to generate high-quality images with detailed textures. Delivered as a web application, FITMI seamlessly combines a user’s photo with garments from a catalog, providing a true-to-life preview of how each item would look, which sets it apart from competing solutions. FITMI is validated on two widely recognized benchmarks, the Dress Code and VITON-HD datasets. Additionally, FITMI acts as a trusted style advisor, enhancing the shopping experience by recommending complementary items that elevate the chosen garment and suggesting similar options based on user preferences.
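
To make the general idea concrete, the sketch below shows one plausible way a latent-diffusion try-on step of this kind could be wired together with the Hugging Face diffusers library: a textual-inversion embedding stands in for the catalog garment, and an inpainting pipeline redraws only the clothing region of the user's photo. This is a minimal illustration under stated assumptions, not FITMI's actual implementation; the model ID, embedding file, pseudo-token, input images, and prompt are all placeholders.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained latent-diffusion inpainting pipeline (placeholder model ID).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Load a textual-inversion embedding that encodes the catalog garment as a
# pseudo-token. Both the file path and the token are hypothetical.
pipe.load_textual_inversion("garment_embedding.bin", token="<garment>")

# person.png: the shopper's photo; mask.png: white over the clothing region
# to be replaced, black elsewhere. Both inputs are assumed to exist.
person = Image.open("person.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

# Denoise only the masked region, conditioned on the learned garment token,
# so identity and pose outside the mask are preserved.
result = pipe(
    prompt="a person wearing <garment>, photorealistic",
    image=person,
    mask_image=mask,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
result.save("tryon.png")

In a setup like this, the garment token is learned once per catalog item, so try-on at serving time reduces to a single conditioned inpainting pass over the user's photo.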