This study provides insights into the use of conversational AI, particularly ChatGPT, in household appliance evaluation interviews and into how simulated behaviour differs from that of real users. Three comparison experiments (real researcher with real user, real researcher with simulated user, and simulated researcher with simulated user) reveal differences between the responses of ChatGPT-simulated users and real users in specific evaluation scenarios, especially in the evaluation of product appearance, the GUI, and the PUI. Although simulated users agreed with real users when evaluating the core features of smart appliances, they showed limitations in aspects grounded in practical, hands-on experience, and there were significant differences in SUS, learnability, and usability scores across the experimental settings. The study also examines the advantages and disadvantages of incorporating simulated users into the product evaluation process, concluding that, while challenging, this approach offers an innovative direction for product evaluation and demonstrates the considerable potential of simulated users in future evaluation work.