Advancements in artificial intelligence (AI) have enabled increasingly natural, human-like interactions with conversational agents (chatbots). However, the processes and outcomes of trust in AI chatbots remain underexplored. This study provides a systematic review of how trust in AI chatbots is defined, operationalised, and studied, synthesising the factors that influence trust development and its outcomes. An analysis of 40 articles revealed notable variations and inconsistencies in how trust is conceptualised and operationalised. Predictors of trust fall into five categories: user-, machine-, interaction-, social-, and context-related factors. Trust in AI chatbots leads to diverse outcomes spanning affective, relational, behavioural, cognitive, and psychological domains. The review underscores the need for longitudinal studies to better understand the dynamics and boundary conditions of trust development. These findings offer valuable insights for advancing human-machine communication (HMC) research and informing the design of trustworthy AI systems.