This paper presents a systematic literature review of studies published between January 2015 and January 2022 that examine user trust in artificial intelligence (AI) from different perspectives. The review and analysis identify the components, influencing factors, and outcomes of users’ trust in AI. Based on these findings, a comprehensive conceptual framework is proposed to support a better understanding of users’ trust in AI; this framework can be further tested and validated in various contexts to extend that understanding. The study also outlines potential avenues for future research. From a practical perspective, it helps providers of AI-supported services comprehend the concept of user trust from multiple perspectives. The findings highlight the importance of building trust along its different facets to facilitate positive cognitive, affective, and behavioral changes among users.