Leveraging Artificial Intelligence to support human decision-makers requires harnessing the unique strengths of both parties, with human expertise often complementing AI capabilities. However, human decision-makers must accurately discern when to trust the AI. When Human-AI expertise is complementary, identifying AI inaccuracies becomes challenging for humans, hindering their ability to rely on the AI only when warranted. Even when AI performance improves after errors, this inability to assess accuracy can hinder trust recovery. Through two experimental tasks, we investigate trust development, erosion, and recovery during AI-assisted decision-making, examining explicit Trust Repair Strategies (TRSs): Apology, Denial, Promise, and Model Update. Participants classified familiar and unfamiliar stimuli alongside an AI whose accuracy varied. We find that participants used the AI's accuracy on familiar tasks as a heuristic to dynamically calibrate their trust during unfamiliar tasks. Further, once trust in the AI was eroded, trust restored through Model Update surpassed initial trust levels, followed by Apology, Promise, and the baseline (no repair), with Denial being least effective. We empirically demonstrate how trust calibration occurs under complementary expertise, highlight factors that influence the differing effectiveness of TRSs despite identical AI accuracy, and offer implications for effectively restoring trust in Human-AI collaborations.