The increasing adoption of collaborative human–artificial intelligence decision-making tools has created a need to explain recommendations for safe and effective collaboration. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.