Transparency in AI systems has shown mixed effects on human decision-making, sometimes leading to under-reliance and other times to over-reliance. We investigate this inconsistency through the lens of users’ mental models: the internal representations people form about how AI systems behave. Focusing on AI confidence scores as a form of transparency, we examine how users’ trust and reliance change when they recognize that the AI is aware of its own capabilities. We propose a game-based experimental framework inspired by real-world Command-and-Control scenarios that requires collaborative decision-making between a human and multiple AI agents. This setup allows us to study how confidence scores shape mental model formation and how they influence reliance at each decision step (i.e., how much users depend on AI agents for each decision), users’ overall trust in the AI agents, and human-AI team performance.