The Psychological Safety of AI Agents
Designing AI interfaces that augment human decision-making without inducing automation complacency or mistrust.
When we deploy AI in high-stakes environments such as air traffic control or emergency response, the user interface is as critical as the model's accuracy. If an operator trusts the AI too much, they disengage (automation complacency); if they trust it too little, they dismiss valid warnings (disuse).
We advocate for 'Explainable UX'. Instead of presenting a bare recommendation, our interfaces show the confidence interval and the key variables that influenced the AI's decision. This keeps the human in the loop, transforming the AI from a black box into a transparent colleague.
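Concretely, the payload behind such an interface might bundle the recommendation with its confidence bounds and the weighted factors that drove it. Below is a minimal sketch in Python; every name in it (Recommendation, FeatureContribution, render_for_operator) is hypothetical, invented for illustration rather than drawn from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class FeatureContribution:
    name: str      # human-readable variable name
    weight: float  # signed influence on the recommendation

@dataclass
class Recommendation:
    action: str                          # what the AI suggests
    confidence_low: float                # lower bound of the confidence interval
    confidence_high: float               # upper bound of the confidence interval
    drivers: list[FeatureContribution]   # key variables behind the decision

def render_for_operator(rec: Recommendation) -> str:
    """Format a recommendation so the operator sees the bounds and the
    reasoning, not just the conclusion."""
    lines = [
        f"Suggested action: {rec.action}",
        f"Confidence: {rec.confidence_low:.0%}-{rec.confidence_high:.0%}",
        "Key factors:",
    ]
    # Strongest evidence first, so the operator can cross-check quickly.
    for d in sorted(rec.drivers, key=lambda d: abs(d.weight), reverse=True):
        sign = "+" if d.weight >= 0 else "-"
        lines.append(f"  {sign} {d.name} ({abs(d.weight):.2f})")
    return "\n".join(lines)

# Illustrative example: an emergency-response suggestion with its drivers exposed.
rec = Recommendation(
    action="Dispatch ladder unit to sector 4",
    confidence_low=0.72,
    confidence_high=0.88,
    drivers=[
        FeatureContribution("Reported smoke density", 0.41),
        FeatureContribution("Wind speed", 0.27),
        FeatureContribution("Nearest unit availability", -0.12),
    ],
)
print(render_for_operator(rec))
```

Ranking the drivers by absolute weight puts the strongest evidence first, so an operator can weigh the AI's reasoning against their own expertise before acting, rather than accepting or rejecting the suggestion wholesale.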
Psychological safety in AI isn't about making the AI 'friendly'. It's about making its boundaries and limitations visible, allowing human experts to wield it with confidence.