How good design turns GenAI into a clear and reliable partner.
When I look back on my design journey, one thing always stands out: trust. A product can be fast, powerful, and even beautiful, but if people don’t trust it, they won’t use it. Working with AI made this even clearer. My goal has always been simple — design tools that feel open, understandable, and dependable. Because at the end of the day, design is not just about how things look, but about how confident people feel when they use them.

AI often fails not because it lacks capability, but because it lacks transparency. If users don’t understand how it arrived at a decision, they’re left with two choices: accept results blindly or go back to manual methods. Both outcomes defeat the purpose.
The challenge, then, isn’t just creating a powerful AI — it’s designing an experience that makes its logic visible and its decisions verifiable.
Before we introduced AI-driven solutions, compliance auditors worked across multiple screens, copying and pasting, scrolling through long documents, and manually checking every requirement. It was exhausting, repetitive, and prone to errors.
But simply telling them “The AI has found a match” wasn’t enough. Professionals in this field rely on accuracy, and they need to see proof before they can trust results. Without it, they’ll double-check everything themselves, making the AI redundant.
We set out to design a system where the AI doesn’t just give answers — it shows its work.
1. Instant context
Hovering over an AI result reveals the exact page, the requirement, and what the system found. Green for a match, red for a gap, and highlighted differences for partial matches. The auditor sees the “why” instantly, without leaving their workflow.
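The hover card described above boils down to a simple mapping from the AI’s comparison result to a display state. The sketch below is hypothetical TypeScript; the type names, fields, and color values are assumptions for illustration, not the actual product code.

```typescript
// Hypothetical verdict for an AI requirement check (assumed names).
type Verdict = "match" | "gap" | "partial";

interface CheckResult {
  requirement: string; // the compliance requirement being checked
  page: number;        // the exact page where the evidence was found
  excerpt: string;     // what the system found in the document
  verdict: Verdict;
}

// Map a verdict to the highlight shown on hover: green for a match,
// red for a gap, and amber for a partial match whose differences
// are highlighted separately in the excerpt.
function highlightColor(result: CheckResult): string {
  switch (result.verdict) {
    case "match":
      return "green";
    case "gap":
      return "red";
    case "partial":
      return "amber";
  }
}
```

Because the verdict is a closed union, the switch is exhaustive: adding a new verdict later forces the display logic to handle it, which keeps the “why” visible for every result.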
2. Connecting problem to solution
Any gap flagged by AI links directly to the comment that will appear in client-facing reports. No more scrolling, guessing, or switching screens — the context is always there.
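One way to think about this linkage is a direct reference from each flagged gap to its report comment, resolved in one lookup instead of a manual search. This is a minimal sketch under assumed field names, not the real data model.

```typescript
// Hypothetical data model: each flagged gap carries a direct link
// to the comment that will appear in the client-facing report.
interface GapFinding {
  id: string;
  requirement: string;
  reportCommentId: string; // the link that replaces scrolling and guessing
}

interface ReportComment {
  id: string;
  text: string;
}

// Resolve the report comment for a gap so it can be shown in context,
// right next to the finding, without switching screens.
function commentForGap(
  gap: GapFinding,
  comments: Map<string, ReportComment>
): ReportComment | undefined {
  return comments.get(gap.reportCommentId);
}
```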
3. Keeping humans in control
While AI suggests the wording of gap comments, auditors can edit, refine, or rewrite them entirely. Features like “revert” and “compare with AI suggestion” ensure experts remain the final decision-makers.
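A simple way to support both “revert” and “compare with AI suggestion” is to keep the original AI wording immutable alongside the auditor’s working text. The sketch below is an assumption about how such state could be modeled, not the product’s implementation.

```typescript
// Hypothetical editable comment that preserves the AI suggestion,
// so the auditor can always revert to it or compare against it.
interface EditableComment {
  aiSuggestion: string; // never mutated: the basis for revert and compare
  current: string;      // the auditor's working text
}

// The auditor edits, refines, or rewrites the wording entirely.
function edit(c: EditableComment, text: string): EditableComment {
  return { ...c, current: text };
}

// "Revert" restores the original AI wording.
function revert(c: EditableComment): EditableComment {
  return { ...c, current: c.aiSuggestion };
}

// "Compare with AI suggestion": has the auditor changed the wording?
function differsFromAI(c: EditableComment): boolean {
  return c.current !== c.aiSuggestion;
}
```

Keeping the AI suggestion as untouched state is the design choice that makes the expert the final decision-maker: nothing they do can destroy the machine’s version, and nothing the machine produced is ever forced on them.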
By designing these layers of transparency and control, we turned what could feel like a faceless machine into a partner auditors can rely on.
This approach is not unique to insurance auditing. Think about Grammarly: it built trust by showing writers exactly why it suggested a change, leaving the choice in their hands. The same principle applies here — AI earns trust when it explains itself.
Trust isn’t built overnight, but thoughtful UX design accelerates it.
In the end, the role of design is to make AI less mysterious and more dependable. By doing so, we allow experts to spend less time on tedious checks and more time on the meaningful work that requires their judgment.