Aram Andreasyan
August 18, 2025

Designing Trust Between People and AI

How good design turns GenAI into a clear and reliable partner.

When I look back on my design journey, one thing always stands out: trust. A product can be fast, powerful, and even beautiful, but if people don’t trust it, they won’t use it. Working with AI made this even clearer. My goal has always been simple — design tools that feel open, understandable, and dependable. Because at the end of the day, design is not just about how things look, but about how confident people feel when they use them.


Why trust is the real design problem

AI often fails not because it lacks capability, but because it lacks transparency. If users don’t understand how it arrived at a decision, they’re left with two choices: accept results blindly or go back to manual methods. Both outcomes defeat the purpose.

The challenge, then, isn’t just creating a powerful AI — it’s designing an experience that makes its logic visible and its decisions verifiable.

The auditor’s daily struggle

Before we introduced AI-driven solutions, compliance auditors worked across multiple screens, copying and pasting, scrolling through long documents, and manually checking every requirement. It was exhausting, repetitive, and prone to errors.

But simply telling them, “The AI has found a match” wasn’t enough. Professionals in this field rely on accuracy, and they need to see proof before they can trust results. Without it, they’ll double-check everything, making the AI redundant.

Our design approach: from black box to clear glass

We set out to design a system where the AI doesn’t just give answers — it shows its work.

1. Instant context
Hovering over an AI result reveals the exact page, the requirement, and what the system found. Green for a match, red for a gap, and highlighted differences for partial matches. The auditor sees the “why” instantly, without leaving their workflow.

2. Connecting problem to solution
Any gap flagged by AI links directly to the comment that will appear in client-facing reports. No more scrolling, guessing, or switching screens — the context is always there.

3. Keeping humans in control
While AI suggests the wording of gap comments, auditors can edit, refine, or rewrite them entirely. Features like “revert” and “compare with AI suggestion” ensure experts remain the final decision-makers.
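The three layers above imply a simple underlying data shape: every AI finding carries its evidence (so hover can show the "why"), its suggested report comment (so problem links to solution), and any human edit alongside the AI draft (so revert and compare remain possible). Here is a minimal sketch of that shape; all names (`AiFinding`, `Evidence`, `reportComment`, and so on) are hypothetical illustrations, not the actual system's API.

```typescript
// Possible outcomes an auditor sees at a glance:
// green for a match, red for a gap, highlighted diffs for a partial match.
type MatchStatus = "match" | "gap" | "partial";

// The proof shown on hover: exact page, requirement, and what was found.
interface Evidence {
  page: number;
  requirement: string;
  foundText: string;
}

interface AiFinding {
  status: MatchStatus;
  evidence: Evidence;        // surfaced instantly, without leaving the workflow
  suggestedComment: string;  // AI-drafted wording for the client-facing report
  humanComment?: string;     // auditor's edit, if any; the AI draft is retained
}

// The report always prefers the auditor's wording. Because the AI draft is
// never discarded, "revert" and "compare with AI suggestion" stay available.
function reportComment(f: AiFinding): string {
  return f.humanComment ?? f.suggestedComment;
}

function revertToAi(f: AiFinding): AiFinding {
  return { ...f, humanComment: undefined };
}
```

The design choice worth noting is that the human edit never overwrites the AI suggestion; keeping both is what makes the comparison and revert features cheap to build and, more importantly, what keeps the expert visibly in charge.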

By designing these layers of transparency and control, we turned what could feel like a faceless machine into a partner auditors can rely on.

The bigger picture

This approach is not unique to insurance auditing. Think about Grammarly: it built trust by showing writers exactly why it suggested a change, leaving the choice in their hands. The same principle applies here — AI earns trust when it explains itself.

What changes with trust-driven design?

  • Faster verification without sacrificing accuracy
  • Less context-switching and mental fatigue
  • Confidence in both AI results and human oversight

Trust isn’t built overnight, but thoughtful UX design accelerates it.

Takeaways for designing human–AI collaboration

  • Make reasoning visible: Don’t just provide answers — show evidence.
  • Keep users in flow: Minimize switching between tools or screens.
  • Preserve human authority: AI assists, humans decide.
  • Use familiar patterns: Tooltips, highlights, and comparisons make new tech feel approachable.

In the end, the role of design is to make AI less mysterious and more dependable. By doing so, we allow experts to spend less time on tedious checks and more time on the meaningful work that requires their judgment.

Aram Andreasyan
Industry Leader, Design Expert