AI trust is hard to achieve. As a user, you can tell right away when an AI application feels like “a bit of a fraud.” Sure, something impressive appears right after you enter a prompt. But look closer — the answer often feels slightly off. You’re left wondering: can you really trust this?
You’re not alone — a recent Boston Consulting Group report found that despite massive investments, 74% of companies struggle to consistently realize value from AI. The gap between AI’s potential and its current performance holds back broader adoption, especially in critical areas like legal services or advanced technical support, where accuracy and reliability matter most.
Trust in AI doesn’t emerge automatically just by using powerful foundation models. It must be carefully designed from the start through fine-tuned models, deep domain expertise, clear guardrails, structured workflows, and thoughtful data preparation.
AI today delivers impressive results for simpler tasks. Summarizing documents or creating listicles? Pretty solid — even if they come with a sprinkle of those all-too-familiar LinkedIn emojis (busted!).
Image generation, too, can be stunning — useful for directional inspiration or quick concept drafts.
But when it comes to more complex outputs — like AI-generated presentations or strategic narratives — things often fall apart and trust fades fast. The results can feel disjointed, misrepresent important details, or miss the point entirely.
Like many, I’m optimistic about AI’s potential. But right now, unlocking that potential still requires too much manual intervention, quality control, and editing — especially when I’m working on professional, customer-facing content.
Tools like Gamma, Canva, and Midjourney help with visual assets, but I still find myself in a cycle of tweaking, rerunning, and reworking more than I’d hoped.
No free rides, I suppose.
In "The Foundations of a Good AI UX," RunLLM CEO Vikram Sreekanti highlights two essential ideas: visibility and control. Visibility ensures users see exactly what the system is doing; control allows them to guide and adjust it as needed.
Trustworthy AI transparently reveals its reasoning, actions, and even uncertainties. Users must easily flag issues and make corrections without starting over. This isn’t superficial UX polish — it’s fundamental to reliable AI. Real trustworthiness means the product is debuggable: errors are clear, fixable, and don’t require restarting from scratch.
Many AI tools today fall short on visibility and control. They look helpful on the surface — with sliders, toggles, and prompts — but leave users guessing at what’s really happening underneath.
Take Midjourney, for example. It gives me what feels like horseshoe-and-hand-grenade–level control. I wouldn’t go so far as to call them fake controls — but I’ve definitely sworn under my breath using it. It’s great when I need something that looks good directionally. But if I’m trying to communicate a specific message, in a consistent style, with recurring characters? It breaks down fast.
You can tweak sliders for stylization, weirdness, and variety, or upload reference images. It feels like control — but it’s not. The results are unpredictable, inconsistent, and frustrating. Miss the mark, and you’re stuck regenerating endlessly. It’s like pulling the handle on a slot machine — over and over, hoping for something usable.
Cursor, by contrast, shows its work. It tells you exactly which code changes it plans to make, in plain language and as highlighted diffs, then lets you approve, reject, or tweak each one.
This fine-grained feedback loop gives developers real control. You're never left wondering what it’s doing or whether your input registered. That transparency — and that tight loop — is a big part of why Cursor has won over so many developers so quickly. Coinbase’s Brian Armstrong says 100% of their coders use it.
We’re big fans too. It’s visibility and control, built in.
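To make that pattern concrete, here's a minimal sketch of the propose-review-apply loop that Cursor-style tools embody. This is not Cursor's actual implementation; the `ProposedEdit` type and the `apply_diff` and `revise` helpers are hypothetical stand-ins for whatever the real tool does under the hood.

```python
# A minimal sketch of a propose-review-apply loop (hypothetical names,
# not Cursor's actual implementation).
from dataclasses import dataclass

@dataclass
class ProposedEdit:
    file: str
    description: str  # plain-language summary of the change
    diff: str         # highlighted/unified diff shown to the user

def review_and_apply(edits: list[ProposedEdit]) -> None:
    for edit in edits:
        # Visibility: show exactly what the system intends to do.
        print(f"\n{edit.file}: {edit.description}\n{edit.diff}")
        # Control: nothing is applied without an explicit decision.
        choice = input("[a]ccept / [r]eject / [e]dit? ").strip().lower()
        if choice == "a":
            apply_diff(edit)          # hypothetical helper: write the change
        elif choice == "e":
            apply_diff(revise(edit))  # user tweaks the diff before applying
        # anything else is treated as a rejection; nothing changes silently

def apply_diff(edit: ProposedEdit) -> None:
    ...  # hypothetical: apply the diff to the file on disk

def revise(edit: ProposedEdit) -> ProposedEdit:
    ...  # hypothetical: let the user edit the diff inline
    return edit
```

The point isn't the code; it's the shape of the interaction. Every change is visible before it happens, and nothing lands without an explicit yes.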
If trust is the desired outcome, design is the lever.
These four principles offer a practical foundation for building AI products that feel honest, reliable, and human-centered — not just smart.
1. Transparency – Expose the system’s logic.
Example: Harvey.ai explicitly cites legal documents, enabling attorneys to verify each claim easily.
2. Honesty – Make failure modes obvious.
Example: When confidence drops, Anthropic’s Claude openly says, “I’m unsure about this,” letting users decide the next steps (see the sketch after this list).
3. Control – Empower small, low-friction interventions.
Example: Windsurf lets developers adjust AI-generated suggestions inline — before accepting them — giving fast, contextual control without starting from scratch.
4. Responsiveness – Adapt over time through user feedback.
Example: RunwayML learns from each interaction — asking for feedback after edits and improving future outputs based on user preferences.
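To ground the honesty and transparency principles in something concrete, here's a minimal sketch of a confidence gate: when the system's own confidence estimate falls below a threshold, it says so instead of presenting the answer as fact, and it always shows its sources. The `Answer` type, the threshold value, and the confidence field are illustrative assumptions, not any vendor's API.

```python
# A minimal sketch of a confidence gate (illustrative assumptions only;
# the Answer type and threshold are not any vendor's API).
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float   # 0.0 to 1.0, however the system estimates it (assumption)
    sources: list[str]  # citations the user can verify (transparency)

CONFIDENCE_THRESHOLD = 0.7  # illustrative; a legal or support product would tune this per domain

def present(answer: Answer) -> str:
    citations = "Sources: " + ", ".join(answer.sources)
    # Honesty: surface uncertainty instead of hiding it.
    if answer.confidence < CONFIDENCE_THRESHOLD:
        return (
            "I'm not confident in this answer; please verify it before relying on it.\n"
            f"{answer.text}\n{citations}"
        )
    # Transparency: always show where the claim came from.
    return f"{answer.text}\n{citations}"
```

In practice, the control principle slots in right after this: a flagged answer becomes a prompt for the user to correct or re-scope the request, not a dead end.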
Trust doesn’t happen instantly; it accumulates through consistent performance. Products earn trust by gracefully managing edge cases, recovering quickly from errors, and continually improving through user input.
Visibility and control aren’t just best practices; they’re essential. The next generation of AI products will win on reliability over novelty — tools that feel like genuine teammates and empower rather than frustrate users.
Trust isn’t merely a feature — it’s the product itself.