This week, we shared our vision for the AI Support Engineer and the key features that set it apart from generic AI tools. The AI Support Engineer we built is the result of two years of thinking deeply about what it takes to develop a great AI-native application experience. We've also learned a lot from using other AI apps, some of which have become our favorites.
What we’ve found is that the best experiences with AI usually happen when things just work — when the system figures something out before we do or when work happens behind the scenes. It’s a kind of magic that most products don’t deliver. But we also know that making AI feel effortless takes relentless attention to detail and quality — how the product makes you feel when you’re using it. And that’s honestly been one of the harder aspects of this journey.
As we’ve built RunLLM, and to keep things straight for ourselves, we developed a set of principles about what makes a great AI product. Many of the features we’ve shared this week stem from these principles. We’re proud of the progress we’ve made, and, if you’ll indulge us, we’re excited to share them with you.
One of the most magical things that modern AI enables (which simply wasn’t possible before) is the ability to figure things out for you. By default, an AI product shouldn’t ask you to fill out forms or spell out every detail. You should only have to point it in the right direction, and it should anticipate your needs from there. Instead of waiting for you to constantly tell it what to do, it should just work.
No one trusts a system (or a person) that pretends to know everything. The best AI recognizes when it’s uncertain, quantifies its confidence, provides citations, and escalates complex issues to the right experts. With both humans and AI, credibility comes from knowing when to seek input rather than bluffing through uncertainty. Because AI is doing so much behind the scenes, it needs to build trust with you by being transparent and calling out when it doesn’t know something.
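To make that concrete, here is a minimal sketch of what confidence-gated escalation could look like. The names (`Answer`, `ESCALATION_THRESHOLD`) and the threshold value are illustrative assumptions, not a description of RunLLM's internals.

```python
from dataclasses import dataclass

# Illustrative threshold; a real system would tune this against observed accuracy.
ESCALATION_THRESHOLD = 0.75

@dataclass
class Answer:
    text: str
    citations: list[str]   # sources that back the answer
    confidence: float      # self-assessed confidence in [0, 1]

def respond(answer: Answer) -> str:
    """Return the answer only when it is well supported; otherwise escalate."""
    if answer.confidence < ESCALATION_THRESHOLD or not answer.citations:
        # Low confidence or no supporting sources: hand off to a human expert
        # instead of bluffing through the uncertainty.
        return "I'm not sure about this one, so I've escalated it to the team."
    sources = ", ".join(answer.citations)
    return f"{answer.text}\n\nSources: {sources}"
```

The specifics will vary, but the shape is the point: the system knows when to answer, when to cite, and when to hand off.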
Information overload is a problem that AI is well-suited to solve when designed correctly. The best AI products distill a high volume of information into something manageable by highlighting what matters and presenting information in a way that is clear, concise, and actionable. The key is balance: provide a simple answer at first but allow users to go deeper when necessary. AI needs to know how to “speak in headlines” but then also be ready to give you all the details.
What AI applications are ultimately about is doing work for you. The most popular example of this today is OpenAI’s Deep Research: You give it an area to explore, go focus on other things, and come back to find a full-fledged report ready for you. AI applications may not be ready to take on months-long projects (yet?), but they certainly should be able to work unsupervised.
Each of these core principles has directly influenced the way we’ve been building (and will continue to build) RunLLM.
Building an onboarding experience that figures things out for you was one of the most fun things we’ve done at RunLLM. For most of the past year, adding data to RunLLM required filling out a long form — what kind of data you were ingesting, where your data lived, which URLs to crawl, and so on. Once you set up an assistant, learning about all the things RunLLM did required having, well, a person tell you.
Now, all you have to do is point RunLLM at the URL for your docs and it figures out the rest. Within a couple of minutes, your assistant is ready to go, and RunLLM will teach you about itself along the way — how to add more data, how to teach it when it gets something wrong, and how to deploy it to your users.
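As a rough illustration of the "just point it at your docs" idea, here is a small sketch of how a system might discover pages to ingest from a single docs URL by reading the site's sitemap. This is a generic crawling pattern with hypothetical names, not RunLLM's actual ingestion pipeline.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin
from urllib.request import urlopen

def discover_doc_pages(docs_url: str) -> list[str]:
    """Given a docs base URL, return candidate page URLs from its sitemap."""
    sitemap_url = urljoin(docs_url, "/sitemap.xml")
    with urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    # Sitemap entries live under <url><loc>...</loc> in the sitemap namespace.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text for loc in tree.findall(".//sm:loc", ns) if loc.text]

# Hypothetical usage: point at the docs root, get back everything to ingest.
# pages = discover_doc_pages("https://docs.example.com/")
```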
Most users today are (rightly) skeptical of AI. We’ve all seen LLMs hallucinate. At RunLLM, trust is a top priority, especially because our audience relies on accurate technical answers in high-stakes situations. Choosing the wrong technology, misconfiguring a system, or struggling to get started can cost developers time, money, and even customers. Building that trust, by quantifying our confidence, citing our sources, and escalating to your team when we’re unsure, is central to how we’ve designed RunLLM.
RunLLM processes thousands of customer conversations each week. Buried in those interactions are valuable insights — but no team can manually sift through them all. That’s why RunLLM structures this data into clear, actionable insights, identifying patterns, surfacing key issues, and highlighting opportunities — whether for your team to act on or for RunLLM to handle automatically.
We aim to proactively separate signal from noise. We start by automatically categorizing all questions by topic and generating brief summaries, so you don’t have to read hundreds of conversations per topic. On top of that, we help you uncover customer use cases, track trends in user behavior, identify documentation gaps, and surface feature requests — all in a way that’s immediately actionable.
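For a sense of what categorize-and-summarize can look like in code, here is a minimal sketch that groups conversations by topic and produces one digest per topic. The `classify_topic` and `summarize` helpers are trivial stand-ins for LLM calls; they are assumptions for illustration, not RunLLM's implementation.

```python
from collections import defaultdict

def classify_topic(conversation: str) -> str:
    """Stand-in for an LLM topic classifier; a real system would call a model here."""
    if "install" in conversation.lower():
        return "installation"
    if "error" in conversation.lower():
        return "errors"
    return "general"

def summarize(conversations: list[str]) -> str:
    """Stand-in for an LLM summarizer; here we only report volume."""
    return f"{len(conversations)} conversations on this topic"

def build_insights(conversations: list[str]) -> dict[str, str]:
    """Group conversations by topic, then produce one brief digest per topic."""
    by_topic: dict[str, list[str]] = defaultdict(list)
    for convo in conversations:
        by_topic[classify_topic(convo)].append(convo)
    # One digest per topic means you read a handful of summaries,
    # not hundreds of raw conversations.
    return {topic: summarize(convos) for topic, convos in by_topic.items()}
```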
AI products are ultimately about doing work so that you don’t have to. RunLLM is always working in the background — whether it’s helping your customers be successful or helping you understand what your customers are doing. This frees up your time to focus on your highest-value customer relationships.
To make your customers successful, we don’t stop at a single answer. We look for alternatives, search the internet, and execute code — all with the goal of maximizing the likelihood that we solve the problem at hand. Simultaneously, we’re always looking for ways to help you improve. The insights mentioned above are periodically updated for you, and RunLLM will proactively flag documentation issues and suggest updates.
When done well, AI can feel like magic. The best AI isn’t just useful — it’s intuitive, responsive, and even delightful. Few things make us happier than hearing a customer ask, “How did it figure that out?!”
Of course, the answer isn’t magic — it’s thoughtful design, relentless iteration, and a deep focus on user experience. Creating seamless AI requires rethinking product design from the ground up and constantly refining how the system anticipates, adapts, and assists.
We’ve learned a lot through this process, and we hope sharing these principles helps others think about what makes AI feel more natural, intuitive, and effective.
And we’re just getting started — with more background data analysis, smarter suggestions, and even more proactive insights on the way.