Programming Note: No blog post next week for Thanksgiving. We’ll be back after that with some content to wrap up the year. To prepare for Thanksgiving, we thought we’d give you a hot take this week, so you have something to discuss over dinner that’s not politics.
2025 is going to be the year that AI has to deliver on all the promise and hype from the last couple of years — or else! As such, AI strategies are all the rage today, just like cloud strategies and mobile strategies were before them. Unfortunately, planning your AI strategy is mostly a waste of time, and you should probably stop.
What do we mean? Well, an AI strategy is a way for you to plan how your organization is going to adopt AI — and also of course a way for you to attract investors and appease your board. The problem is that having a single strategy for adopting AI doesn’t make much sense. There are very few principles that apply well across all the possible applications of AI within an organization, so there’s only really one AI strategy that we support: Use more AI. (Yes, we’re biased, but we still think we’re right!)
There are many pitfalls that you’ll encounter when trying to develop a single AI strategy for a whole organization, but the one that we’ve repeated in basically every blog post this year is that things are simply changing too fast for anyone to formulate general organizational goals for how AI should be used.
Moreover, we believe (increasingly strongly) that much of the value derived from AI in the near term is going to come from AI application companies. In the same way that you wouldn’t use the same hiring process to evaluate software engineers and account executives, you shouldn’t use the same criteria to evaluate two AI products in different spaces. You also wouldn’t have a centralized committee determine which parts of the business should hire what kinds of people; you would let the head of each department figure out what’s best for them. Dictating that the sales org adopt AI, independent of whether AI sales tools actually fit their needs, is a fool’s errand.
Finally, AI strategies pretty quickly tend to run amok in terms of bureaucratic restrictions. In one of the more frustrating experiences we’ve had, a CISO asked us to fill out a questionnaire about whether we had evaluated the societal impact of our AI tooling. We wanted to yell, “You’re building a database, and we’re answering questions about it!” — but we kept our cool. Safe to say, we didn’t win that customer. This isn’t to say that you shouldn’t be asking this question about some products, but it shouldn’t be a blocker for every use case. This is admittedly an extreme example, but it’s not an isolated one. We’ve had plenty of customers tell us that they have to run all AI purchases by a centralized committee, which can take months to review decisions — even for something as simple and common as GitHub Copilot.
With all that complaining out of the way, what should your AI strategy be? As we said above, we think that basically everyone — ourselves included at RunLLM — could be doing a better job of taking advantage of AI. Your strategy should be to use more AI!
More seriously, here are a few principles that we think are generally applicable. To be clear, this isn’t really a strategy. There’s no centralized process or order of operations here. Instead, it’s a set of guideposts that should help you make clearer and more thoughtful decisions.
We’re very obviously biased, but if you’re reading this, you probably are too. We think more people should be using AI in general, and more people should be using RunLLM in particular. With how quickly things are changing, we think it’s one of those hit-you-over-the-head obvious points to say that the goal should be to get AI into as many areas as it makes sense to. If, instead, you’re spending your time on nuanced and complicated AI strategies, you’ll probably find that your plans have become obsolete by the time you’ve finished writing them down.