The Endgame for LLMs Might Look Like Windows XP

Hear me out.
1. The commercial space already pays for heavy, local tools
Mechanical engineers pay thousands of dollars per seat for their productivity software. Those profits go directly into improving the tools. Sometimes there are cloud add-ons, but the bulk of the functionality is local and offline. The feature sets are deep: automated FEA, rendering, parts databases. You run this on powerful machines with serious GPUs, fast CPUs, and large amounts of memory.
Graphics artists do the same with rendering engines. The money funds better pipelines, shaders, and SDKs for in-house work. All of it runs locally on heavy hardware.
Electrical engineers buy MATLAB, Simulink, and other specialized packages. Industrial engineers rely on AutoCAD, LabVIEW, and so on.
Across disciplines, the pattern is the same:
- Hardware intensive to run
- Indispensable for professionals
- Purchased by companies that see them as essential to success
In some cases, like Excel/Word, work software becomes the de facto home software as well.
However …
2. Running LLMs as a service is expensive
LLM companies like OpenAI, Anthropic, and others face two big problems:
- The core tech is very costly to run at scale for end users.
- Building and maintaining the surrounding infrastructure adds even more cost.
I cannot make these arguments as well as this article does.
This limits how cheap and open-ended a hosted LLM service can realistically be. The article above implies a $100,000/year cost to run an AI agent!
3. A better model: license LLMs as installable software
LLMs aren’t Model Ts replacing horses. They’re the internal combustion engine: a transformative core technology that powers many different products but isn’t a product by itself. Even a chat interface is a product built on top of the engine. So there are really two markets here: LLMs and LLM-enabled products.
AI companies are trying to capture both the LLM market and the product market by licensing LLM access. Fine, but they’re doing it while also building and operating the hardware the products run on. That works for a text editor; it’s catastrophic for a compute-hungry, insanely popular LLM.
In the future, Anthropic, OpenAI, Google, and others could:
- Sell custom-made LLMs directly for customer use
- Build a range of software products that run on them
- Keep separate revenue streams for core tech and end-user tools
Imagine deleting the massive inference backend and instead selling a license for “the world’s best coding LLM” that you install and run locally. Claude Code already works fine as a concept. Many tools already support OpenAI style APIs. Consumer hardware can run small or quantized models today, and inference-optimized chips are only getting better. NVIDIA can remain king of training, while others optimize for running models.
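The drop-in swap described above is what "OpenAI-style APIs" buys you in practice. A minimal sketch of the idea: the endpoint URL and model name below are assumptions for illustration, but local servers such as llama.cpp, Ollama, and vLLM really do expose this chat-completions wire format, so a tool written against it only needs a different base URL to target a locally installed model instead of a hosted one.

```python
import json

# Hypothetical local endpoint; the URL and model name are illustrative.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload.

    A tool written against this format switches from a hosted service
    to a locally installed model by changing only the base URL.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("local-coder-7b", "Write a binary search in C.")
print(json.dumps(payload, indent=2))
```

The point isn't the four lines of dict-building; it's that the ecosystem has already standardized on a wire format, so "sell the model, not the datacenter" doesn't require rewriting the tools.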
You could offer:
- Consumer models: smaller, optimized for laptops and desktops
- Commercial models: larger, more capable, for professional workstations
- Open-source models: for those who don’t want to pay
The companies already buying engineering-grade PCs could easily justify the cost, the app ecosystem wouldn’t have to pay the AI tax, and AI companies could escape the endless capex and focus on building the best LLMs (and maybe some apps).
4. The Excel analogy
If LLMs are sold as “commercial offerings to power your local tools,” then someone gets to write the LLM equivalent of Word/Excel. Popular in professional environments, but with spillover into home use because people want the same capabilities everywhere.
And given the broad appeal of LLMs, they could end up embedded in every work machine, making them as expected as having Excel installed.
5. The likely future: PCs and phones ship with AI core hardware and software
The endgame could be a coherent “operating system” built around licensed LLMs, used both at work and at home. All your tools (code editors, web apps, productivity software) would run on it. You’d pay for the hardware and the software license, just like with today’s PCs and phones. Apps would be built on the assumption that the OS includes an LLM module to interface with.
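What would "apps assume the OS includes an LLM module" look like to a developer? A purely hypothetical sketch: none of these names are real APIs. The point is that applications would target a stable interface the OS guarantees, not a specific vendor's backend, the same way they target the filesystem today.

```python
# Hypothetical sketch: every name here is invented for illustration.
from typing import Protocol

class SystemLLM(Protocol):
    """Interface an OS could guarantee to every installed app."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...

class EchoLLM:
    """Stand-in backend so the sketch runs; a real OS would route calls
    to whichever licensed model the user has installed."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[model output for: {prompt}]"

def summarize(doc: str, llm: SystemLLM) -> str:
    # An app calls the OS module the way it calls the filesystem today.
    return llm.complete(f"Summarize: {doc}")

print(summarize("quarterly report", EchoLLM()))
```

Under this arrangement, swapping the consumer model for the commercial one is a backend change the OS handles; the apps never notice.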
Comments
I haven't configured comments for this site yet, as there doesn't seem to be a good, free solution. Please feel free to email, or reach out on social media, if you have any thoughts or questions. I'd love to hear from you!