Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces for the same kind of algorithms that corporations have been using for decades and that the real “AI Revolution” is that regular people have access to them, where before we did not.
From the article:
Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or Tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.
Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
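A toy sketch of the contrast the quoted passage describes, using a hypothetical sales table (the schema and numbers are made up for illustration): the 2015 workflow required structured data plus a query language just to answer a simple business question.

```python
import sqlite3

# Hypothetical structured data -- the prerequisite the article points to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("West", 1200.0), ("East", 950.0), ("West", 300.0), ("East", 400.0)],
)

# The BI-era version: you need to know SQL, the schema, and the right terms.
row = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC LIMIT 1"
).fetchone()
print(row)  # ('West', 1500.0)

# The LLM-era version is just the question itself, in plain English:
# "Which region had the highest total sales?"
```

The point isn't that the SQL is hard in itself, but that every step before it (buying the tool, structuring the data, learning the language) was overhead that conversation removes.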
Three-letter agencies are 50 years beyond what is publicly accessible.
We haven’t invested sufficiently in them for this to be plausible. Their incentives haven’t pushed them to get far ahead, either.
I don’t really think that’s true, and I don’t know why people here think the article is a bad take. It’s real simple. For decades, corporations and institutions have had the upper hand. They have vast resources to spend on computational power, enterprise software, and algorithms that keep things asymmetrically efficient. Algorithms don’t sleep, don’t get tired, and follow one creed: ABO, Always Be Optimizing. But that software costs a lot of money, and you have to know a lot of other things to use it correctly. Then along comes the language model. Suddenly, you just talk to the computer the way you’d talk to another human, and you get what you ask for.