

I was talking about the Gulf War in the 90s: https://youtu.be/b5EeKsEFpHI
I think the Iraqi tanks were mostly blown up by the time Bush Jr did his invasion.
Mixture of experts has been in use since 1991, and it’s essentially just a way to split up the same process a dense model runs.
Tanks are an odd comparison, because not only have they changed radically since WW2, to the point that many crew positions have been entirely automated, but also because the role of tanks in modern combat has been radically altered since then (e.g. by the proliferation of drone warfare). They just look sort of similar because of basic geometry.
Consider the current crop of LLMs as the armor that was deployed in WW1: we can see the promise and potential, but it has not yet been fully realized. If you tried to match a WW1 tank against a WW2 tank it would be no contest, and modern armor could destroy both of them with pinpoint accuracy while moving at full speed over rough terrain, outside of radar range (e.g. what happened in the invasion of Iraq).
It will take many generational leaps across many diverse technologies to get from where we are now to realizing the full potential of large language models, and we can’t get there through simple linear progression, any more than tanks could just keep adding thicker armor and bigger guns; it requires new technologies.
The gains in AI have been almost entirely in compute power and training, and those gains have run into powerful diminishing returns. At the core it’s all still running the same Markov chains as the machine learning experiments from the dawn of computing; the math is over a hundred years old and basically unchanged.
For us to see another leap in progress we’ll need to pioneer new calculations and formulate different types of thought, then find a way to integrate that with large transformer networks.
Arguably fighting against Japan and Germany in WW2 is one of the only times the US used their military in a justifiable way. Fascism had to be stopped.
The Japanese military expected to lose 20 million people in the very first battle of the invasion, and the Americans were considering using poison gas because the casualties of fighting it out in the streets would have been in the millions of troops. People don’t realize how dark it was in 1945, food shipments had all but ceased, Japan was entering a famine; if the war had dragged on through a land invasion it would have been cataclysmic.
We’re just talking about the filename, the exact creation time is tracked by the OS. Plus I’d imagine most documents also have a time and date inside. The file name is mostly for sorting and human readability.
I understand you feel very strongly about four digit years, but I really don’t see any situation that I couldn’t sort out with a simple script.
Usually I don’t put dates in file names in the first place, but when I do I use the UTC timestamp; a date without a timezone is inherently fuzzy, and it’s easier to compare and differentiate numerical times.
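A minimal sketch of that convention in JavaScript (the `utcStamp` helper and the `report-` prefix are my own illustrative names, not from any library):

```javascript
// Build a sortable, timezone-unambiguous filename stem from UTC time.
// toISOString() always reports UTC; strip the separators and milliseconds
// so the result is filesystem-safe, e.g. "20240131T235959Z".
function utcStamp(date = new Date()) {
  return date.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, "");
}

// Hypothetical usage: "report-20240131T235959Z.txt"
const name = `report-${utcStamp()}.txt`;
```

Because the stamp is fixed-width and zero-padded, a plain lexicographic sort of the filenames is also a chronological sort.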
If someone used two digit years in their naming convention I wouldn’t even blink, let alone get the woodchipper, life is too short to get angry over stuff like that.
It’s just a filename, calm down. The creation date is tracked by the file system and the repo.
The exact date of creation is usually preserved in the filesystem; we’re just talking about what to name the documents themselves. The filename should be short and to the point, since it gets truncated if it’s too long, and on Windows you only have 260 characters for the entire path to the file plus the name.
Here you go gramps:
// Expand a two-digit year prefix: values above 50 are assumed to be
// 19xx, everything else 20xx (the pivot is arbitrary, pick your own).
const expandYear = (shortD) =>
  parseInt(shortD.slice(0, 2), 10) > 50 ? "19" + shortD : "20" + shortD;
// expandYear("99") → "1999", expandYear("05") → "2005"
ISO 8601 is YYYYMMDD (or YYYY-MM-DD in extended format).
Are you really going to woodchipper someone for leaving off the leading 20? I think we can safely infer the century and millennium with high confidence, so why not trade them for two extra name characters?
You can do plenty of work on an Air. I have one because it was a gift, but I find it pretty convenient. It’s so small and portable. For serious work I have my desktop, but the Air is great for emails and programming on the go; Homebrew runs my entire workflow.
The Air is still very thin.
Windows in particular, I think, gets a pass as ‘good enough’; it’s only when you get into Linux that you really understand how far it has strayed from the light.
You don’t need to spend hours and hours to start, you can dip your toes in with WSL, maybe use a Linux VM for a few tasks that make your life easier at work. It’s not an all-or-nothing affair, but having proficiency in more than one operating system is great professional development regardless of your personal computing preferences.
I’ve found that many people will go to great lengths to avoid learning anything new.
They want to be able to ignore their computers as much as possible, even considering the prospect of alternative software is taxing and upsetting for them.
I think that’s basically how Microsoft and Adobe are so successful, they bought and cheated their way into the default position, and now they can do whatever they want with no real repercussions.
The user wants to click on the same icons with the same names as before; sometimes it’s as simple as wanting the same name. If it’s not called ‘Outlook’ they don’t want it, no matter how well it works.
True! A fully loaded train is about the most efficient way to move humans from one place to another, and has been for over a hundred years.
Lithium is limited, but you can make 150 e-bikes with a single electric car battery. If we could figure out some sort of solid state sodium battery chemistry it wouldn’t even be an issue.
Using electricity in something like an electric car usually requires more emissions to generate the power than would be emitted from the food and respiration required to walk the same distance.
Bicycles are interesting because they improve efficiency so much that it offsets the emissions needed to make the bike, and e-bikes are able to leverage that high efficiency to get 80+ km of travel per kWh (compared to ~6 km/kWh from something like a Tesla).
E-bikes sit in a weird spot where the amount of human effort saved is substantially higher than the carbon footprint of the components.
Which implies the optimal transportation mix would be electric trains+trams with e-bikes to go the last few miles.
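As a rough back-of-the-envelope check (the 80 km/kWh and ~6 km/kWh figures come from the comparison above; the pack sizes are illustrative assumptions):

```javascript
// Efficiency figures quoted above (km of travel per kWh).
const ebikeKmPerKWh = 80;
const teslaKmPerKWh = 6;
// An e-bike goes roughly 13x farther on the same energy.
const ratio = ebikeKmPerKWh / teslaKmPerKWh;

// Illustrative pack sizes: a ~75 kWh car pack vs ~0.5 kWh bike packs
// reproduces the 150-bikes-per-car-battery figure mentioned earlier.
const carPackKWh = 75;
const ebikePackKWh = 0.5;
const bikesPerCarPack = carPackKWh / ebikePackKWh; // 150
```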
The problem is the companies building the data centers; they would be just as happy to waste the water and resources mining crypto or hosting cloud gaming, if not for AI it would be something else.
In China they’re able to run DeepSeek without any water waste, because they cool the data centers with the ocean. DeepSeek also uses a fraction of the energy per query and is investing in solar and other renewables for energy.
AI is certainly an environmental issue, but it’s only the most recent head of the big tech hydra.
True. Though in what tank vs tank combat there was, the advantages of modern armor were stark.