

(Clojure is (parentheses (diluted with (Java))).)


A quick search might answer your question, but at its core, social engineering treats people as the vulnerability rather than anything software-related.


People talk about security here occasionally, though there are better-suited communities for discussing it. Also, bugs are the most common source of vulnerabilities (though social engineering is a much more common type of attack), so the response seems reasonable to me.
Regarding hacking, for white hat I believe there are communities, though I’d imagine there’s overlap with cybersecurity communities. I don’t know of, nor would I recommend, anything black hat.


working as a web dev, 8+ hours at a keyboard. by evening my hands are tired and i just want to zone out.
been trying to do 20 min before work instead. it’s not much but it’s consistent. any other devs who play — how do you manage it?


You could put it into the archinstall script and just never finish the installation if there is no age set. You could also prevent a user from logging into an account that has no age set; this could be achieved by modifying core packages in the base package.
My (rather limited) understanding is that Arch can be installed both without the archinstall script and without a user. Also, the rest of your comment covers how stupid it is to require a value anyway since people can put whatever they want.
My (rather limited) understanding is that Arch can be installed both without the archinstall script and without a user. Also, the rest of your comment covers how stupid it is to require a value anyway since people can put whatever they want.
Outside of that, it’s all open source. It’s possible to fork and remove the field entirely from an install script, distro, or even systemd itself.
Nobody can enforce this in the open source world. This is honestly the strongest argument for an open source exemption in these laws. It cannot be enforced on open source OSs.


Alpine is less obscure now because of containers, but I haven’t considered running it as a desktop OS.


No, it won't. I wasn't suggesting someone should use rustc directly. You're already using Rust, so using cargo isn't adding to the supply chain.
That being said, there was one time I needed to use rustc directly. We had an assignment that needed to be compilable from a single source file. I couldn't bundle a Cargo.toml, so I provided a build script that invoked rustc directly.


I don’t see why not. Cargo is fundamentally just a fancy wrapper around rustc anyway. Sure, it’s a really fancy wrapper that does a lot of stuff, but it’s entirely possible to just call rustc yourself.


My favorite kind of graph is one where an entire axis is unlabeled:

You see this a lot with marketing graphs. They say nothing, but they’re designed to convince you that the graphs mean something.
Anyway, it’s neat they found and fixed, supposedly, some real bugs. I’m curious how many fake reports they had to sift through to find any real ones.


You can run rustc directly! You just need to pass about 30 different parameters to it as well as a list of all the dependencies you use and…
Look, it works for small projects.
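For what it’s worth, a hand-rolled build along those lines might look something like this (the crate names and file paths are made up; cargo normally generates the equivalent flags for you):

```shell
# Compile a dependency to an .rlib first, then pass it explicitly to the final build.
rustc --edition 2021 --crate-type lib --crate-name mydep mydep/lib.rs -o libmydep.rlib

# Build the binary, naming every dependency by hand via --extern.
rustc --edition 2021 -O main.rs --extern mydep=libmydep.rlib -o app

# Every additional dependency means more --extern (and -L) flags to maintain yourself.
```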


The success rate of main branch builds compounds this further. It has fallen to 70.8%, its lowest in over five years – 30% of attempts to merge code for production are now failing.
…
The integration bottleneck finding is credible. If you’re generating code faster than your team can review and integrate it, that’s a genuine problem this data is consistent with.
I disagree here. If more attempts are failing, then more attempts are needed to merge a branch. If the pipeline is running more often while fewer branches are merging, it’s also possible that people need to go through more revisions to merge their code than they did before.
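The attempts arithmetic can be made concrete: if each merge attempt succeeds independently with probability p, the expected number of attempts per merged branch is 1/p (a geometric distribution). A quick sketch, where the 90% comparison figure is my own hypothetical baseline, not a number from the report:

```rust
// Expected attempts per merged branch, assuming independent attempts
// that each succeed with probability p (geometric distribution).
fn expected_attempts(p: f64) -> f64 {
    1.0 / p
}

fn main() {
    // At the reported 70.8% success rate vs. a hypothetical earlier 90%:
    println!("{:.2}", expected_attempts(0.708)); // ~1.41 attempts per merge
    println!("{:.2}", expected_attempts(0.90));  // ~1.11 attempts per merge
}
```

So a falling per-attempt success rate alone inflates pipeline runs, even if the same number of branches ultimately merge.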
People using AI to write their entire PR will find that fixing issues with it takes more work. They often don’t know how the PR works. I wouldn’t be surprised if this resulted in PRs taking longer to merge as a result, which would contradict CircleCI’s claims of teams benefiting from AI.
I believe the report has insufficient data to draw any meaningful conclusions. The data is interesting, at least.


This decoupling of commands from effects is interesting, but I don’t think I’d use it in most places. In this specific example, passing in an interface for an API client (or whatever other thing you want to call) lets you create a mock client and pass that in during testing, and different environments should be configured differently anyway.
There is one place I’d consider this, though, and it’s incredibly specific: an MTG rules engine. Because of replacement effects, triggered abilities, and so on, being able to intercept everything from starting turns to taking damage means you can apply the various game effects as they come up rather than scattering that logic all over the codebase. I’m tempted to try this and see if it works, actually.
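The interception idea might look roughly like this minimal sketch (all names and the event shapes are invented; a real rules engine would need far more structure):

```rust
// Sketch: game events pass through a chain of interceptors before resolving.
// Each interceptor can pass an event through unchanged or replace it, which
// is roughly how MTG-style replacement effects behave.

#[derive(Debug, Clone, PartialEq)]
enum Event {
    DealDamage { target: String, amount: u32 },
    DrawCard { player: String },
}

// A replacement effect: takes an event, returns the (possibly replaced) event.
type Interceptor = fn(Event) -> Event;

// "If a source would deal damage to you, prevent 2 of that damage."
fn prevent_two(event: Event) -> Event {
    match event {
        Event::DealDamage { target, amount } => Event::DealDamage {
            target,
            amount: amount.saturating_sub(2),
        },
        other => other,
    }
}

fn apply_interceptors(mut event: Event, interceptors: &[Interceptor]) -> Event {
    for i in interceptors {
        event = i(event);
    }
    event
}

fn main() {
    let interceptors: Vec<Interceptor> = vec![prevent_two];
    let event = Event::DealDamage { target: "you".into(), amount: 5 };
    let resolved = apply_interceptors(event, &interceptors);
    println!("{:?}", resolved); // damage reduced from 5 to 3
}
```

The point is that the damage-prevention logic lives in one interceptor instead of being checked at every site that deals damage.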


On the contrary, I have a 1440p 120Hz primary monitor and a 4k 60Hz vertical side monitor, and I can only seem to make that setup work with Wayland. I’ve been using only Wayland this whole time as a result.
As for all your issues with it:
The rest of these aren’t issues I’ve had to deal with at all, but I can see them coming up. Wayland does have some issues, but nothing I’ve come across that’s major enough to bother me all that much.


Poor lonelycpp. They became a target for major nations worldwide for such a dumb reason.
If I were them, I’d do anything I could to break the app immediately and force them to use anything else. I wouldn’t want to be in the supply chain for an app like this, where being compromised is immediately an RCE incident for a government app.


What I usually push for is that every CI task either sets up the environment or executes that one command™ for that task. For example, that command can be uv run ruff check or cargo fmt --all -- --check or whatever.
Where the CI-runs-one-script-only (or no-CI) approach falls apart for me is when you want a deployment pipeline. It’s usually best not to have deployment secrets stored on any dev machine, so a good place to keep them is in your CI configs (all major platforms support secrets stored with an environment, variable groups, etc.). Of course, I’m referring here to work on a larger team, where permission to deploy needs to be transferable, but you don’t really want to be rotating deployment secrets all the time either. This means you’re running code in the pipeline that you can’t run locally in order to deploy it.
It also doesn’t work well when you build for multiple platforms. For example, I have Rust projects that build and test on Windows, macOS, and Linux, which is only possible by running those on multiple runners (each on a different OS and, in macOS’s case, CPU architecture).
The compromise of one-script-per-task can usually work even in these situations, from my experience. You still get to use things like GitHub’s matrix, for example, to run multiple runners in parallel. It just means you have different commands for different things now.
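As a sketch, the one-command-per-task compromise with GitHub’s matrix might look like this (job names are illustrative, and the action versions are whatever is current):

```yaml
# Hypothetical GitHub Actions config: one command per task, fanned out per-OS.
name: ci
on: [push]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --all            # the one command for this task
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --all -- --check  # a different command for a different task
```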


As a first language, JS is too much. They would need to learn three languages to make websites (JS, CSS, and HTML).
I’d start with Python. It’s easy to learn, and modern Python gives you the tools to write code that’s easy to read and follow without being too verbose.
uv should make things very easy to set up too. Install uv, then give them a starter repo with the Python version set. uv run should just work after that, no manual venv/conda/etc. nonsense involved.
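A minimal sketch of that starter setup (the project name is a placeholder, and the exact files uv generates may vary by version):

```shell
# Hypothetical starter setup: pin a Python version and let uv handle the rest.
uv init starter-project      # scaffolds pyproject.toml and a starter script
cd starter-project
uv python pin 3.12           # writes .python-version for the repo
uv run main.py               # resolves the environment on the fly and runs
```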


My school didn’t have a course, but the test was entirely in Java.
C would have been a lot more useful.


Like ISO 8601. /s
I recently built a voice-to-text agent in Rust
Agent…? Uh, okay, let’s just use that word for everything now.
I did not have the Rust toolchain installed on my system. I simply told the coding agent that I use Nix, and it figured out how to pull in the entire Rust toolchain through Nix, compile the project inside an isolated shell and produce a working binary.
Sorry, where is the part where you built something?
Anyway, NixOS gets a lot of praise. Maybe it’s something I should try if Manjaro doesn’t survive its current drama (though it seems like they have a path forward now).
Wtf?