cschreib@programming.dev to Programmer Humor@programming.dev • What it's like to be a developer in 2024
6 months ago

I don’t know what shady shit you’re referring to. They do AI, but I don’t use any of that. IMO their core strength is the search engine and how it works for you rather than against you.
How I wish CUDA were an open standard. We use it at work, and the tooling is a constant pain. Since it's almost entirely controlled by NVIDIA, there's no alternative toolset, which means little pressure to make it better. Clang being able to compile CUDA code is an encouraging first step, meaning we could possibly do without nvcc. Sadly, the CMake support for that hasn't landed on Windows yet. And that still leaves the SDK and runtime entirely in NVIDIA's hands.
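For reference, building a .cu file directly with clang already roughly looks like this on Linux; just a minimal sketch, where the file name, the CUDA install path, and the sm_80 target are placeholders for whatever your setup needs:

```cpp
// Sketch only: compile with clang instead of nvcc, e.g.
//
//   clang++ -x cuda saxpy.cu --cuda-gpu-arch=sm_80 \
//           --cuda-path=/usr/local/cuda \
//           -L/usr/local/cuda/lib64 -lcudart -o saxpy
//
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch syntax works with clang's CUDA support too.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

CMake can also be pointed at clang via `CMAKE_CUDA_COMPILER`, but as said above, that's exactly the part that isn't usable on Windows yet.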
What irritates me the most about this SDK is the versioning and compatibility madness. It's especially bad on Windows, where the SDK is very picky about the compiler/STL version and hence won't let us turn on C++20 for CUDA code. I've also never managed to get my head around the backward/forward compatibility between SDK and hardware (let alone drivers).
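One workaround is to guard the C++20-only bits in headers shared between host code and CUDA translation units, so the CUDA side can stay on C++17. Just a sketch with made-up names, not our actual code:

```cpp
// Shared header: usable from host C++20 TUs and from C++17 CUDA TUs.
#include <cstddef>

#if !defined(__CUDACC__) && \
    (__cplusplus >= 202002L || (defined(_MSVC_LANG) && _MSVC_LANG >= 202002L))
  // Host-only, C++20 path: std::span gives a non-owning view for free.
  #include <span>
  using float_view = std::span<const float>;
#else
  // CUDA (or pre-C++20 host) path: a minimal hand-rolled stand-in,
  // not a drop-in replacement, just enough for our purposes.
  struct float_view {
      const float* data = nullptr;
      std::size_t  size = 0;
  };
#endif
```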
And the bloat. The runtime (including cuDNN, cuBLAS, etc.) contains many GBs of pre-compiled GPU code for seemingly every possible architecture. I'd be curious about the actual number, but we probably use 1% of that code, yet we have to ship the whole thing, every time.
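If you want a rough idea of which architectures your installed base actually needs, the runtime can at least tell you the driver/runtime versions and each device's compute capability. Quick sketch:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // CUDA version supported by the installed driver
    cudaRuntimeGetVersion(&runtimeVersion);  // version of the runtime we ship alongside the app
    std::printf("driver CUDA version: %d, runtime version: %d\n",
                driverVersion, runtimeVersion);

    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0) {
        std::printf("no CUDA devices visible\n");
        return 0;
    }

    for (int d = 0; d < deviceCount; ++d) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, d);
        // prop.major/prop.minor is the compute capability, i.e. the sm_XY
        // architecture this machine actually needs device code for.
        std::printf("device %d: %s, compute capability %d.%d\n",
                    d, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```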
If CPU vendors were able to come up with standard architectures, why can't GPU vendors? So much time, effort, energy, and bandwidth wasted because of this.
How do you people manage this?