Exactly. Forking for any reason is the essence of FOSS.
Scenarios like the OP's were taken care of right from the start. That's just the legal side, though. Someone still needs to do the actual work, which is why it sometimes fails.
Public funds.
There actually are lots of initiatives (e.g. https://bigdatastack.eu/european-open-source-initiative ), but it's still young, and there are several hurdles between the available public money and contributors actually earning a salary.
Money is not the problem.
either earn a good living being a code monkey, or find a job in a small company that has passion
crazy idea: let's publicly fund FOSS projects, so devs working with passion on stuff they like can actually make a good living, and so sustainable non-profits can hire expertise, marketing and all the other stuff a company needs
the result would be actually good software and happy devs
25 years in the industry here. As I said, there's nothing against learning something new, but I doubt it's as easy as "leveling up".
Both fields profit a lot from experience, and it's as much of a gain for a scientist to become a software dev as for an architect to become a carpenter. It's simply not productive.
there is so much time lost in research institutes because of shoddy programming
Well, that’s the way it is. Scientific code and production code have different requirements. To me that sounds like “that machine prototype is inefficient - just skip the prototype next time and build the real thing right away.”
It's always good to learn new stuff, but in terms of productivity: don't attempt to be a programmer. Rather attempt to write better research code (clean up code, revision control, better commenting, maybe testing…); a tiny sketch of what I mean is below.
Rather, try to improve cooperation with programmers if necessary. Close cooperation, asking stupid questions instead of making assumptions etc. makes the process easier for both of you.
Also don’t be afraid to consult different programmers since beyond a certain level, experience and expertise in programming is vastly fragmented.
Experienced programmers mostly suck at your field and vice versa, and that's a good thing.
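To make "better research code" concrete, here's a minimal sketch: a documented, testable function instead of a loose script. Everything in it (the `normalize` function, the numbers) is made up for illustration:

```python
# Made-up example: a documented, testable function instead of a loose script.

def normalize(samples):
    """Scale samples so they sum to 1. Raises on an all-zero input."""
    total = sum(samples)
    if total == 0:
        raise ValueError("cannot normalize: samples sum to zero")
    return [s / total for s in samples]

def test_normalize():
    # Even one tiny test like this catches most refactoring accidents.
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

if __name__ == "__main__":
    test_normalize()
    print(normalize([3, 7]))  # [0.3, 0.7]
```

That's the whole idea: the docstring and the one assert are already more documentation and testing than most analysis scripts get, without turning the scientist into a programmer.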
Consequence:
Software can only be good when enough people WANT to work on it and with it along the complete life-cycle. Below a critical mass of developers/contributors/testers and (feedback-providing) users, it doesn't happen.
Hence a lot of critical consumer stuff is based on popular open source.
Also, we're entering an era where the difference between hardware, firmware and software gets increasingly blurred, so all of this applies to more and more hardware, too.
byebye unix principles
I only knew thumb = motion/current, but now that you say it, it's clear: people logically used x/y for 2D, but the 2D plane used to be paper, which is (usually) parallel to the earth's surface. Computer screens are perpendicular to it, so Y points up, not away from you.
So this makes sense with paper, TIL. With computers, Z traditionally means depth.
How does projection work in your field? Do X, Y, Z get converted to X, Z, and 2D screen planes have no Y axis?
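For what it's worth, the conversion between the two conventions is usually just an axis swap. A minimal sketch, assuming a Z-up world (paper/CAD style) and a Y-up screen (graphics style); the function names are made up, nothing here comes from a real graphics library:

```python
def z_up_to_y_up(x, y, z):
    # Z-up (paper/CAD): Z is "up", Y is "away from you".
    # Y-up (typical screen): Y is "up", Z is depth.
    # Swapping the two axes maps one convention onto the other.
    # (This also flips handedness; negate one axis if you need to keep it.)
    return (x, z, y)

def orthographic_project(x, y, z):
    # Simplest possible projection: throw the depth axis away,
    # i.e. X, Y, Z really do become just X and (screen) Y.
    return (x, y)

# A point one unit "up" in the Z-up world...
px, py = orthographic_project(*z_up_to_y_up(0.0, 2.0, 1.0))
print(px, py)  # 0.0 1.0 -- it ends up one unit up the screen
```

So yes: in a Z-up field the projection effectively keeps X and Z, it's just that after the swap everyone calls the kept axes X and Y again.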
Who invented this, why did she do it, and where do I send my official letter of complaint?
nushell scripts aren’t shellscripts?