What worries me is that companies are using “the AI fucked up” as an excuse and just… not fixing the problem. They’re using it as an accountability shield.
IMO this was always the reason for it. It’s the ultimate scapegoat and from the second you saw a headline that said “AI is responsible for…” or “AI did…” and not “Humans used AI to…” it was all over.
Humans are using AI to justify wage suppression, mass layoffs, janky everything and we just gonna blame software and data centers. It’s humans, it always was and at least for the foreseeable future it’s always gonna be.
It’s like all those articles that read “The vehicle struck…” instead of “The driver struck…”, “A shooting then took place…” instead of “The officer then shot…”, etc, etc.
It’s a deflection of blame and whenever I see it it makes my blood boil.
That’s what companies always do.
The very purpose of creating most companies is to limit the liability of shareholders and staff.
It’s significantly easier to commit crimes with the knowledge that the system can’t come after your liberty or wealth for those crimes.