- Limiting Parole: A new law pushed by Louisiana Governor Jeff Landry cedes much of the power of the parole board to an algorithm that prevents thousands of prisoners from early release.
- Immutable Risk Score: The risk assessment tool, TIGER, does not take into account efforts prisoners make to rehabilitate themselves. Instead, it focuses on factors that cannot be changed.
- Racial Bias: Civil rights attorneys say the new law could disproportionately harm Black people in part because the algorithm measures factors where racial disparities already exist.
In fascism, the cruelty is the point. Using an algorithm to be cruel is an attempt to diffuse responsibility and dodge accountability. We can’t let society keep going in this direction, but how do we oppose the allure of cruelty?
“Our system has determined that you’re no longer eligible for support, as your submission of Form 792B was 0:56 late. Please resubmit your application at your earliest convenience. Our clients are important to us, and your well-being is our top priority. We’re currently experiencing a high volume of document submissions, and processing may take as long as 3 months. Please ensure that you have an adequate supply of life-sustaining medication during this time, as no other assistance will be available until your file is complete.”
A symptom of underfunded systems
“A Louisiana law cedes much of the power of the parole board to an algorithm…”
The people responsible for this imbecilic sloughing-off of responsibility need to be held to account. The stakes are too high to permit them to claim that “the brain warden got it wrong, but it’s out of our hands.” These are the types of situations that warrant standing directly in front of someone and forcing them to justify the damage they’ve caused, and to explain how they’re going to rectify their fuck-up and compensate the affected parties. What I’m hearing is alarmingly similar to news about current problems with medical insurance and the labour market.
Our age will eventually (assuming our species survives) be known as the age of responsibility diffusion. Companies, bureaucracies, AI: they are all just different mechanisms to achieve the same thing, detaching people from any sense of responsibility for the outcomes of their horrible choices.
If the responsibility is being offloaded to “the algorithm”, then doesn’t that mean they have less responsibility? Shouldn’t they be paid less if they have less responsibility?
Something tells me they won’t see the logic in that though.
“If you don’t pay me the big bucks for my experience and expertise, who else will tell the robot how to destroy your life?”
Automating this system with some kind of algorithm is not right, but the angle here is weird: a nearly blind 70-year-old can still do damage?
I know for a fact they’ve released “harmless old men” who then almost immediately go out and kill someone.
Might I suggest some light reading? https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction