Controversy… What controversy? It sounds more like blatant journalistic malpractice
A few years ago, blatant journalistic malpractice was a controversy.
That’s why he was fired
The article says “controversy” as if this is some cancel culture crap.
When I suggested he be fired on another thread I received several responses saying “he made a mistake” and “he was sick”, and many downvotes in return.
The comments here around this were so… Off. I guess nothing was certain, but we were supposed to believe that the author was too sick to write an article, yet was also writing an article and using an AI “tool” at the same time.
Hindsight is 20/20, but popular defenses at the time were
He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this.
I was the one who wrote that comment, and it was not an attempt to excuse all of his actions but a response to the following comment:
Someone deserves to be fired. Just imagine you’re paying someone to do a job and they just 100% completely outsource it to a machine in 5 seconds and then go home.
Here is the full comment that I wrote, including the part you snipped off at the end:
He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but this would have cost him vacation time, and the fact that he even was forced into making this choice is a systemic problem that is not being sufficiently acknowledged.
You know that the writer himself is quoted in the OP article, right?
Yes…? I saw his comments weeks ago, and smelled something off about them. And apparently Ars agreed, and determined they were lacking.
And now, “Edwards said he was unable to comment at this time.”
If he had Covid, then why was he working?
Sick time/PTO is a treasured resource here in the US. You don’t waste what little you might have on a silly thing like covid…
/s
I did not downvote you—my instance does not allow or show downvotes, which is really nice!—but he was sick, and he did make a mistake, and him being fired does not make either of those things false.
Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.
but he was sick, and he did make a mistake, and him being fired does not make either of those things false.
No, but I believe they were, nonetheless. Regardless, those things also do not excuse his actions, which is why I said he should be, and ultimately was, fired. And I think that’s a positive thing.
Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.
The point is, plenty of people were downvoting me and defending him (such as yourself), which is what made it “controversial”. I was explaining this to the person who was confused as to why it was controversial.
I agree that these things do not excuse his actions, but there was a tendency in that thread to paint him in the worst possible light, which I felt was uncalled for.
I am sad to see him fired from Ars, because I think there were mitigating circumstances (it is troubling that he felt the need to work while sick!), but on the other hand, given how badly he violated the trust placed in him, it is hard to see how Ars could have made any other choice.
More than violating the trust placed in him, it violates the trust readers put in Ars as a publication.
I agree, that is a better way of putting it.
Amazing. Just great.
Imagine being confronted for lying and just going “hey it was an accident okay I didn’t MEAN to deceive people, I just used the machine known for deceiving people and willingly put my name on its deceptions and it deceived people!” and having people defend you.
Actually, he completely admitted to and took full responsibility for his mistake; at no point did he offer an excuse, only an explanation.
To the extent I was defending him, it was because people insisted on painting him in the worst possible light, and on misinterpreting his explanation as an excuse, not because I think that everything that he did was okay.
You do have a point, after reading the article. That’s a bit embarrassing for me, honestly. Ragebait got me again, it seems…
“Malpractice” would have been not pulling the story/issuing a retraction.
It seems like he had humility, but he put his name on an article that had false content that he didn’t verify. That’s not a mistake so much as it is neglect of due diligence. Simply checking if the important citations in his article were true would have saved him, but he didn’t. I can only imagine how many journalists do this without getting caught.
Oh my bad I thought we were talking about the entire Ars team, not the individual author.
I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.
It’s time to hold journalism to a higher standard, and this idea that “well, they do alright” and “it was only once” is bullshit sliding into madness.
Just the facts, folks.
The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.
This person was simply fired because they didn’t catch the false information, and not because they used the tools forced upon them.
To be fair to Ars Technica, that doesn’t sound like the case to me.
The “journalist” in question seems to be suggesting that this was their own bad judgment to use AI to “find relevant quotes” from the source material.
Having said that, there’s also a senior editor on the byline who hasn’t been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit, and verify the contents of the article. So in a way Ars seems to have a quality problem whether or not the use of AI was mandated.
Ars is owned by Conde Nast, which has multiple whistleblowers saying AI is being forced on them. I think that’s kind of relevant.
Is there any evidence this is happening at Ars Technica? They’re pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn’t mean it’s happening at Ars. Conde Nast publications seem to be run pretty independently. Take The New Yorker, their content remains amazing and seems fully independent.
Most companies have AI forced, either directly or indirectly (“you need to double your output, AI can help…” kind of thing)
It’s relevant in a situation where the author has not accepted responsibility.
Absolutely not. Ars has a no AI policy, it’s the exact opposite. Guessing you are a nice little bot.
A fucking moron who runs around calling everything a bot when you disagree with whatever the topic is.
It’s the new CyberTruck of online insecurity.
Hope that’s “good” enough for you.
Main character moment.
Whoa. There are actually consequences? ArsTechnica is actually sorry??
No, the worker was fired, while the executive whose job it is to make sure that submitted work is correct was not.
The executives will get a bonus this year.
Copy editing won’t be an executive’s job. But yeah, they didn’t do the bare minimum, which is concerning; it seems to indicate that they may not do the bare minimum on all of their articles. How much stuff went undiscovered?
I’m not going to outright say that journalists shouldn’t use AI to write articles, because that’s basically an unenforceable rule, but there should be someone at some point whose ultimate responsibility is to make sure that the articles are at least factual, whether they were written by a human or not. Determining whether a quote is legitimate is pretty easy: you just Google the quote, and if you can’t find any other sources, you start to ask questions. As I said, it’s the bare minimum they could have done.
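That bare-minimum check really is trivial to mechanize. As a rough illustration (the function name and sample text here are hypothetical, not anything Ars actually uses), a sketch of verifying that a quoted string actually appears in the source material, ignoring whitespace and casing differences:

```python
import re


def quote_appears_in_source(quote: str, source_text: str) -> bool:
    """Return True if the quote appears verbatim in the source text,
    ignoring differences in whitespace and letter case."""
    def normalize(s: str) -> str:
        # Collapse runs of whitespace and lowercase for a forgiving match.
        return re.sub(r"\s+", " ", s).strip().lower()

    return normalize(quote) in normalize(source_text)


# Hypothetical example: one real quote, one fabricated one.
source = "The maintainer said the patch was rejected because it lacked tests."
print(quote_appears_in_source("the patch  was rejected", source))    # True
print(quote_appears_in_source("the maintainer was hostile", source))  # False
```

Anything fancier (paraphrase detection, fuzzy matching) takes more work, but even this crude verbatim check would have flagged a quote that appears nowhere in the cited blog post.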
The executives will get a bonus this year.
well of course! they just saved a lot of money on wages, they deserve it!
I think the executive in question is Kyle Orland, who I don’t know personally but I’ve interacted with sometimes. He’s pretty good! Again, as I’ve said elsewhere in this thread, maybe I’m too close. I’ve never worked for either of them, but I’ve encountered them on social media from time to time. I think I interacted with Kyle concerning a Storybundle book once.
Journalistic integrity? On my internet? Well I never.
Seems fair. Was a pretty big fuck up. Might deter others from making similar fuck ups.
As they should
Obviously the use of an LLM was a terrible decision, but I think in this context we can also blame some countries’ lack of sick pay.
AI - damned if you do and damned if you don’t. And it’s not just journalism affected.
In this case it was very much NOT “damned if you do, damned if you don’t”. It’s just: don’t.
As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully. There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.
The fact that this was a story on AI misuse in the first place only adds insult to injury.
There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.
Except that there is a requirement in Conde Nast to use AI.
As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully.
That is what the AI is supposed to be for.
They can’t have it both ways - either they demand AI and accept the consequences, or they give sufficient resources to staff to complete their work without it.
And yet, if you don’t, you will be undercut by the grossly subsidized AI and out of a job, either individually if your management leans AI or the whole enterprise if they don’t, replaced by the AI slop factories.
Yeah. But there’s always the risk of being undercut by someone or something cheaper if you’re operating in a workplace with zero standards. After all, you could write a lot of articles if you didn’t give a rat’s ass about the veracity or quality of the information within.
Good newsrooms are supposed to have standards–that’s what makes them good.
If the people at Ars had done their jobs to a high standard, the article in question wouldn’t have been written like that in the first place, let alone edited and published as is. They want to fire the writer in question, and the writer wants to blame being sick, but the fact remains that the publishing of that article reveals a systemic problem with how Ars are operating, and a total lack of editorial standards.
The elite don’t need the masses to be informed; they need them placated and oblivious or confused about what is happening, so that they support what is contrary to their interests and idolize the elite. Good newsrooms don’t serve the purposes of those that own them. AI producing slop with embedded propaganda serves them.

It has only just begun. Watch young people on TikTok, sopping up the numbing propaganda. It is the future, now controlled by US elites. Like programmers who know their code, accountants who know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few: those who can afford quality, free from slop and misinformation. But that’s not the audience of Ars.
Or, you know, double-check that the quotes given to you by the experimental AI “quote extractor” tool are accurate?
He is (was) their go-to AI reporter. It’s not like they handed the assignment to an intern and said “go nuts.”
And the article was about AI fabricating an attack on a developer that rejected its PR.
The whole point of using AI is that it’s a search tool and that is the verification.
Otherwise there’s no point in using it.
And you can guarantee Conde Nast demands journalists use AI all the time.
I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever, but across writing, programming, etc., the average productivity gain for work of similar quality, with or without LLMs, is typically negligible at best.
Why are we blaming AI here instead of the journalist?
I mean they fired the guy, and the guy took full responsibility for the errors. If that’s not blaming the journalist, I don’t know what is.
Tbf, I didn’t read the article. But the title mentions “controversy.” Also, are people so lazy they can’t make up their own fake quotes? Was AI really needed here?
Tbf, I didn’t read the article. But the title…
Say no more. Please
Are people so lazy they can’t even bother to read the headline? Maybe an AI would’ve been useful here to generate its own defense.
Being too lazy to read is one thing, not being too lazy to then comment is a whole other kind of existence.
JFC RTFM