• Zarxrax@lemmy.world · 3 months ago

      I mean they fired the guy, and the guy took full responsibility for the errors. If that’s not blaming the journalist, I don’t know what is.

    • jaybone@lemmy.zip · 3 months ago

        Tbf, I didn’t read the article. But the title mentions “controversy.” Also are people so lazy they can’t make up their own fake quotes? Was AI really needed here?

  • nutsack@lemmy.dbzer0.com · 2 months ago (edited)

    I would fire them and hope that they are blacklisted from ever working in journalism again

    • rodneylives@lemmy.world · 2 months ago

      I’ve interacted with Benj Edwards on social media for some time. He’s done lots of good work! He’s on (or maybe used to be on) Mastodon and Bluesky. He runs Vintage Computing and Gaming, and has written good articles for several prominent places. I’ve said as much in multiple forums; I feel like maybe I’ve been going on a crusade.

      I haven’t seen many others defending him. I’m really torn up over this. He had a weak moment. He was sick (I mean, literally). A few other people, notably Cory Doctorow and Paul Ford, have written LLM-defending pieces. And the AI hype has been deafening.

      It’s amazing, though, that so soon after he used AI, it immediately hallucinated something job-ending. I knew it was really bad, but I didn’t know it was THAT bad. You get the sense, with so many people talking positively about it, that the hallucinations must be something that happens, what, maybe 5% of the time?

      To me, it seems like the kind of mistake that he should be able to apologize for, promise not to do it again, and move on. But we’ve all had our good will taken advantage of for so long by malicious actors, like how Gamergate was used as a wedge to push loathsome politics onto a legion of young males. It feels like we can’t give anyone the benefit of the doubt any more.

      I don’t know. I know I’m influenced by all the good work he’s done. I feel like that shouldn’t all be thrown away.

      • nutsack@lemmy.dbzer0.com · 2 months ago (edited)

        why the fuck wouldn’t a journalist double check the things that the AI is returning? in what universe is this even considered journalism? it’s so crazy to me that I can’t imagine how it even happened. it’s too stupid for my imagination

      • partofthevoice@lemmy.zip · 2 months ago

        5% of the time? LLMs, from their own perspective, are only capable of hallucinating. There’s no difference in what they’re doing between cases we call “hallucinating” and “correct.” It’s the same process.

      • mojofrododojo@lemmy.world · 2 months ago (edited)

        Cory Doctorow … have written LLM-defending pieces.

        citation requested because everything I’ve seen them write is opposed

  • Kissaki@feddit.org · 2 months ago

    “futurism has confirmed”. Later in the article: “reached out to three parties, no replies and no comment”.

    Huh? So how did they confirm?

      • deltaspawn0040@lemmy.zip · 2 months ago

        It seems like he had humility, but he put his name on an article that had false content that he didn’t verify. That’s not a mistake so much as it is neglect of due diligence. Simply checking if the important citations in his article were true would have saved him, but he didn’t. I can only imagine how many journalists do this without getting caught.

    • artyom@piefed.social · 3 months ago

      When I suggested he be fired on another thread I received several responses saying “he made a mistake” and “he was sick”, and many downvotes in return.

      • Totally Human Emdash User@piefed.blahaj.zone · 3 months ago

        I did not downvote you—my instance does not allow or show downvotes, which is really nice!—but he was sick, and he did make a mistake, and him being fired does not make either of those things false.

        Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.

        • artyom@piefed.social · 3 months ago (edited)

          but he was sick, and he did make a mistake, and him being fired does not make either of those things false.

          No, but I believe they were, nonetheless. Regardless, those things also do not excuse his actions, which is why I said he should be, and ultimately was, fired. And I think that’s a positive thing.

          Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.

          The point is, plenty of people were downvoting me and defending him (such as yourself), which is what made it “controversial”. I was explaining this to the person who was confused as to why it was controversial.

          • Totally Human Emdash User@piefed.blahaj.zone · 3 months ago

            I agree that these things do not excuse his actions, but there was a tendency in that thread to paint him in the worst possible light, which I felt was uncalled for.

            I am sad to have seen him fired from Ars because I think there were mitigating circumstances—it is troubling that he felt the need to work while sick!—but on the other hand, given how badly he violated the trust placed in him, it is hard to see how Ars could have made any other choice.

      • XLE@piefed.social · 3 months ago

        The comments here around this were so… off. I guess nothing was certain, but we were supposed to believe that the author was too sick to write an article, yet also well enough to be writing an article and using an AI “tool” at the same time.

        Hindsight is 20/20, but popular defenses at the time were

        He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this.

        • Totally Human Emdash User@piefed.blahaj.zone · 3 months ago

          I was the one who wrote that comment, and it was not an attempt to excuse all of his actions but a response to the following comment:

          Someone deserves to be fired. Just imagine you’re paying someone to do a job and they just 100% completely outsource it to a machine in 5 seconds and then goes home.

          Here is the full comment that I wrote, including the part you snipped off at the end:

          He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but this would have cost him vacation time, and the fact that he even was forced into making this choice is a systemic problem that is not being sufficiently acknowledged.

          • Lawnman23@lemmy.world · 2 months ago

            Sick time/PTO is a treasured resource here in the US. You don’t waste what little you might have on a silly thing like covid…

            /s

      • deltaspawn0040@lemmy.zip · 3 months ago

        Amazing. Just great.

        Imagine being confronted for lying and just going “hey, it was an accident, okay, I didn’t MEAN to deceive people, I just used the machine known for deceiving people and willingly put my name on its deceptions and it deceived people!” and having people defend you.

        • Totally Human Emdash User@piefed.blahaj.zone · 3 months ago

          Actually, he completely admitted to and took full responsibility for his mistake; at no point did he offer an excuse, only an explanation.

          To the extent I was defending him, it was because people insisted on painting him in the worst possible light, and on misinterpreting his explanation as an excuse, not because I think that everything that he did was okay.

          • deltaspawn0040@lemmy.zip · 2 months ago (edited)

            You do have a point, after reading the article. That’s a bit embarrassing for me, honestly. Ragebait got me again, it seems…

  • tidderuuf@lemmy.world · 3 months ago

    I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.

    It’s time to hold journalism to a higher standard, and this idea that “well, they do alright” and “it was only once” is bullshit sliding into madness.

    Just the facts, folks.

    • just_another_person@lemmy.world · 3 months ago (edited)

      The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.

      This person was simply fired because they didn’t catch the false information, and not because they used the tools forced upon them.

      • ExcessShiv@lemmy.dbzer0.com · 3 months ago (edited)

        Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a pretty crucial task and ability for an actual journalist though. It is probably one of the most important parts of their job to verify the correctness of their sources and what they write regardless of whether or not they use AI tools.

        • tangeli@piefed.social · 2 months ago

          You’re absolutely correct. But the problem is bigger than the rogue journalist. Separation of duties is a well known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). It is necessary but not sufficient to hold just the journalist who used AI responsible for the publication of false statements.

        • just_another_person@lemmy.world · 3 months ago

          Then maybe they shouldn’t be using these tools in the first place. Other Conde Nast employees have already been blowing the whistle about this, which is funny because they sued all the AI companies for stealing content.

          Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.

          • ExcessShiv@lemmy.dbzer0.com · 3 months ago

            Then maybe they shouldn’t be using these tools in the first place

            I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility because they’re instructed to use LLM tools.

      • Fmstrat@lemmy.world · 2 months ago

        Absolutely not. Ars has a no AI policy, it’s the exact opposite. Guessing you are a nice little bot.

        • just_another_person@lemmy.world · 2 months ago (edited)

          A fucking moron who runs around calling everyone a bot when you disagree with whatever the topic is.

          It’s the new CyberTruck of online insecurity.

          Hope that’s “good” enough for you.

      • MountingSuspicion@reddthat.com · 3 months ago

        I don’t work at Ars, and maybe you know something I don’t, but I have seen nothing to suggest that they’re one of the companies doing that. It seems like they are pretty open about how they do not allow AI to be used in the process. Have they said something to indicate otherwise and I just missed it?

      • mrmaplebar@fedia.io · 3 months ago

        To be fair to Ars Technica, that doesn’t sound like the case to me.

        The “journalist” in question seems to be suggesting that this was their own bad judgment to use AI to “find relevant quotes” from the source material.

        Having said that, there’s also a senior editor on the byline who hasn’t been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit and verify the contents of the article. So in a way Ars seems to have a problem with quality whether or not the use of AI was mandated.

          • protist@mander.xyz · 3 months ago

            Is there any evidence this is happening at Ars Technica? They’re pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn’t mean it’s happening at Ars. Conde Nast publications seem to be run pretty independently. Take The New Yorker: their content remains amazing and seems fully independent.

          • Railcar8095@lemmy.world · 3 months ago

            Most companies have AI forced, either directly or indirectly (“you need to double your output, AI can help…” kind of thing)

    • Kissaki@feddit.org · 3 months ago

      and “it was only once” is bullshit

      They checked and then fired the author. I don’t see how this is “it was only once” implying nothing changed and it will happen again. Isn’t firing the author “holding journalism to a higher standard” already, which you ask for?

      • tangeli@piefed.social · 2 months ago

        Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It’s fair that the person responsible for this was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality control processes around the author been fired?

        • 5gruel@lemmy.world · 2 months ago

          in what world would independent fact checking down to the level of individual quotes be feasible for an online magazine? you can’t be serious.

            • 5gruel@lemmy.world · 2 months ago

              I highly doubt that. how would that even work? a third party to the publisher would have to check every statement before the issue goes to print. I can’t imagine this happening for anything that is not research papers or official reports.

              but I’m happy to learn something new.

              • Bronzebeard@lemmy.zip · 2 months ago

                This can and should be done internally; why would it need to be a third party? Any publisher that cares about their reputation does it anyway. Fact-checkers are a real thing. They routinely follow up on interviews to make sure authors aren’t bullshitting.

  • tangeli@piefed.social · 3 months ago

    AI - damned if you do and damned if you don’t. And it’s not just journalism affected.

      • vrek@programming.dev · 3 months ago

        I don’t know, but the implication the other poster is making is “a human can write 2 articles, an AI can write 5; I’m being asked for 5, which is impossible. I can use AI and risk trusting it, or not meet my required output and also get fired.”

        I made up those numbers, but that’s the accusation. You are damned if you use the AI to meet your goals. You are damned if you don’t meet your goals.

        • tangeli@piefed.social · 3 months ago

          My wife is an accountant. She went to a seminar today where they were told to start using AI or get out of the way. They were shown an AI that can produce consolidated annual accounts and financial statements in a few minutes, that it takes her and the auditors a month to produce. And they look very good! The company is unlikely to pay her and wait for the quality reports she has been producing for years. She’s on notice: start prompting the AI or move on. The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future.

          • Archer@lemmy.world · 3 months ago

            Did they actually check if the generated stuff was correct? I’m betting it isn’t

            • paequ2@lemmy.today · 3 months ago

              Nobody ever does. All AI demos are just: look at this mountain of generated text.

              • No checks if the mountain is correct
              • No checks if the mountain is a maintainable design
              • No checks if the mountain was even needed AT ALL
          • artyom@piefed.social · 3 months ago

            The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future

            LOL there’s no “glorious future”, they’re just going to rat fuck themselves, because those accounts are going to be riddled with errors.

          • Fedizen@lemmy.world · 3 months ago (edited)

            What company does she work for so I can stay clear of that impending hallucinatory clusterfuck?

            • tangeli@piefed.social · 3 months ago

              Her company has been good, though a recent restructuring is worrying. The advice came to an assembly of CFOs. The problem is much bigger than her company. This was the second professional development guidance she has received in the past month, promoting AI. I give her examples of unreliability and advise caution. At the session, they advised that no one should study programming or accounting any more. My advice was that they should study how to audit and that use of AI would make effective audit much harder than it has been, but also more necessary. The clusterfuck is going to affect everyone, unfortunately. You can’t avoid it by avoiding her company.

          • vrek@programming.dev · 3 months ago

            Ouch! Tell her I’m sorry, and I’m sorry for you too. All the accountants I worked with did a lot more than just reports. Not to mention that sounds great until the AI says 2+4 = 2*4 and now the company owes 20 billion in taxes…

            Plus, in a lot of cases people don’t submit records in identical formats; the number of Excel workbooks I’ve seen where the data was on “sheet 2” for some unknown reason…

            Maybe it’s just me, but I always provided raw data on sheet 1, analyzed data on sheet 2, and, if needed, complicated formulas on sheet 3. I would be willing to bet their AI would break on that format.
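            For what it’s worth, “which sheet actually holds the data” is the kind of question plain code answers deterministically, no model required. A toy heuristic in Python, assuming the workbook has already been loaded as a name-to-rows mapping (the data here is made up):

```python
def find_data_sheet(workbook):
    """Guess which sheet holds the raw data.

    `workbook` maps sheet name -> list of rows, as any spreadsheet
    reader would give you. Heuristic: the sheet with the most
    non-empty cells wins. Deterministic and auditable, unlike
    asking a model to guess.
    """
    def filled_cells(rows):
        return sum(1 for row in rows for cell in row if cell not in (None, ""))

    return max(workbook, key=lambda name: filled_cells(workbook[name]))

# Toy workbook: the data landed on "Sheet2" for some unknown reason.
wb = {
    "Sheet1": [["", ""], ["", ""]],
    "Sheet2": [["date", "amount"], ["2024-01-01", 100], ["2024-01-02", 250]],
}
print(find_data_sheet(wb))  # → Sheet2
```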

      • tangeli@piefed.social · 3 months ago

        Best if you don’t if quality is more important than financial viability, but no one can compete financially with the flood of AI/LLM being given away for free or, at most, far below actual cost. It’s not good for anyone but the billionaires, but have you noticed how much wealth they have accumulated in the past few years? It’s very, very good for them.

        • MountingSuspicion@reddthat.com · 3 months ago

          I get where you’re coming from, but I think it’s important that ars has held this person accountable. They have a journalistic standard they are sticking to, which is that there should be no AI use, and there are repercussions for people who don’t abide. There’s not an extremely large cohort that is willing to spend more to avoid AI, but I am certainly part of it, and seeing ars hold this person accountable helps me know that I can trust and patronize them ethically. There are businesses out there unwilling to acquiesce to an AI first narrative, and I’m just worried that elements of doomerism are going to make people unwilling to believe those companies when they have every reason to believe them.

    • mrmaplebar@fedia.io · 3 months ago

      In this case it was very much NOT “damned if you do, damned if you don’t”: it’s just “don’t”.

      As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully. There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.

      The fact that this was a story on AI misuse in the first place only adds insult to injury.

      • tangeli@piefed.social · 3 months ago

        And yet, if you don’t, you will be undercut by the grossly subsidized AI and out of a job, either individually if your management leans AI or the whole enterprise if they don’t, replaced by the AI slop factories.

        • mrmaplebar@fedia.io · 3 months ago

          Yeah. But there’s always the risk of being undercut by someone or something cheaper if you’re operating in a workplace with zero standards. After all, you could write a lot of articles if you didn’t give a rat’s ass about the veracity or quality of the information within.

          Good newsrooms are supposed to have standards–that’s what makes them good.

          If the people at Ars had done their jobs to a high standard, the article in question wouldn’t have been written like that in the first place, let alone edited and published as is. They fired the writer in question, and the writer wants to blame being sick, but the fact remains that the publishing of that article reveals a systemic problem with how Ars are operating, and a total lack of editorial standards.

          • tangeli@piefed.social · 3 months ago

            The elite don’t need the masses to be informed, they need them to be placated and oblivious or confused about what is happening, so they support what is contrary to their interests - idolize and support the elite. Good newsrooms don’t serve the purposes of those that own them. AI producing slop with embedded propaganda serves them. It has only just begun. Watch young people on TikTok, sopping up the numbing propaganda. It is the future - now controlled by US elites. Like programmers who know their code, accountants that know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few - those that can afford quality, free from slop and misinformation. But that’s not the audience of Ars.

      • ThomasWilliams@lemmy.world · 2 months ago

        There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.

        Except that there is a requirement in Conde Nast to use AI.

        As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully.

        That is what the AI is supposed to be for.

        They can’t have it both ways - either they demand AI and accept the consequences, or they give sufficient resources to staff to complete their work without it.

    • Fedizen@lemmy.world · 3 months ago

      I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever, but across writing, programming, etc., the average productivity gain is typically negligible if the work has to be of similar quality with or without LLMs.

          • themachinestops@lemmy.dbzer0.com · 3 months ago (edited)

            Oh, you’re right, my mistake. I guess unit testing and debugging are useful; I did use Copilot to find a missing slash. It’s also useful for revising emails and paragraphs, though of course you have to review the output. It should never be used for scientific research or journalism, though. And of course none of this justifies the investment in LLMs; we should focus on more useful things like AlphaFold.

    • resipsaloquitur@lemmy.world · 3 months ago

      Or, you know, double-check that the quotes given to you by the experimental AI “quote extractor” tool are accurate?

      He is (was) their go-to AI reporter. It’s not like they handed the assignment to an intern and said “go nuts.”

      And the article was about AI fabricating an attack on a developer that rejected its PR.
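      That kind of double-check is mechanical: a quote that came out of an “extraction” step should be a literal substring of the source. A minimal sketch in Python (hypothetical data, not the actual tool involved):

```python
def verify_quotes(quotes, source_text):
    """Return the quotes that do NOT appear verbatim in the source.

    A quote "extracted" from a document should be a literal substring
    of that document; anything else is a fabrication or a paraphrase
    that needs a human to check it.
    """
    def normalize(s):
        # Straighten curly quotes and collapse whitespace so trivial
        # formatting differences don't cause false alarms.
        s = s.replace("\u2018", "'").replace("\u2019", "'")
        s = s.replace("\u201c", '"').replace("\u201d", '"')
        return " ".join(s.split())

    haystack = normalize(source_text)
    return [q for q in quotes if normalize(q) not in haystack]

# Toy example: one real quote, one hallucinated one.
source = "The maintainer wrote: \u201cI closed the PR because the tests failed.\u201d"
quotes = [
    "I closed the PR because the tests failed.",  # verbatim in source
    "I closed the PR out of personal spite.",     # fabricated
]
print(verify_quotes(quotes, source))  # → ['I closed the PR out of personal spite.']
```

      A check like this catches outright fabrications in seconds; it obviously can’t catch quotes that are real but taken out of context, which still needs a human.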

      • ThomasWilliams@lemmy.world · 2 months ago

        The whole point of using AI is that it’s a search tool and that is the verification.

        Otherwise there’s no point in using it.

        And you can guarantee Conde Nast demands journalists use AI all the time.

  • ParadoxSeahorse@lemmy.world · 3 months ago

    Obviously the use of an LLM was a terrible decision, but I think in this context we can also blame some countries’ lack of sick pay.

    • Blue_Morpho@lemmy.world · 3 months ago

      No, the worker was fired and the executive whose job title is making sure that the work submitted is correct was not fired.

      The executives will get a bonus this year.

      • rodneylives@lemmy.world · 2 months ago

        I think the executive in question is Kyle Orland, who I don’t know personally but I’ve interacted with sometimes. He’s pretty good! Again, as I’ve said elsewhere in this thread, maybe I’m too close. I’ve never worked for either of them, but I’ve encountered them on social media from time to time. I think I interacted with Kyle concerning a Storybundle book once.

      • WhyJiffie@sh.itjust.works · 2 months ago

        The executives will get a bonus this year.

        well of course! they just saved a lot of money on wages, they deserve it!

      • Echo Dot@feddit.uk · 2 months ago

        Copy editing won’t be an executive’s job. But yeah, they didn’t do the bare minimum, which is concerning; it seems to indicate that they may not do the bare minimum on any of their articles. How much stuff went undiscovered?

        I’m not going to outright say that journalists shouldn’t use AI to write articles, because it’s basically an unenforceable rule, but there should be someone at some point whose ultimate responsibility is to make sure that the articles are at least factual, whether they were written by a human or not. Determining whether a quote is legitimate is pretty easy: you just Google the quote, and if you can’t find any other sources, you start to ask questions. As I said, it’s the bare minimum they could have done.