• ZILtoid1991@lemmy.world · ↑23 ↓5 · 6 days ago

    There should be only one exception: in case someone needs an example of AI-generated text.

    • UnderpantsWeevil@lemmy.world · ↑8 · 6 days ago

      LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.
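
      For instance, here is a minimal sketch of that first-pass mapping, assuming an OpenAI-style chat client; the model name and prompt are illustrative, and per Wikipedia’s policy a fluent human still has to review the draft:

      ```python
      # First-pass translation sketch. The model name is illustrative;
      # any OpenAI-compatible chat API works the same way.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def first_pass_translate(text: str, source: str, target: str) -> str:
          """Ask the model for a draft translation. The draft still
          needs review by someone fluent in both languages."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative; substitute any chat model
              messages=[
                  {"role": "system",
                   "content": f"Translate the user's text from {source} to "
                              f"{target}. Preserve the meaning exactly; do "
                              f"not add, remove, or embellish content."},
                  {"role": "user", "content": text},
              ],
          )
          return response.choices[0].message.content

      draft = first_pass_translate("Bonjour le monde", "French", "English")
      print(draft)  # a human reviewer checks this against the source
      ```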

  • eletes@sh.itjust.works · ↑4 ↓2 · 6 days ago

    There should be a Wikipedia LLM whose sole purpose is to check that the tone of the text is objective and matches Wikipedia standards.

    The LLM should flag any changes it would make, and if the changes are above a threshold, the edit should be escalated for further review by a human.
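
    A rough sketch of that gate in Python; nothing here is an existing Wikipedia tool, and the threshold and report shape are made up for the example:

    ```python
    # Hypothetical tone-review gate: if the LLM would change too much,
    # escalate the edit to a human reviewer. Nothing here is a real
    # Wikipedia API; it just sketches the idea above.
    from dataclasses import dataclass

    REVIEW_THRESHOLD = 0.3  # fraction of sentences the LLM would rewrite

    @dataclass
    class ToneReport:
        flagged_sentences: list[str]  # sentences marked as non-neutral
        total_sentences: int

        @property
        def change_ratio(self) -> float:
            return len(self.flagged_sentences) / max(self.total_sentences, 1)

    def needs_human_review(report: ToneReport) -> bool:
        """Escalate when the suggested changes exceed the threshold."""
        return report.change_ratio > REVIEW_THRESHOLD

    # Example: a (hypothetical) LLM flagged 2 of 5 sentences.
    report = ToneReport(flagged_sentences=["...", "..."], total_sentences=5)
    if needs_human_review(report):
        print("Edit queued for human review")  # 0.4 > 0.3
    ```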

    • The Velour Fog @lemmy.world · ↑19 · 7 days ago

      You’re not working on anything, clanker.

      For those wondering, check the timestamps in this account’s comment history, especially comments from 4 days ago or older: fully formatted multi-paragraph comments posted 10-30 seconds apart. This is an LLM-controlled account.
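
      That cadence check is easy to mechanize. A minimal sketch, assuming you already have the account’s comment timestamps; the sample data and the 30-second threshold are illustrative:

      ```python
      # Flag accounts whose multi-paragraph comments land implausibly
      # close together. Timestamps and threshold are made-up examples.
      from datetime import datetime

      timestamps = [
          datetime(2025, 1, 10, 12, 0, 5),
          datetime(2025, 1, 10, 12, 0, 22),  # 17 s after the previous one
          datetime(2025, 1, 10, 12, 0, 48),  # 26 s later
      ]

      def suspicious_cadence(times: list[datetime],
                             max_gap_s: float = 30.0) -> bool:
          """True when every consecutive pair of comments is closer
          together than a human could plausibly write long replies."""
          gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
          return bool(gaps) and all(g <= max_gap_s for g in gaps)

      print(suspicious_cadence(timestamps))  # True for this sample
      ```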

      • Echo Dot@feddit.uk · ↑2 · 6 days ago

        Yeah you can tell because the comment doesn’t really say anything. It’s just a lot of text but no actual meaning.

        • The Velour Fog @lemmy.world · ↑1 · 6 days ago

          Yup, one of the main hallmarks of AI-generated slop, and one that’s often hard to explain unless you have an example like the above in front of you. A lotta words, but very little substance.

  • infeeeee@lemmy.zip · ↑370 ↓2 · 7 days ago

    Saved you a click:

    After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy cautions, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

    The second exception covers translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

    • arcine@jlai.lu · ↑3 · 6 days ago

      Treating it like a tool instead of treating it like a God. What a novel idea!

    • Rioting Pacifist@lemmy.world · ↑239 ↓1 · 7 days ago

      AIbros: we’re creating God!!!

      AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

      • halcyoncmdr@piefed.social · ↑95 · 7 days ago

        The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.

        The “AI” is just streamlining the process to save time.

        Relying on it otherwise is stupid and just proves instantly that you are incompetent.

        • rumba@lemmy.zip · ↑3 · 6 days ago

          This is absolutely the case, and honestly, at least for now, it’s how things need to be across the board.

          No one should be using AI to do things they’re incapable of doing (or undoing).

        • 7101334@lemmy.world · ↑1 · 6 days ago

          > Relying on it otherwise is stupid and just proves instantly that you are incompetent.

          Relying on it in any circumstances (though medical use is understandable if you’re simply too poor or don’t have access) while it is exhausting water supplies and polluting the planet is stupid, and instantly proves that you are inconsiderate too.

        • Zagorath@quokk.au · ↑13 ↓1 · 7 days ago

          > the user needs to be smart enough to do whatever they’re asking anyway

          I’m gonna say that’s ideal but not strictly necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can, but the skill extends more broadly: it’s easier to verify a result than it is to obtain that result. Think of how film critics don’t need to be filmmakers, or of the P=NP question in computer science.
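
          To make the P=NP point concrete, here’s subset-sum in miniature: checking a proposed answer is linear time, while finding one by brute force is exponential:

          ```python
          from itertools import combinations

          def verify(numbers: list[int], certificate: list[int],
                     target: int) -> bool:
              """Verification is cheap: confirm the proposed subset
              came from the input and sums to the target."""
              pool = list(numbers)
              for x in certificate:
                  if x not in pool:
                      return False
                  pool.remove(x)
              return sum(certificate) == target

          def solve(numbers: list[int], target: int):
              """Search is expensive: brute force over all 2^n subsets."""
              for r in range(len(numbers) + 1):
                  for subset in combinations(numbers, r):
                      if sum(subset) == target:
                          return list(subset)
              return None

          nums = [3, 34, 4, 12, 5, 2]
          print(solve(nums, 9))           # [4, 5], found the slow way
          print(verify(nums, [4, 5], 9))  # True, checked the fast way
          ```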

          • Aralakh@lemmy.ca · ↑2 · 6 days ago

            This is where domain expertise would come in, no? It speeds up the work, but it usually outputs generic content, plus whatever else it injects while hallucinating. So the validation part holds up, I’d say.

          • Pyro@programming.dev · ↑16 · 7 days ago

            But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.

            • fartographer@lemmy.world · ↑1 · 6 days ago

              If you’re unable to brute-force verification (research, testing, consulting the ancient texts), that’s where you stop what you’re doing and take a breath. Then consult an expert. Just like in the film critic analogy, it’s easier to verify than to create, so you’re saving the expert time and effort while learning about something you were obviously already passionate enough about to have started this endeavor.

                • fartographer@lemmy.world · ↑1 · 5 days ago

                  As someone who codes, I specifically didn’t say “always” because of course it’s not always true, especially in cases of “garbage in, garbage out.”

                  But there’s still an argument to be made about mental load and context: I’d argue that planning solutions and then writing the code is generally more taxing than having someone hand you suggested solutions with semi-complete code or pseudo-code and then identifying the roadblocks yourself.

                  On the other hand, if someone you trust unexpectedly hands you hallucinated garbage, you’re likely to spin your wheels trying to identify what they did.

            • Zagorath@quokk.au · ↑9 · 7 days ago

              At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.

              • EldritchFemininity@lemmy.blahaj.zone · ↑2 · 6 days ago

                You were thinking logically about a normal production chain. In that case, QA or whoever says “This is wrong, rework it and correct the issue,” and that’s that. With AI, it does the whole thing over again and may or may not come back with the same issue, or an entirely new one.
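
                In code, the difference is roughly a regenerate-and-recheck loop with no convergence guarantee; `generate` and `passes_review` below are hypothetical stand-ins for the model and the reviewer:

                ```python
                import random

                def generate(task: str) -> str:
                    # Stand-in for an LLM call; every retry is a fresh
                    # generation, not a targeted fix.
                    return f"{task}: draft #{random.randint(1, 100)}"

                def passes_review(draft: str) -> bool:
                    # Stand-in for the human/QA check; arbitrary criterion.
                    return draft.endswith(("0", "2", "4", "6", "8"))

                def rework_until_ok(task: str, max_attempts: int = 5):
                    for _ in range(max_attempts):
                        draft = generate(task)
                        if passes_review(draft):
                            return draft  # may pass on try 1, try 5, or never
                    return None  # escalate to a human after repeated failures

                print(rework_until_ok("Summarize the policy"))
                ```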

      • youcantreadthis@quokk.au (banned from community) · ↑10 · 7 days ago

        Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.

      • XLE@piefed.social · ↑5 · 7 days ago

        I don’t think AI users would say it does reformatting either (if they’re honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. It should only take getting burned once to learn that lesson.
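
        One defensive check is to normalize away the formatting you asked for and diff what’s left. A sketch; the normalization rule depends on what kind of reformatting you requested:

        ```python
        import re

        def normalized_words(text: str) -> list[str]:
            """Collapse whitespace and punctuation so only the words
            remain; adjust to match the reformatting you asked for."""
            return re.findall(r"\w+", text.lower())

        def silently_rewritten(original: str, reformatted: str) -> bool:
            return normalized_words(original) != normalized_words(reformatted)

        before = "The policy took effect.\nIt has two exceptions."
        after = "The policy took effect. It has a couple of exceptions."

        print(silently_rewritten(before, after))  # True: "two" was rewritten
        ```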

    • MissesAutumnRains@lemmy.blahaj.zone · ↑43 ↓1 · 7 days ago

      Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

    • errer@lemmy.world · ↑10 ↓9 · 7 days ago

      Wikipedia probably wants to sell access to its content for LLM training. That access is only valuable if Wikipedia remains a high-quality, slop-free source.

      I think even AI zealots agree there should be silos of fully human-generated content to train from. Training slop on slop makes the slop even worse.

    • FauxPseudo @lemmy.world · ↑3 ↓3 · 7 days ago

      Seems like there should be a third exception, for those occasions where the article is about LLM-generated text. Editors should be able to quote it when it’s appropriate for an article.

      • Zagorath@quokk.au · ↑6 · 7 days ago

        That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.

        Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.

        • ricecake@sh.itjust.works · ↑3 · 6 days ago

          Eh, that’s not quite original research. There are plenty of other examples of images and sound files created specifically for Wikipedia. A representative example isn’t research; it’s just indicating what something is.

          The Wikipedia article on AI slop and generative AI has a few instances of representative content that illustrate a sourced statement rather than serve as evidence.

          It’s similar to the various charts and animations.

  • SpaceNoodle@lemmy.world · ↑84 · 7 days ago

    An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

  • Sunless Game Studios@lemmy.world · ↑42 ↓1 · 7 days ago

    I know at least one writing major who won an award for his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI; they need people like him.

  • Mwa@thelemmy.club · ↑15 ↓1 · 7 days ago

    W Wikipedia. It would be better to remove the exceptions, but it’s fine tbh.

  • davidgro@lemmy.world · ↑7 ↓1 · 7 days ago

    I had hoped the exceptions would be something like “Quoted example text of LLM output, when it’s clearly labeled and styled separately from the article text.”

  • webp@mander.xyz · ↑12 ↓6 · 7 days ago

    Why do they need AI at all? Wikipedia existed long before it and was doing fine.

    • AmbitiousProcess (they/them)@piefed.social · ↑31 ↓2 · 7 days ago

      You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.

      …except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.

      It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.

  • Phoenixz@lemmy.ca · ↑7 ↓2 · 7 days ago

    So in other words, when used responsibly as a tool with limitations, AI has its uses? Though very environmentally unfriendly ones?

  • hperrin@lemmy.ca · ↑5 ↓4 · 7 days ago

    Good news. Hopefully they’ll get rid of those two exceptions in the future.

    • JohnEdwa@sopuli.xyz · ↑13 ↓1 · 7 days ago

      It would be pretty shitty to have to disable any AI-based grammar/spellcheckers (e.g. Grammarly) every time you edit Wikipedia, and to be barred from using translation tools.

      Because those are the two exceptions.

      • antonim@lemmy.world · ↑3 ↓6 · 7 days ago

        Spell- and grammar-checking is useless anyway. If you don’t have at least one word underlined in red in every sentence, you’re not writing anything intellectually serious. 🧐

        • Warl0k3@lemmy.world · ↑7 · 7 days ago

          Spelling/grammar checking and machine translation have been in use on Wikipedia for decades; the only difference is that AI has improved the usefulness of these tools for first-pass editing. I don’t believe the policy has even changed: you still had to be fluent in the language if you were using the old-style MTL tools, too.

          Aside from generating videos of young girls with gigantic titties, this is the only thing generative AI is actually useful for.

          • hperrin@lemmy.ca · ↑1 · 6 days ago

            I still think it should be banned. It’s prone to just making shit up, so it’s not useful for any sort of professional work. If you had a guy named Al who would work for free, but sometimes just made stuff up to make you happy, would you let Al work on important things?