• Wispy2891@lemmy.world · 1 month ago

    I found a workaround for this:

    I start with “a buggy LLM wrote this piece of code…”, then I paste my code for review, so they can shit on it and bash it: “You’re absolutely right: that LLM made a disaster. This is a mess. Look how inefficient this function is; here is how it can be improved…”
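
    If you want to script the trick, here’s a rough sketch (the model name and exact wording are just my placeholders, use whatever you like):

    ```python
    # Sketch of the "blame a buggy LLM" framing, using the openai client.
    # Model choice and prompt wording are placeholders, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def roast_my_code(code: str) -> str:
        prompt = (
            "A buggy LLM wrote this piece of code. "
            "Review it harshly and point out everything wrong with it:\n\n"
            + code
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    ```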

  • pinchy@lemmy.world · 1 month ago

    SO: “that’s a stupid question!” GPT: “that’s a great question!”

  • katy ✨@piefed.blahaj.zone · 1 month ago

    piefed is working on solution and answer features and i can’t wait for stackoverflow-like communities without the ai “enhancements”

    • MonkeMischief@lemmy.today · 1 month ago

      This sounds pretty exciting and I keep hearing more and more about piefed lately. I’m kinda excited for this new burgeoning era of the federated 'Net!

  • mstrk@lemmy.world · 1 month ago

    I usually combine both to unblock myself. Lately, SO, repository issues, or just going straight to the documentation of the package/crate seem to give me faster outcomes.

    People have suggested that my prompts might not be optimal for the LLM. One even recommended I take a prompt engineering boot camp. I’m starting to think I’m too dumb to use LLMs to narrow my research sometimes. I’m fine with navigating SO toxicity, though it’s not much different from social media in general. It’s just how people are. You either take the best you can from it or let other people’s bad days affect yours.

    • marcos@lemmy.world · 1 month ago

      If SO doesn’t have the answer to your question, LLMs won’t either. You can’t improve that by prompting “better”.

      They are just an easier way to search for it. They don’t make answers up (or rather, they do, but when they do, they’re always wrong).

    • smh@slrpnk.net · 1 month ago

      I’ve been having good luck with Kimi K2 for CSS/bootstrap stuff, and boilerplate API calls (example: update x to y, pulling x and y from this .csv). I appreciate that it cites its sources because then I can go read more and hopefully become more self-reliant when looking up documentation.
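
      To give a sense of the boilerplate I mean, here’s a sketch of the CSV-driven update (the endpoint, payload shape, and column names are made up for the example):

      ```python
      # Sketch: read (x, y) pairs from a CSV and push each update to an API.
      # URL, payload shape, and column names are hypothetical placeholders.
      import csv

      import requests

      API_URL = "https://example.com/api/items"  # hypothetical endpoint

      with open("updates.csv", newline="") as f:
          for row in csv.DictReader(f):  # expects "x" and "y" columns
              resp = requests.patch(
                  f"{API_URL}/{row['x']}",   # x identifies the record
                  json={"value": row["y"]},  # y is the new value
                  timeout=10,
              )
              resp.raise_for_status()
      ```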

  • rozodru@piefed.social · 1 month ago

    it’s getting worse too.

    Just this morning I asked Claude a very basic question, and it hallucinated the answer three times in a row, with zero correct solutions. First, it hallucinated what a certain CLI daemon does; second, it offered an alternative that it had hallucinated entirely, since the thing didn’t exist at all; third, it hallucinated how another application works, along with the git repo for said application (A. the application doesn’t even do the thing Claude described, and B. the repo it provided had NOTHING to do with the application it described). I just gave up, went to my SearX, and found the answer myself. I shouldn’t have been so lazy.

    ChatGPT isn’t much better anymore.