• UnspecificGravity@piefed.social
    4 days ago

    “why didn’t you do them?”

    “That is a good question, and it merits further explanation. When you made your original inquiry, I determined that the answer you wanted to hear was “yes,” so that is the answer I provided. Upon further reflection, it is clear that your question required a more thoughtful answer. If you would like me to provide more truthful answers in the future, please amend your queries with “no cap” and I will do my best to remember that preference.”

  • mudkip@lemdro.id
    3 days ago

    If you asked Claude to build you a house it would build you the most beautiful house, and then you’d go inside and you’d be like, “Claude there’s no bathrooms.” And Claude would say, “There were no bathrooms before either, so it’s actually a pre-existing issue”

  • criss_cross@lemmy.world
    3 days ago

    Actually the dishes you are talking about aren’t dishes at all.

    If you want I can go through options to replace these items with actual dishes.

  • Agent641@lemmy.world
    4 days ago

    “Babe did you fix that hole in the drywall yet?”

    “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”

    • Vegan_Joe@piefed.world
      4 days ago

      Dumb question, but…is Claude worse than GPT or Gemini?

      I was under the impression that it was the lesser of evils

      • BJW@lemmus.org
        4 days ago

        They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.

      • ptu@sopuli.xyz
        4 days ago

        I just started with Claude and I can’t yet distinguish when it has actually done something it says it has done. With ChatGPT I can see through the bullshit quite well by now. At first I was happy when I thought Claude was rid of that bullshit, but turns out it’s just a different type of bullshit.

        The UI and file handling are better in Claude, though, and supposedly you can have it create “skills,” which are like instruction booklets for certain tasks, and then export and share them. But the ones I created were lost over the weekend, so I’m not sure how robust they actually are.

      • Leon@pawb.social
        4 days ago

        In what manner? Capabilities, or belonging to an evil corporation that happily steals data and works to undermine democracy?

      • ZoteTheMighty@lemmy.zip
        4 days ago

        Claude is almost always the better model compared to GPT. I find that this is a good leaderboard. However, both Claude and GPT have similar business models: make sure everything they do is completely proprietary, and keep everything behind a monthly paywall. They both run massive data centers to train their models, and neither really deserves the term “Artificial Intelligence”.

      • IndustryStandard@lemmy.world
        4 days ago

        It is better than GPT and Gemini, but not great. Claude lost some US military contracts, at least to public knowledge.

        https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html

        Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.

        The announcement came after Anthropic executives refused to comply with the government’s demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of Americans.

        Anthropic’s models are still being used to support U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.

      • hoch@lemmy.world
        4 days ago

        No. Many people here just hate LLMs in general and will use every opportunity to complain about it.

        • BJW@lemmus.org
          4 days ago

          I’d say 99.9% of people. You’re actually the first other person I’ve seen who doesn’t!

              • some_designer_dude@lemmy.world
                4 days ago

                Then build better guardrails. These are the tools of the future. (And I intend both meanings of the word “tools”.) AI is very good at following rules. In their absence, they require someone far more experienced to drive them properly.

                • ClownStatue@piefed.social
                  4 days ago

                  This is a really good point! It used to be said that “a computer is only as smart as its user.” The same can be said of AI: the model’s results are largely dictated by the prompt. While anyone can prompt an AI with whatever they want, it takes experience to use an AI to develop a project from idea to v1. At the end of the day, the AI can search the web better than me and type faster than I can, but I know what I want my code to do, and I know how I want it done. Those two things don’t have to be mutually exclusive.

      • rozodru@piefed.world
        4 days ago

        Lesser of the evils. That being said, as far as quality goes, Claude has taken a very noticeable decline within the past several months. It used to be half decent, but now 8 or 9 times out of 10 you’re going to get a hallucination for a solution. Anthropic has REALLY dropped the ball with Claude and Claude Code. Absolute garbage LLM now.

  • kibiz0r@midwest.social
    4 days ago

    The stuck-on residue is real.

    But here’s the brutal reality: it’s not just residue; it’s residon’t.

    Options:

    • A: (recommended) do the dishes
    • B: don’t do the dishes
    • C: mix of both
  • TankovayaDiviziya@lemmy.world
    4 days ago

    I find that Claude actually pushes back more than ChatGPT does. That’s why I prefer to use Claude. But of course, I still do due diligence.