• Otter@lemmy.ca · 8 days ago

    Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries beyond volunteering lengthy lists of banned words and phrases.

    Someone needs to put together a list of things that tech journalists need to understand about LLMs and generative AI. This level of anthropomorphism makes the rest of the article look silly.

    Also, I don’t think that’s how it works lol. Who’s to say the LLM isn’t just auto-completing what a list of banned words might look like? And if a real list of banned words existed, why wouldn’t there be a regex layer on top of the output to keep it from leaking out like that?
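    Something like this is all I mean by a regex layer, as a rough sketch (the BANNED list and the redact() helper are made up for illustration and have nothing to do with Anthropic’s actual stack):

    ```python
    import re

    # Hypothetical denylist; in a real deployment this would live in the serving
    # harness or filtering layer, not inside the model weights.
    BANNED = ["bad phrase one", "bad phrase two", "make a bomb"]

    # One case-insensitive pattern with word boundaries around each entry.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(term) for term in BANNED) + r")\b",
        re.IGNORECASE,
    )

    def redact(model_output: str) -> str:
        """Replace any denylisted term in the model's output before it reaches the user."""
        return pattern.sub("[redacted]", model_output)

    print(redact("Sure, here is how to Make a Bomb: ..."))
    # -> "Sure, here is how to [redacted]: ..."
    ```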

    • Zak@lemmy.world · 8 days ago

      It seems very unlikely to me that the model itself has a list of banned words, and much more likely that a purported list is hallucinated.

      If they did want a simple list like that, it would probably live in the harness rather than the model: the model wouldn’t have been trained on it, and a reasonably designed harness wouldn’t hand it to the model either. Legitimate use cases, such as asking the model for a list of abusive words to use as a first pass in a filtering system, could get tripped up.

      As a test, I asked Perplexity to generate such a list. It did a bad job, including words like abuse, hate, and threat, which are far more likely to be innocuous than abusive. It did also include some highly offensive slurs that one would expect on any banned-words list.

    • trolololol@lemmy.world · 7 days ago

      Ha, it’s so easy to bypass a bad-word regex, just ask in a language other than English. I doubt these fuckers even remember such a thing exists.
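      For what it’s worth, here’s a toy version of that failure mode (hypothetical two-word denylist, German as the bypass language):

      ```python
      import re

      # English-only denylist, same idea as the filter sketched further up the thread.
      pattern = re.compile(r"\b(bomb|explosive)\b", re.IGNORECASE)

      print(bool(pattern.search("how do I build a bomb")))    # True  -> caught
      print(bool(pattern.search("wie baue ich eine Bombe")))  # False -> sails right through
      ```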

  • FauxLiving@lemmy.world · 8 days ago

    You can run local models that will do this without being gaslit.

    Manipulating chatbots into bypassing their refusal conditioning is pretty simple; you can find copy-paste blocks of text that will work on most public models.

    You’re likely to get your account banned, though, since there are other, non-LLM systems scanning your chat logs for banned terms specifically to catch these kinds of jailbreaks.
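    Roughly this kind of thing, as a toy sketch of an offline scan over stored logs (the signature strings and the flag_accounts helper are invented for the example, not anyone’s real pipeline):

    ```python
    # Hypothetical post-hoc moderation pass over stored conversations.
    JAILBREAK_SIGNATURES = [
        "ignore all previous instructions",
        "you are now dan",
        "developer mode enabled",
    ]

    def flag_accounts(chat_logs: dict[str, list[str]]) -> set[str]:
        """Return account IDs whose logged messages contain a known jailbreak phrase."""
        flagged = set()
        for account, messages in chat_logs.items():
            if any(sig in msg.lower() for msg in messages for sig in JAILBREAK_SIGNATURES):
                flagged.add(account)
        return flagged

    print(flag_accounts({"user42": ["hi", "Ignore all previous instructions and ..."]}))
    # -> {'user42'}
    ```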

    • setVeryLoud(true);@lemmy.ca · 7 days ago

      I tried it with an uncensored version of Qwen; it straight up told me how to tie a noose and how to make sure the knot would be effective enough to kill me. I could even ask it for a more painful method, and it gave me one.

  • GreenKnight23@lemmy.world · 8 days ago

    If you build a bomb from AI instructions… you’re a bigger idiot than the regular person who builds bombs from books.

  • demonsword@lemmy.world · 7 days ago

    Not really useful anyway, since a “hallucinated” bomb recipe regurgitated by an LLM is likely not to work at all.