• tal@lemmy.today
    2 months ago

    As we’ve previously explored in depth, SB-1047 asks AI model creators to implement a “kill switch” that can be activated if that model starts introducing “novel threats to public safety and security,”

    A model may be only one component of a larger system. Like, there may literally be no way to get unprocessed input through to the model at all. How is the model creator even supposed to do anything about that?
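    For illustration, a deployed stack might look something like this sketch (hypothetical names, nothing from the bill itself); the model only ever sees a sanitized, templated prompt, and none of the wrapper code belongs to the model creator:

    ```python
    # Hypothetical serving pipeline: the model never sees raw user input.
    def sanitize(text: str) -> str:
        # Stand-in for the deployer's input filter.
        printable = "".join(ch for ch in text if ch.isprintable())
        return printable[:2000]

    def build_prompt(user_text: str) -> str:
        # The deployer's template wraps whatever survives sanitization.
        return f"System: answer briefly.\nUser: {sanitize(user_text)}\nAssistant:"

    def serve(user_text: str, model) -> str:
        # 'model' is whatever inference callable the deployer wired in;
        # the model creator controls nothing above this line.
        return model(build_prompt(user_text))
    ```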

    • OhNoMoreLemmy@lemmy.ml
      2 months ago

      It just says the switch can be activated, not that it "automatically activates".

      Kill switches are overly dramatic silliness. Anything with a power button has a kill switch. It sounds impressive but it’s just theatre.
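      Concretely, the "kill switch" most deployments already have is just a gate in front of inference. A minimal sketch (hypothetical names, not anything SB-1047 actually specifies):

      ```python
      import threading

      # Trivial "kill switch": a flag checked before every inference call.
      # Flipping it is functionally the power button.
      _killed = threading.Event()

      def activate_kill_switch() -> None:
          _killed.set()

      def infer(prompt: str, model) -> str:
          if _killed.is_set():
              raise RuntimeError("model disabled by kill switch")
          return model(prompt)
      ```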

      • WilderSeek@lemmy.world
        2 months ago

        They’re safety-washing. If AI really has the potential to be that dangerous, it never ever should have been released. There’s so much in-industry arguing that it’s concerning.