• growsomethinggood ()@reddthat.com · 3 months ago

    Sigh, ACLU, is this really that high a priority in the list of rights we need to fight for right now? Really?

    Also, am I missing something, or wouldn’t these arguments fall apart under the lens of slander? If you make an AI replica of someone’s face and/or voice that is convincing enough to be indistinguishable from reality, and use it to say untrue things about them, how is that speech materially different from directly claiming “So-and-so said x” when they didn’t? Or worse, making videos of them doing something terrible, or out of character, or even mundane? If that counts as speech potentially covered by the First Amendment, it is slander imo. Even parody has to be somewhat distinct from reality to avoid being slander/libel, so why would this be different?

    • Even_Adder@lemmy.dbzer0.com · 3 months ago

      It’s important we don’t let people of influence insulate themselves from criticism using this as a scapegoat.

    • Timii@biglemmowski.win · 3 months ago

      For me it’s more the risk that making real and fake speech indistinguishable renders any speech meaningless.

    • gravitas_deficiency@sh.itjust.works · 3 months ago

      I stopped supporting the ACLU when they wrote an op-ed supporting Citizens United.

      They have their heads firmly up their asses on a handful of fairly crucial issues.

  • AbouBenAdhem@lemmy.world · 3 months ago

    As Mary Anne Franks, a George Washington University law professor and a leading advocate for strict anti-deepfake rules, told WIRED in an email, “The obvious flaw in the ‘We already have laws to deal with this’ argument is that if this were true, we wouldn’t be witnessing an explosion of this abuse with no corresponding increase in the filing of criminal charges.”

    We’re certainly witnessing an explosion of media coverage of abusive deepfakes, as with coverage of everything else AI-related. But if there’s no increase in criminal cases, what’s the evidence that the “explosion” is more than that?

  • azl@lemmy.sdf.org · 3 months ago

    Look at this in the same light as the Second Amendment: bearing arms was more compatible with society when “arms” were mechanically limited in their power and capability. Gun laws have since matured to some degree, restricting or banning the higher-powered weaponry available today.

    Maybe slander/defamation protections are not agile or comprehensive enough to curtail the proliferation of AI-generated material. It is certainly much easier to malign or impersonate someone now than ever before.

    I really don’t think software will ever be successfully restricted by the government, but the hardware behind it might end up with some form of firmware-based lockout that limits AI capabilities to approved models carrying a certificate signed by the hardware maker (after vetting the submission for legally mandated safety or anti-abuse features).

    But the horse has already left the barn. Even the current level of generative AI is fully capable of fooling just about anyone, and it won’t be stopped without advances in AI detection tools or some very aggressive changes to the law. Here come the historic GPU bans of the late ’20s!

    • General_Effort@lemmy.world · 3 months ago

      Yeah, freedom of the press was fine when only a few rich people could afford their own printing press, but now that everyone has internet, we really need to do something about that. These stupid peasants ruin everything.

      > I really don’t think software will ever be successfully restricted by the government, but the hardware that is behind it might end up with some form of firmware-based lockout technology that limits AI capabilities to approved models providing a certificate signed by the hardware maker (after vetting the submission for legally-mandated safety or anti-abuse features).

      Exactly! All computers must be bugged and surveilled 24/7. Telescreens in every room must be 2-way. That is the lesson of George Orwell’s 1984.

  • technocrit@lemmy.dbzer0.com · edited · 3 months ago

    The ACLU sounds like a good idea until one realizes that “civil rights” is a vague category with almost no connection to actual freedom.

    These are the same people who sold out humans for corporate “freeze peach” in Citizens United. Etc. There’s no point in supporting a bunch of irrational ideologues.