Internet Watch Foundation has found a manual on dark web encouraging criminals to use software tools that remove clothing. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

    • rebelsimile@sh.itjust.works
      7 months ago

      I think the question is: should we have designed the internet in such a way that it would be impossible to find bomb plans on it? Honestly, I don’t think the internet would be what it is if that level of filtering and censorship were possible. Child porn is reprehensible in any form. To me, it makes more sense to blame the moron with the hammer than to blame the hammer.

    • Grimy@lemmy.world
      7 months ago

      What you are asking for is equivalent to stopping people from writing literotica about children using Word.

      Nobody is advocating for child literotica or defending it, but most understand that it would take draconian measures to stop it: Word would have to be entirely online, and everything written would have to pass through a filter to verify it isn’t something illegal.

      By its very nature, a generative model is very difficult to strip of such things. The one solution I can think of would be to take children out of the models entirely.

      The problem is that this isn’t the solution being proposed. Sadly, all the legislation currently on the table is meant to do one thing: create and cement a monopoly around AI.

      I’m ready to tackle every issue involving AI, but the main one right now is a handful of companies trying to rip it out of our hands by playing on people’s emotions. Once that’s dealt with, we can take care of the 0.01% of users who are generating CP.