• Ferk@lemmy.ml · 9 hours ago (edited)

    Thanks for the link, and the clarification (I didn’t know about April 2026)… although it’s still confusing, to be honest. In your link they seem to allude to this just being a way to maintain a voluntary detection that is “already part of the current practice”…

    If that were the case, then at what point does “the new law force [chat providers] to have systems in place to catch or have data for law enforcement”? Will services like Signal, SimpleX, etc. really be forced to monitor the contents of the chats?

    I don’t find any discussion in the link of situations in which providers would be forced to do chat detection. My understanding from reading that transcript is that there’s no forced requirement on the providers to do this, or am I misunderstanding?

    Just for reference, below is the relevant section translated (emphasis mine).

    In what form does voluntary detection by providers take place, she asks. The exception to the e-Privacy Directive makes it possible for services to detect online sexual images and grooming on their services. The choice to do this lies with the providers of services themselves. They need to inform users in a clear, explicit and understandable way about the fact that they are doing this. This can be done, for example, through the general terms and conditions that must be accepted by the user. This is the current practice. Many platforms are already doing this and investing in improving detection techniques. For voluntary detection, think of Apple Child Safety — which is built into every iPhone by default — Instagram Teen Accounts and the protection settings for minors built into Snapchat and other large platforms. We want services to take responsibility for this themselves. That is an important starting point. Under the current proposal, this possibility would be made permanent.

    My impression from reading the Dutch is that they are opposing this because of the lack of “periodic review” power the EU would retain if this voluntary detection were made permanent. So they aren’t worried about services like Signal/SimpleX, which wouldn’t do detection anyway, but about the services that might opt to actually do detection but do so without proper care for privacy/security… or that will use detection for purposes that don’t warrant it. At least that’s what I understand from the statement below:

    Nevertheless, the government sees an important risk in making this voluntary detection permanent. By making the voluntary detection permanent, the periodic review of the balance between the purpose of the detection and privacy and security considerations disappears. That is a concern for the cabinet. As a result, we as the Netherlands cannot fully support the proposal.

    • DacoTaco@lemmy.world · 8 hours ago (edited)

      I’d need to look for it again, but I remember reading she was saying that the current proposal is vague about what it sees as required to prevent what she calls risks. I remember them asking her multiple times if she was against a law to prevent CSA and the sharing thereof, to which she replied multiple times that she was not, but that the law was too vague about what it considers necessary to prevent it. Did I dream it? ><

      Edit: found it!

      Ms Kathmann (GroenLinks-PvdA):
      It is not necessarily just a continuation of the current practice. For example, the proposal also contains sentences stating that all risks must be eliminated. What that means is incredibly vague, a very grey area. That is one thing, and it is really a very big risk. In addition, Mr Van Houwelingen just mentioned the point of age verification. We have not been able to properly discuss what exactly is on the table there and how we should deal with it going forward. Those are two things I am picking out right now.