Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to ‘Settings’ > ‘Apps’, then delete the application.”

  • lepinkainen@lemmy.world · +7/-11 · 3 days ago

    This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people’s privacy, and still it got shouted down.

    I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

    EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy on Lemmy.

    • Noxy@pawb.social · +18 · 4 days ago

      “it had a ridiculous amount of safeties to protect people’s privacy”

      The hell it did, that shit was gonna snitch on its users to law enforcement.

      • lepinkainen@lemmy.world · +1/-3 · 3 days ago

        Nope.

        A human checker would get a reduced-quality copy only after multiple CSAM matches. No police were to be called unless the human checker verified a positive match.

        Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked.
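
        For what it’s worth, here is a rough sketch of the flow being described, just to make the argument concrete. The threshold value, the names, and the placeholder functions are all made up; this is not Apple’s actual implementation, only the shape of the “threshold, then human review, then report” pipeline.

        ```python
        # Rough sketch of the "threshold, then human review" flow described above.
        # The threshold value, names, and placeholder functions are hypothetical;
        # this is not Apple's published design.
        from dataclasses import dataclass, field

        THRESHOLD = 30  # hypothetical number of matches before anyone looks

        @dataclass
        class Account:
            user_id: str
            match_count: int = 0
            low_res_copies: list = field(default_factory=list)  # reduced-quality derivatives

        def on_photo_hashed(account: Account, matches_known_csam: bool, low_res_copy: bytes) -> None:
            """Count matches; nothing is surfaced until the threshold is crossed."""
            if not matches_known_csam:
                return
            account.match_count += 1
            account.low_res_copies.append(low_res_copy)
            if account.match_count >= THRESHOLD:
                escalate_to_human_review(account)

        def escalate_to_human_review(account: Account) -> None:
            """A human checker sees only the reduced-quality copies."""
            if human_checker_confirms(account.low_res_copies):
                file_report(account.user_id)  # only a confirmed match goes any further
            # otherwise (cat pics, noise) it ends here: no report, no police

        def human_checker_confirms(copies: list) -> bool:
            ...  # placeholder for the manual check

        def file_report(user_id: str) -> None:
            ...  # placeholder for reporting to the relevant authority
        ```

        The point being argued is that a planted false positive stalls at the human check and never reaches the reporting step.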

    • Natanael@infosec.pub · +15 · 4 days ago

      Apple had it report suspected matches rather than warning locally.

      It got canceled because the fuzzy hashing algorithms turned out to be so insecure that the scheme was unfixable (it’s easy to plant false positives).
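
      To make “fuzzy hashing” concrete: below is a minimal sketch using a simple average hash, not Apple’s actual NeuralHash, and the file names are hypothetical. Because the hash is built so that visually similar images land close together, it can also be steered: an attacker can perturb an innocent-looking image until its hash lands on, or near, a targeted value, which is the false-positive planting problem.

      ```python
      # Minimal perceptual "average hash" (aHash). This is a toy stand-in for
      # the fuzzy hashes being discussed, not NeuralHash; file names are
      # hypothetical. Near-duplicate images produce nearby hashes, which is
      # exactly the property an attacker can exploit to plant collisions.
      from PIL import Image

      def average_hash(path: str, size: int = 8) -> int:
          """Downscale to size x size grayscale, then one bit per pixel:
          1 if the pixel is brighter than the mean, else 0."""
          img = Image.open(path).convert("L").resize((size, size))
          pixels = list(img.getdata())
          mean = sum(pixels) / len(pixels)
          bits = 0
          for p in pixels:
              bits = (bits << 1) | (1 if p > mean else 0)
          return bits

      def distance(a: int, b: int) -> int:
          """Hamming distance; a small value means 'same image' to the matcher."""
          return bin(a ^ b).count("1")

      # h1 = average_hash("original.jpg")
      # h2 = average_hash("recompressed_or_perturbed.jpg")
      # print(distance(h1, h2))  # small despite completely different bytes
      ```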

      • lepinkainen@lemmy.world · +1/-1 · 3 days ago

        They were not “suspected”; they had to be matches to actual CSAM.

        And after that, a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.

        So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.

        • Natanael@infosec.pub · +2 · 3 days ago

          Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.

          https://scaling-attacks.net/

          Also, even if those attacks were prevented, they’re still going to look through basically your whole album if you trigger the alert.
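
          To show what that looks like in practice, here is a toy version of the scaling-attack idea from the link above: only the pixels a nearest-neighbour downscaler actually samples get overwritten, so the full-size image still looks like the decoy while the downscaled copy shows the payload. A real attack has to match the victim pipeline’s exact resampling (bilinear, area, etc.); the file names and sizes here are hypothetical.

          ```python
          # Toy image-scaling attack: overwrite only the pixels a simple
          # centre-based nearest-neighbour downscaler reads, so the image looks
          # like the decoy at full size and like the payload after downscaling.
          # Real attacks target the victim's exact resampling algorithm.
          from PIL import Image

          def sample_coords(src_w: int, src_h: int, dst: int):
              """Source pixel read for each destination pixel by the toy downscaler."""
              for y in range(dst):
                  for x in range(dst):
                      sx = min(int((x + 0.5) * src_w / dst), src_w - 1)
                      sy = min(int((y + 0.5) * src_h / dst), src_h - 1)
                      yield (x, y), (sx, sy)

          def embed(decoy: Image.Image, payload: Image.Image, dst: int = 64) -> Image.Image:
              """Write each payload pixel at the coordinate the downscaler will read."""
              out = decoy.convert("RGB")
              payload = payload.convert("RGB").resize((dst, dst))
              for (x, y), (sx, sy) in sample_coords(*out.size, dst):
                  out.putpixel((sx, sy), payload.getpixel((x, y)))
              return out

          def downscale(img: Image.Image, dst: int = 64) -> Image.Image:
              """The matching toy downscaler; reads exactly the pixels embed() wrote."""
              small = Image.new("RGB", (dst, dst))
              for (x, y), (sx, sy) in sample_coords(*img.size, dst):
                  small.putpixel((x, y), img.getpixel((sx, sy)))
              return small

          # attacked = embed(Image.open("decoy.jpg"), Image.open("payload.jpg"))
          # attacked looks like the decoy; downscale(attacked) shows the payload
          ```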

          • lepinkainen@lemmy.world · +1/-1 · 3 days ago

            And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.

            No cops are called, no accounts closed.

            • Natanael@infosec.pub · +2 · 3 days ago

              The scaling attack specifically can make a photo sent to you look innocent to you but malicious to the reviewer; see the link above.

    • Modern_medicine_isnt@lemmy.world · +2/-2 · 4 days ago

      Overall, I think this needs to be done by a neutral 3rd party. I just have no idea how such a 3rd party could stay neutral. Same with social media content moderation.