scarily… They don’t need to be this creepy, but even I’m a tad baffled by this.

Yesterday a few friends and I were at a pub quiz; no phones allowed, of course, so none were used.

It came down to a tie-break question between my team and another: “What is the runtime of The Lord of the Rings: The Fellowship of the Ring, according to IMDb?”

We answered and went about our day. Today my friend from my team messaged me: the top post on his “today feed” is an article published 23 hours ago…

Forgive the pointless red circle… I didn’t take the screenshot.

My friend isn’t a privacy-conscious person by any means, but he didn’t open IMDb or google anything to do with the franchise, and hadn’t for many months prior. I’m aware it’s most likely an incredible coincidence, but when stuff like this happens I can easily understand why many people are convinced everyone’s doom brick is listening to them…

  • TurboHarbinger@feddit.cl · ↑7 · 9 hours ago

    Don’t forget phones give off location/profile data, and from that it’s easy to extrapolate the people you’re meeting. It doesn’t even need GPS; it just needs to see which wifi networks are visible and compare.

    More than listening, it deduces things based on the fingerprints people leave.
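
    To make that concrete, here’s a toy sketch in Python (made-up BSSIDs, not any vendor’s actual pipeline) of how overlapping wifi scans alone are enough to place two phones in the same room:

      # Toy co-location check: two phones report which wifi access points
      # (BSSIDs) they can currently see. Heavy overlap means same venue,
      # no GPS required. All identifiers below are invented.
      def jaccard(a: set[str], b: set[str]) -> float:
          """Overlap between two sets of visible access points (0.0 to 1.0)."""
          return len(a & b) / len(a | b) if (a | b) else 0.0

      phone_a = {"aa:bb:cc:01", "aa:bb:cc:02", "de:ad:be:ef", "12:34:56:78"}
      phone_b = {"aa:bb:cc:01", "aa:bb:cc:02", "de:ad:be:ef", "99:99:99:99"}

      if jaccard(phone_a, phone_b) > 0.5:
          print("same venue: link these two profiles for this hour")

    Scale that up to every handset reporting scans every few minutes and “who was at the pub with whom” falls out almost for free.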

  • TheFriar@lemm.ee · ↑10 · 14 hours ago

    I mean, I know everyone says it’s impossible for phones to be listening, but I feel like there are just too many examples for that to be the case. My friend was looking for something for our other friend’s birthday. Her husband suggested opening Instagram and talking about the thing she was looking for, describing the specific jacket, saying the company started with an “A.” Minutes later, she got an ad for the jacket she was looking for.

    When I was driving with some people from work, we were talking about Daddy Yankee and his song “Gasolina.” We were using maps to navigate home from an away job. On our route, suddenly there were multiple waypoints suggested on our map: “estaciones de gasolina” (gas stations). We were speaking English, and the person whose phone it was doesn’t speak Spanish.

    If they’re not listening, how could these things be possible?

  • BaumGeist@lemmy.ml · ↑41 ↓2 · 2 days ago

    Phones absolutely do listen, but not to audio via the mic. When Apple and Google tell you they respect your privacy, they mean they don’t harvest data directly from a live feed of the mic or camera; they still scan your files in some cases, harvest your browsing history, read your text message metadata, check your YouTube watch history, scan your contacts, check your location, and harvest hundreds of other tiny little data points that don’t seem like much but add up to a big profile of you, your behavior, and your psyche.

    So your friend was at a pub quiz with a couple dozen other people, and his phone knew where he was and who was nearby. A statistically significant portion of the people there were not privacy conscious and googled “Lord of the Rings runtime” or something similar. All that data got harvested by Google and Apple and processed, and then the most recent and best-fitting entry from some master list of customer sites’ articles was pushed to all their news feeds.
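
    A crude sketch of that kind of inference (every name and query below is invented; this is not anyone’s actual recommender): if enough profiles that were co-located with you searched a topic, a matching article gets pushed to you as well.

      # Toy interest propagation through co-location (invented data):
      # people who were near several users who searched a topic get that
      # topic pushed to their own feed, even though they never searched it.
      colocated = {"friend": ["user_a", "user_b", "user_c"]}   # from wifi/BT/location
      searches = {
          "user_a": ["lord of the rings runtime"],
          "user_b": ["fellowship of the ring length imdb"],
          "user_c": ["weather tomorrow"],
      }

      def nearby_topic_hits(person: str, topic_word: str) -> int:
          """Count co-located profiles whose recent searches mention the topic."""
          return sum(
              any(topic_word in q for q in searches.get(other, []))
              for other in colocated.get(person, [])
          )

      if nearby_topic_hits("friend", "ring") >= 2:
          print("push a Fellowship-runtime article to friend's feed")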

    Humans don’t understand intuitively how much information is being processed through nonverbal means at any given time, and that’s the disconnect large companies exploit when they say misleading things like “noooo, your phone isn’t listening to you.”

    But it’s totally not privacy invasive, because at no point along the line did a human view your data (/s)

    • lattrommi@lemmy.ml · ↑11 ↓1 · 2 days ago

      The person asking the trivia question needed to know the answer, so they could determine who was correct.

      Phones, as I understand them, average about 30 pings per second. That’s 30 times per second the phone is checking for signal strength with the nearest tower, among other data.

      They also work with any device that has wifi or Bluetooth to help with location triangulation. So anyone at trivia who had their phone on them and powered on had their position noted, as well as their proximity to others. If the location has smart TVs on the walls, those were picking up the pings as well. If they have internet available to customers, there’s another point picking up the info.

      It’s already been shown that a few companies have listened to microphones, but the amount of data being extrapolated is so large that listening to the microphone would be counterproductive and redundant. There are devices everywhere: security cameras, billboards, inside each row of shelves at your grocery store, in every car that has a computer, lights at intersections, smart watches and other IoT devices. Even appliances have wifi and Bluetooth these days: refrigerators, coffee pots, robot vacuums, treadmills; I could go on.

      It’s scary that some company might be listening to you through your phone’s microphone, but the really scary thing is that they don’t need to. They knew people at that trivia game would be searching for that answer before the question was even asked, without needing to listen in.

      • SkyezOpen@lemmy.world · ↑1 · 14 hours ago

        I hope they listen to me absolutely ripping ass. The idea that some corporate lackey who is noting what I say to feed me targeted ads just has his eardrums blown out by my booty thunder on a regular basis warms my heart.

  • Kanzar@sh.itjust.works · ↑60 · 2 days ago

    Other folks in the area searched it, and nearby Bluetooth as well as wifi tracking put them all in the same place. Same as old mate with the Spanish comment: he was hanging around in an area with folks who regularly look at stuff in Spanish.

    What you think might be spontaneous isn’t.

      • murmelade@lemmy.ml · ↑17 · 2 days ago

        What’s scary is that tracking tech is so good they don’t even need to listen to you to know what you’ve been talking/thinking about.

      • smpl@discuss.tchncs.de · ↑14 · edited · 2 days ago

        Your fellow competitors didn’t necessarily perform the search while they were at the pub. It could have been on the john when they got home. Your data profile is still tied to them right now.

  • John Doe@lemmy.world · ↑18 ↓3 · 2 days ago

    I’m an Android/Google/Pixel person. I have a Google Home speaker at work (self-employed barber/stylist) and was playing old classic country music a few weeks ago. My client mentioned that her husband’s favorite artist is Porter Wagoner and his favorite song is Cold Hard Facts Of Life. Well, guess what the very next song was? And now, ever since then, I’ve been inundated with that song. It plays constantly.

    • ⛓️‍💥@sh.itjust.works · ↑2 · edited · 13 hours ago

      I was watching a recording of Jeopardy (captured via antenna) on my private media server. This was not a recording of a recent episode. One of the answers was a band I’ve never heard of. The next day Pandora played a song by that band.

    • John Doe@lemmy.world · ↑1 · 20 hours ago

      Today I decided to check with a couple of local insurance agencies to see if I could get my family’s current coverage any cheaper. I never searched for this specific topic, only for contact info to reach out to a couple of agencies. Then I made two phone calls, sent two emails via the Gmail app including my current policies’ declaration pages, and I received one text message from an insurance agency. Now my news stream is flooded with ads for comparing insurance rates and changing companies.

  • Hemingways_Shotgun@lemmy.ca · ↑15 ↓3 · 2 days ago

    If you think your Apple phone isn’t listening to you, I have some seaside real estate I’d like to sell you in Montana.

  • IIRC maybe it wasn’t picked up on a mic, but if your friends all googled it after, it’s likely their devices and accounts were already associated with you, so online services will think maybe you’d be interested in it too.

  • Neuromancer49@midwest.social · ↑21 ↓10 · 2 days ago

    No no, they listen. How do you think the “Hey Google” feature works? It has to listen for the key phrase. Might as well just listen to everything else.

    I spent some time with a friend and his mother and spoke in Spanish for about two hours while YouTube was playing music. I had Spanish ads for 2 weeks after that.

    • moody@lemmings.world · ↑15 ↓1 · 2 days ago

      Your phone listens for the phrase “Hey Google” and uses little processing power to do so. If it was listening to everything and processing that information, your battery would die incredibly fast. We’re talking charging your phone multiple times a day even if you weren’t using it for anything else.
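
      As a back-of-the-envelope illustration (every number below is an assumption, not a measurement), compare always-on keyword spotting against hypothetical always-on full transcription, versus a typical phone battery:

        # Rough energy comparison with assumed figures (not measured values):
        # a wake-word detector drawing ~1 mW versus hypothetical full on-device
        # transcription drawing ~1 W, against an assumed ~15 Wh phone battery.
        BATTERY_WH = 15.0
        SCENARIOS = {"keyword spotting": 0.001, "full transcription": 1.0}  # watts

        for name, watts in SCENARIOS.items():
            wh_per_day = watts * 24
            print(f"{name}: {wh_per_day:.2f} Wh/day, "
                  f"{wh_per_day / BATTERY_WH:.0%} of the battery")

      Even if the real numbers are off by a factor of a few, the size of the gap is the point.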

      As someone else mentioned in another comment, being near the Spanish speakers’ phones plus Bluetooth/wifi tracking is what Google is using to track you. They search Google in Spanish, Google can tell you spend time with them, so Google thinks you speak Spanish.

      • tetris11@lemmy.ml · ↑1 · 2 days ago

        Your phone listens for the phrase “Hey Google” and uses little processing power to do so.

        I need some metrics on this. It must be recording at least some things above a certain volume threshold in order to process them.

        • moody@lemmings.world · ↑6 · 2 days ago

          I mean the microphone is active, so it’s listening, but it’s not recording/saving/processing anything until it hears the trigger phrase.

          The truth is they really don’t need to. They track you in so many other ways that actually recording you would be pointless AND risky. While most people don’t quite grasp digital privacy and Google can get away with a lot because of it, they do understand actual eavesdropping and probably wouldn’t stand for all their private moments being recorded.

          • tetris11@lemmy.ml · ↑2 · 2 days ago

            so it’s listening, but it’s not recording/saving/processing anything until it hears the trigger phrase.

            I think this is the part I hold issue with. How can you catch the right fish, unless you’re routinely casting your fishing net?

            I agree that the processing/battery cost of this process is small, but I do think that they’re not just throwing away the other fish, but putting them into specific baskets.

            I hold no issue with the rest of your comment.

            • Onihikage@beehaw.org · ↑5 · edited · 1 day ago

              How can you catch the right fish, unless you’re routinely casting your fishing net?

              It’s a technique called Keyword Spotting (KWS). https://en.wikipedia.org/wiki/Keyword_spotting

              This uses a tiny speech recognition model that’s trained on very specific words or phrases which are (usually) distinct from general conversation. The model being so small makes it extremely optimized even before any optimization steps like quantization, requiring very little computation to process the audio stream to detect whether the keyword has been spoken. Here’s a 2021 paper where a team of researchers optimized a KWS to use just 251uJ (0.00007 milliwatt-hours) per inference: https://arxiv.org/pdf/2111.04988

              The small size of the KWS model, required for the low power consumption, means it alone can’t be used to listen in on conversations; it outright doesn’t understand anything other than what it’s been trained to identify. This is also why you usually can’t customize the keyword to just anything, only to one of a limited set of words or phrases.
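
              As a purely illustrative sketch (invented weights, nowhere near a real production model), the whole job of a KWS boils down to a tiny fixed scoring function that answers one yes/no question about a short audio window:

                # Toy keyword spotter: a handful of fixed weights score a short
                # audio window and answer a single yes/no question. It has no
                # vocabulary, so it cannot transcribe anything else it hears.
                def features(window: list[float]) -> list[float]:
                    # stand-in for a real acoustic feature extractor (e.g. MFCCs)
                    n = max(1, len(window) // 4)
                    return [sum(abs(x) for x in window[i*n:(i+1)*n]) / n for i in range(4)]

                WEIGHTS = [0.9, -0.2, 1.1, 0.4]   # "trained" only to fire on the wake phrase
                BIAS = -1.2

                def wake_phrase_detected(window: list[float]) -> bool:
                    score = sum(w * f for w, f in zip(WEIGHTS, features(window))) + BIAS
                    return score > 0.0            # True -> hand control to the full assistant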

              This all means that if you’re ever given an option for completely custom wake phrases, you can be reasonably sure that device is running full speech detection on everything it hears. This is where a smart TV or Amazon Alexa, which are plugged in, have a lot more freedom to listen as much as they want with as complex of a model as they want. High-quality speech-to-text apps like FUTO Voice Input run locally on just about any modern smartphone, so something like a Roku TV can definitely do it.

              • tetris11@lemmy.ml · ↑2 · edited · 1 day ago

                I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.

                My question is, how often is audio sampled from the vicinity to allow such processing to happen?

                Given the near-immediate response of “Hey Google”, I would guess once or twice a second.

                • Onihikage@beehaw.org · ↑2 · 6 hours ago

                  I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.

                  Your stumbling block seemed to be that you didn’t understand how it was possible, so I was trying to explain that, but I may have done a poor job of emphasizing why the technique I described matters. When you said this in a previous comment:

                  I do think that they’re not just throwing away the other fish, but putting them into specific baskets.

                  That was a misunderstanding of how the technology works. With a keyword spotter (KWS), which all smartphone assistants use to detect their activation phrases, they aren’t catching any “other fish” in the first place, so there’s nothing to put into “specific baskets”.

                  To borrow your analogy of catching fish, a full speech detection model is like casting a large net and dragging it behind a ship, catching absolutely everything and identifying all the fish/words so you can do things with them. Relative to a KWS, it’s very energy intensive and catches everything. One is not likely to spend that amount of energy just to throw back most of the fish. Smart TVs, cars, Alexa, they can all potentially use this method continuously because the energy usage from constantly listening with a full model is not an issue. For those devices, your concern that they might put everything other than the keyword into different baskets is perfectly valid.

                  A smartphone, to save battery, will be using a KWS, which is like baiting a trap with pheromones only released by a specific species of fish. When those fish happen to swim nearby, they smell the pheromones and go into the trap. You check the trap periodically, and when you find the fish in there, you pull them out with a very small net. You’ve expended far less effort to catch only the fish you care about without catching anything else.

                  To use yet another analogy, a KWS is like a tourist in a foreign country where they don’t know the local language and they’ve gotten separated from their guide. They try to ask locals for help but they can’t understand anything, until a local says the name of the tour group, which the tourist recognizes, and is able to follow that person back to their group. That’s exactly what a KWS system experiences: it hears complete nonsense and gibberish until the key phrase pops out of the noise, which it recognizes clearly.

                  This is what we mean when we say that yes, your phone is listening constantly for the keyword, but the part that’s listening cannot transcribe your conversations until you or someone says the keyword that wakes up the full assistant.

                  My question is, how often is audio sampled from the vicinity to allow such processing to happen?

                  Given the near-immediate response of “Hey Google”, I would guess once or twice a second.

                  Yes, KWS systems generally keep a rolling buffer of audio a few seconds long, and scan it a few times a second to see if it contains the key phrase.
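
                  As a rough sketch of that loop (made-up parameters and placeholder functions, not anyone’s actual firmware):

                    # Rolling-buffer sketch: keep the last ~2 s of audio and run
                    # the tiny keyword spotter over it a few times a second.
                    # Older audio simply falls off the end of the buffer.
                    import time
                    from collections import deque

                    SAMPLE_RATE = 16_000
                    buffer = deque(maxlen=2 * SAMPLE_RATE)   # rolling ~2 s window

                    def read_microphone_chunk() -> list[float]:
                        # placeholder for the platform's real audio API
                        return [0.0] * (SAMPLE_RATE // 4)

                    def wake_phrase_detected(window: list[float]) -> bool:
                        # stand-in for the tiny yes/no model sketched further up
                        return False

                    while True:
                        buffer.extend(read_microphone_chunk())   # newest audio in, oldest out
                        if wake_phrase_detected(list(buffer)):
                            print("wake phrase heard: only now does the full assistant start")
                        time.sleep(0.25)                         # checked ~4 times a second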

      • The Doctor@beehaw.org · ↑3 · 2 days ago

        It would be pretty easy to test, too.

        Get a pre-paid phone. Set up a brand-new Google or Apple account. Activate phone using the new account. Put it through its paces for a few hours and note the ads you get.

        Shoot the shit with your friends and family with the phone on the table for a few hours.

        Put the phone through its paces again and note the ads you get.

    • Ethalis@jlai.lu · ↑10 ↓1 · 2 days ago

      The amount of processing power that would be needed to listen to the output of billions of devices 24/7 just to push ads wouldn’t make economic sense.

      • The Doctor@beehaw.org · ↑3 · 2 days ago

        AI acceleration ASICs are already in a lot of hardware these days. It doesn’t take a whole lot anymore for it to be both cheap and feasible.

  • surph_ninja@lemmy.world · ↑6 ↓1 · 2 days ago

    Phones absolutely listen. But they probably process the speech locally, unless there’s a trigger word flagged, and send mostly text.

    But then it was found that Google would upload the audio when a zipper sound was heard, so who knows how often you’re triggering spy conditions.

  • WIZARD POPE💫@lemmy.world · ↑5 ↓1 · 2 days ago

    I am convinced they do listen. I have had two instances of this. Once I was talking with my mom about some new bedsheets and covers. She later went to the store and sent me a picture to see if they were okay. I later got an ad for the exact same bedsheets and covers.

    Had another similar thing when I got an ad for some stuff we were just talking about with some people. Can’t remember what specifically.

  • bloubz@lemmygrad.ml · ↑5 ↓1 · 2 days ago

    Google does listen, what do you mean? They have a feature, in the form of their voice assistant, to make sure it can.

    • padlock4995@lemmy.ml (OP) · ↑4 · 2 days ago

      Yes, I’m aware they have the feature, but as others stated, listening 24/7 would require enormous levels of compute power that even Google wouldn’t see as economical.