• beeng@discuss.tchncs.de · 14 points · 13 days ago

    You’d think these centralised LLM search providers would be caching a lot of this stuff, eg perplexity or claude.

  • droplet6585@lemmy.ml · 39 points (1 down) · 13 days ago

      There are two prongs to this:

      1. Caching is an optimization strategy used by legitimate software engineers. AI dorks are anything but.

      2. Crippling information sources outside of service means information is more easily “found” inside the service.

      So if it was ever a bug, it’s now a feature.

      • jacksilver@lemmy.world · 16 points · 13 days ago

        Third prong: constantly looking for new information. Yeah, most of these sites may be basically static, but it’s probably cheaper and easier to just recrawl everything constantly.
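
        For what it’s worth, the caching the parent comments describe is cheap to implement: standard HTTP validators (ETag, Last-Modified, Cache-Control max-age) let a crawler reuse or revalidate a page instead of refetching it. A minimal sketch of that freshness logic, with field names of my own choosing:

        ```python
        import time


        def is_fresh(entry, now=None):
            """Return True if a cached response can be reused without contacting
            the origin. `entry` is a hypothetical cache record with 'fetched_at'
            (epoch seconds) and 'max_age' (seconds, from Cache-Control)."""
            now = time.time() if now is None else now
            return (now - entry["fetched_at"]) < entry["max_age"]


        def conditional_headers(entry):
            """Build revalidation headers for a stale entry. If the origin replies
            304 Not Modified, the crawler keeps its cached copy at the cost of a
            near-empty response instead of a full refetch."""
            headers = {}
            if entry.get("etag"):
                headers["If-None-Match"] = entry["etag"]
            if entry.get("last_modified"):
                headers["If-Modified-Since"] = entry["last_modified"]
            return headers
        ```

        A crawler that sent those two headers on recrawls would cost a mostly-static site almost nothing, which is part of why the “they just recrawl everything” behavior reads as careless rather than necessary.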