As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

  • Docus@lemmy.world · 1 month ago

    It’s not just the internet. For example, students are handing in essays straight from ChatGPT. University scanners flag them and the students may fail. But there is no good evidence on either side: the universities’ detection is unreliable (and unlikely to improve on false positives, or false negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated cover letters. Consultants probably hand LLM-based reports to clients. We’re doomed.

        • GBU_28@lemm.ee · 1 month ago

          You can still have extra allotted time, or be provided a wiped computer or tablet. Colleges dealt with these disabilities before LLMs.

      • Docus@lemmy.world · 1 month ago

        I don’t disagree, but it’s probably not that easy. Universities in my country no longer have the resources to run many oral exams, and depending on the subject, exams don’t test the same skills as coursework.