• peoplebeproblems@midwest.social · 22 days ago

Any AI model is technically a black box: there isn't a "human-readable" interpretation of the function it computes.

The data going in, the training algorithm, the encode/decode steps, that's all available.

But the model itself is nonsensical.
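To make that concrete, here's a minimal sketch (a hypothetical tiny XOR network, not any specific production model): the data, the architecture, and the update rule are all plainly written down, yet what training produces is just a bag of numbers with no human-readable meaning.

```python
# Everything about *training* is inspectable; the *trained weights* are not.
# Hypothetical toy example: a 2-4-1 MLP fit to XOR by plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# The "available" parts: the data going in...
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# ...the architecture (encode/decode)...
W1 = rng.normal(0.0, 1.0, (2, 4))   # input  -> hidden
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid
    return h, out

_, out0 = forward(X)
loss0 = ((out0 - y) ** 2).mean()    # loss before training

# ...and the training algorithm: every backprop step is a known, transparent rule.
lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)       # MSE + sigmoid backward
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh backward
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(0)
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

_, out1 = forward(X)
loss1 = ((out1 - y) ** 2).mean()    # loss after training

# The result: the loss went down, but the learned parameters are opaque.
print("loss:", loss0, "->", loss1)
print(np.round(W1, 2))
```

The printed weight matrix is the whole "explanation" of what the network learned; nothing in it says "this row detects XOR's diagonal," which is the black-box point, scaled down to four parameters wide.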

  • neatchee@lemmy.world · 22 days ago

      In almost exactly the same sense as our own brains’ neural networks are nonsensical :D

    • aeshna_cyanea@lemm.ee · edited · 21 days ago

        Yeah, despite their very different evolutionary paths, there are remarkable similarities between octopus, crow, and dolphin cognition