ByteOnBikes@slrpnk.net to People Twitter@sh.itjust.works · English · 23 days ago — "Grok got no chill" (slrpnk.net, image, 82 comments)
Whoops, sorry mods. Reposting with links: https://nitter.space/thelillygaddis/status/1904852790460965206#m https://archive.is/vuiMj https://x.com/thelillygaddis/status/1904852790460965206#m
peoplebeproblems@midwest.social · 22 days ago: Any AI model is technically a black box. There isn't a "human readable" interpretation of the function. The data going in, the training algorithm, the encode/decode steps — that's all available. But the model itself is nonsensical.
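The point above can be demonstrated in a few lines: a minimal sketch (my own toy example, not from the thread) that trains a tiny 2-2-1 network on XOR with plain NumPy. After training, every parameter is fully available for inspection, yet the raw numbers carry no human-readable meaning.

```python
import numpy as np

# Toy 2-2-1 network trained on XOR via manual backprop (MSE loss).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

_, out = forward(X)
loss_before = np.mean((out - y) ** 2)

lr = 2.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)       # dL/d(pre-activation), output
    d_h = (d_out @ W2.T) * h * (1 - h)        # backprop into hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = np.mean((out - y) ** 2)

print("loss before/after:", loss_before, loss_after)
print("W1 =", W1)  # every weight is inspectable...
print("W2 =", W2)  # ...but no individual number has a readable interpretation
```

The training data, the algorithm, and the weights are all right there; the "black box" is that the learned numbers only make sense as a whole function, not as readable parts.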
neatchee@lemmy.world · 22 days ago: In almost exactly the same sense as our own brains' neural networks are nonsensical :D
aeshna_cyanea@lemm.ee · edited, 21 days ago: Yeah, despite the very different evolutionary paths there are remarkable similarities between, idk, octopus/crow/dolphin cognition.