Yeah, the anti-AI crowd on Lemmy tends to misplace its anger on all AI, when a lot of it should be directed at the corporate BS shoving AI everywhere and anywhere to turn a profit and make the line go up.
As always, the technology isn't the enemy; the corporations controlling it are. And honestly, the freely available local LLMs aren't too far behind the big ones.
I am very strongly anti-AI, though I think it has some legitimate uses that have probably saved and improved a lot of lives (like AlphaFold). My main problem (and most people's main problem) with it is that it was trained on stolen data and art.
Since I don't know much about non-corporate AI, I'm interested in how an open-source LLM trained just off your bookmarks would work. I assumed it would still need to be trained on stolen data to form sentences as well as the more popular models do, but I may be wrong. Maybe the volume of data needed for a system like that is small enough that it could be trained only on data willingly donated to it? I doubt it, though.
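For what it's worth, a model that "knows your bookmarks" usually isn't trained on them at all: a pretrained base model just retrieves the relevant bookmark and reads it as context at answer time. Here's a toy sketch of only the retrieval half, in plain Python with made-up bookmark strings (the real thing would use embeddings and hand the top hit to a local LLM):

```python
from collections import Counter
import math

# Toy retrieval over "bookmarks": rank saved snippets by cosine similarity
# of word-count vectors against a query. A real local setup would feed the
# top hits to a pretrained LLM as context instead of training on them.

bookmarks = [  # hypothetical saved pages
    "guide to growing tomatoes in containers",
    "introduction to fluid dynamics simulations",
    "local LLM inference on consumer GPUs",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(query: str) -> str:
    qv = vectorize(query)
    return max(bookmarks, key=lambda b: cosine(qv, vectorize(b)))

print(best_match("running an LLM on my GPU"))  # → the local-inference bookmark
```

So no stolen data is needed for the bookmark part; the thorny training-data question only applies to the base model underneath.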
Fluid dynamics are quite effective at that. And there seem to be only a few main talking points they bring up (environmental impact, energy use, training data, art, job loss, personal dislike, wealth concentration (probably better just to call it economic, but it's pretty much just this), ill-fitting uses, or not understanding how the models work at a fundamental level); unless people think about something a lot, they generally come up with similar arguments.
That's exactly what I was thinking. And this is actually the first time I've heard of a use of LLMs I might actually be interested in.
Well, in some ways they are. It also depends a lot on the hardware you have, of course. A normal 16 GB GPU won't fit huge LLMs.
The smaller ones are getting impressively good at some things, but a lot of them still struggle when using non-English languages, for example.
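The "won't fit" part is easy to sanity-check with back-of-the-envelope math: weights alone need roughly parameter count times bytes per parameter (the KV cache and activations add more on top). A rough sketch, assuming fp16 at 2 bytes/param and 4-bit quantization at about 0.5 bytes/param:

```python
# Rough VRAM estimate for loading model weights only; KV cache and
# activations need extra headroom on top of this in practice.

def vram_gb(n_params_billions: float, bytes_per_param: float) -> float:
    """Estimate GB needed just for the weights."""
    return n_params_billions * 1e9 * bytes_per_param / 1024**3

print(round(vram_gb(7, 2), 1))    # 7B model in fp16: ~13 GB, fits a 16 GB card
print(round(vram_gb(70, 2), 1))   # 70B in fp16: ~130 GB, no single consumer GPU
print(round(vram_gb(70, 0.5), 1)) # 70B at 4-bit: ~33 GB, still over 16 GB
```

Which is why the usable local models on a 16 GB card are mostly quantized ones in the 7B to 14B range.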
Wow, that's a big net. Surely your comment applies to all of your catch.
Right?
Yes, do tell me more about the tendencies of the crowd as a whole.
LLMs have their uses, as long as you aren't relying on them too much or trusting them to give accurate information.