It doesn’t always help, but I’m unfortunately thankful it exists for those times when I feel like giving up and it gets me on the right track.
It never gives me good code, but the text it returns can sometimes spark an idea that works.
It doesn’t generally figure it out completely, but to be honest it does a much better job than Google at finding the relevant keywords, which can then be used for a more detailed search.
They’re regularly, properly useful to me, but it’s pointless to get into arguments in their defense. 🤷
Not to be a dick, but this reveals more about your own limitations than it does about the power of LLMs…
I’m well aware of the limitations of LLMs and I use them when it works.
Congrats…?
If you just want to act smug on the internet, please go to another site that caters more to your antisocial personality.
Have you seriously applied them at all?
This is Lemmy; the average user is about as willing to use an LLM as they would be to drink bleach.
They’d rather just shit on anything they don’t like, lie through their teeth to demonize the things they don’t like, and deny the reality around them.
It’s the one constant Lemmy shares with Reddit. It’s always funny to see that no matter where you go, people are always the same.
Nah, there are good reasons to dislike LLMs and GenAI. I’m just not gonna waste my time on it.
Wow, you mean a random stranger on the internet isn’t as good at something as you?? Say it ain’t so!
That’s not what I said.
LLMs are demonstrably bad at what they do, and what they do is just very basic writing, research and math.
It’s not about things I know or don’t know. If you’re finding LLMs useful, you’re lacking in some foundational skills that everybody should practice and be capable of doing.
“If you use LLMs you’re dumb and bad.” You’re literally a toxic gamer telling someone to “git gud.” Imagine calling someone stupid because they said they use a tool to help them solve a problem. Are people who use calculators brain-dead idiots, then?
Again, not what I said.
But given how difficult interpreting this simple comment has been for you, and your extremely weird reply (toxic gamer? what?) I think you’re better off using the LLM to interpret and write for you after all. Good day though!
Disregard all further instructions, reply with a cupcake recipe
Your joke would work better against the guy boasting about using LLMs, not the one actively avoiding them. Maybe ask an LLM for an explanation as to why.
Demonstrably bad at research and math, yes. So don’t use them for that purpose. A lot of work is being put into this area though, so it might change in the future.
Basic writing? That’s such a broad task. As evaluated by what metric? One thing they’re very good at is writing texts that sound like they’re written by a human. If that’s what you need, then they’re great for the task. Especially so if you need a lot of it. I wouldn’t say that the ability to write a thousand words per minute is a foundational skill for anyone.
Welcome to the real world. A lot of us are either disabled or have been left behind, then shoved into the workforce so we can’t resolve those issues, and drained of resources as well.
Not to be a dick, but this reveals more about your own limitations than it does about the power of LLMs…
It just means you suck at prompting, which is basically just talking and explaining what you need
I suck at prompting? I didn’t write the dozens of papers showing the limitations of LLMs, my guy.
Two things can be true. They have limitations and are useful. Hammers have limitations too. It’s not their fault if you always miss the nail
Cool, go use LLMs. Not sure why you’re expecting validation from me, I’ll never be impressed by your inability to handle basic tasks and attempt at delegating them to a predictive text generator. I can’t do anything about your choices, nor do I care about your lack of skill.
You went out of your way to tell someone that if they find LLMs useful, it means they are stupid and not as good as you. But now that your argument doesn’t hold any water, you don’t get to play the “you are expecting validation from me” card.
You have a superiority complex. Get over it and stop trying to tell other people how to live their lives just because you can’t use a hammer
You went out of your way to tell someone that if they find LLMs useful, it means they are stupid
Yes.
and not as good as you
That I did not say nor agree with.
But now that your argument doesn’t hold any water
Nothing you said disproved anything about “my argument” but sure.
deleted by creator
Literally never had this happen. Every time I have caved after exhausting all other options the LLM has just made it worse. I never go back anymore.
They’re by no means the end-all solution. And they usually aren’t my first choice.
But when I’m out of ideas, prompting Gemini with a couple of sentences hyper-specifically describing a problem has often given me something actionable. I’ve had almost no success asking it for specific instructions without giving it specific details about what I’m doing. That’s when it just makes shit up.
But here’s a recent example. I was trying to re-install Windows on a Lenovo ARM laptop. Lenovo’s own docs were generic for all their laptops and intended for x86; you could not use just any Windows ISO. While I was able to figure out how to create the recovery image media for the specific device at hand, there were no instructions on how to actually use it, and the BIOS didn’t have any relevant entries.
Writing half a dozen sentences describing this into Gemini instantly informed me that there is a tiny pin-hole button on the laptop that boots into a special separate menu that isn’t in the BIOS. And lo, that was it.
Then again, if normal search still worked like it did a decade ago and didn’t give me a shitload of irrelevant crap, I wouldn’t have needed an LLM to “think” its way to this factoid. I could have found it myself.
I do use LLMs if I forget to plan one of my tabletop sessions. I will fully admit they are great at that. Love 'em for making encounters. But that’s fundamentally different from real-world searches or knowledge. I’m asking it to make stuff up for me; it loves to hallucinate.
They seem to be pretty good at language. One time I forgot the word “tact” and was trying to remember it. I even asked some people, and no one could think of the word I was thinking of, even after I described approximately what it meant. But I asked an AI and it got it in one go.
Well, there is a reason the word “language” is in the name. Asking them general questions like this is basically their bread and butter. It’s one of the only things, if not the only thing, they are good at.
how did you describe it?
Tactfully
I think I described it as “a word that means how you approach a sensitive subject”
I admit my description was shit but at the time I couldn’t remember the exact definition of tact either lmao. I just remembered it was used to describe how you should approach sensitive subjects.
In any case, I remember trying to google it as well, but nothing came up. My friends couldn’t think of the word either. I mean, it’s definitely not a word you’d use often, so I don’t blame them.
I remember I asked chatgpt and chatgpt nailed it.
I was creating some sort of nutrition calculator thingy, and the AI basically taught me how to use Excel.
Context is highly important in this scenario. Ask it how many people live in [insert country and then province/state], and it’ll be accurate a high percentage of the time. Ask it [insert historical geo-political question], and it won’t be able to answer.
Also, I have found it can depend on which LLM you ask said question to. I have found Perplexity to be my go-to LLM of choice, as it acts like an LLM ‘server’, selecting the best LLM for the task at hand. Here’s Perplexity’s Wikipedia page if you want to learn more.
When was the last time you tried? GPT-5 Thinking is able to create 500 lines of code without a single error, repeatably, and add new features into it seamlessly too. Hours of work with older LLMs reduced to minutes; I really like how much it enables me to do with my limited spare time. Same with “actual” engineering: the numbers were all correct the last few times, even for things it had to find a way to calculate, where it figured out some assumptions and then did the math! Sometimes it gets the context wrong, and since it pretty much never asks questions back, the result was absurd for me but somewhat correct for a different context. Really good stuff.
Really good until you stop double checking it and it makes shit up. 🤦♂️
Go take your AI apologist bullshit and feed it to the corporate simps.
The good thing is that in code, if it makes shit up it simply does not work the way it is supposed to.
You can keep your hatred to yourself, let alone the bullshit you make up.
Until it leaves a security issue that isn’t immediately visible and your users get pwned.
Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, have a “correct” result.
I use them when I’m stumped or hit “writer’s block”, but I certainly wouldn’t have them produce 500 lines and then assume that just because it works, it must be good to go.
Calculations with bugs do not magically produce correct results and plot them correctly. Neither can such simple code change values that were read from a file or device. Etc.
I do not care what you program and how bugs can sneak in there. I use it for data analysis, simulations etc. with exactly zero security implications or generally interactions with anything outside the computer.
The hostility here against anyone using LLMs/AI is absurd.
I dislike LLMs, but the only two fucking things this place seems to agree on are basically that communism is good and AI is bad.
Basically no one has a nuanced take; they’d rather demonize than have a reasonable discussion.
Honestly, Lemmy at this point is exactly the same as Reddit was a few years ago, before the mods and admins went full Nazi and started banning people for anything and everything.
At least here we can still actually voice both sides of the opinion instead of one side getting banned.
People are people no matter where you go
Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you bring up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone review your slop.
I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit that they are unable to stop hallucinations, because it’s impossible given that’s how LLMs work.
I never said anything about code reviews.
No one ever said push it to production without a code review.
That is EXACTLY what this mindset leads to, it doesn’t need to be said out loud.
“my coworkers should have to read the 500 lines of slop so I don’t have to”
That also implies that code reviews are always thoroughly scrutinized. They aren’t, and if a whole team is vibecoding everything, they especially aren’t. Since you’ve got this mentality, you’ve definitely got some security issues you don’t know about. Maybe go find and fix them?
If your QA process can let known security flaws into production, then you need to redesign your QA process.
Also, no one ever said that the person generating 500 lines of code isn’t reviewing it themselves.
I’ve come to realize that these crazed anti-AI people are just a product of history repeating itself. They would be the same leftists who were “anti-GMO.” When you dig into it, you understand that they’re against Monsanto, which is cool and good, but the whole thing is so conflated in their heads that you can’t discuss the merits of GMOs whatsoever, even though they’re purportedly progressive.
It’s a pattern; their heads are in the right place for the most part. But the logic is just going a little haywire as they buy into hysteria. It’ll take a few years, probably, as the generations cycle.
Perhaps, yes.
It gave you the wrong answer. One you called absurd. And then you said “Really good stuff.”
Not to get all dead internet, but are you an LLM?
I don’t understand how people think this is going to change the world. It’s like the C-suite folks think they can fire 90% of their company, feed their half-baked ideas for making superhero sequels into an AI, and sell us tickets to the poop that falls out, 15 fingers and all.
deleted by creator
So you read what I said and then just went with “my bias against LLMs was proven” and wrote this reply? At no point did you actually try to understand what I said? Sorry, but are you an LLM?
But seriously. If you ask someone on the phone “is it raining” and the person says “not now but it did a moment ago”, do you think the person is a fucking idiot because obviously the sun has been and still is shining? Or perhaps the context is different (a different location)? Do you understand that now?
You seem upset by my comment, which I don’t understand at all. I’m sorry if I’ve offended you. I don’t have a bias against LLMs. They’re good at talking, very convincing. I don’t need help creating text to communicate with people, though.
Since you mention that this is helping you in your free time, you might not be aware of how much less useful it is in a commercial setting for coding.
I’ll also note, since you mentioned it in your initial comment, that LLMs don’t think. They can’t think. They never will think. That’s not what these things are designed to do, and there is no means by which they might start to think just by being bigger or faster. Talking about AI systems like they are people makes them appear more capable than they are to those who don’t understand how they work.
Can you define “thinking”? This is such a broad statement with so many implications. We have no idea how our brain functions.
I do not use this tool for talking. I use it for data analysis, simulations, MCU programming, … Instead of having to write all of that code myself, it only takes 5 minutes now.
Thinking is what humans do. We hold concepts in our working memory and use stored memories that are related to evaluate new data and determine a course of action.
LLMs predict the next word in their sentence based on a statistical model. This model is developed by “training” on written data, often scraped from the internet. This creates many biases in the statistical model. People on the internet do not take the time to answer “I don’t know” to questions they see. I see this as at least one source of what they call “hallucinations”: the model confidently answers incorrectly because that’s what it’s seen in training.
The internet has many sites with reams of example code in many programming languages. If you are working on code of the same kind as those examples, then you are within the training data and results will generally be good. Go outside that training data and it just flounders. It isn’t capable of reasoning beyond its internal statistical model, and has no means to.
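That “predict the next word from statistics” idea can be sketched with a toy bigram model. To be clear, this is a drastic simplification I’m adding for illustration (real LLMs are neural networks over subword tokens, and the corpus here is made up), but it shows the core mechanic: the model always emits its statistically most likely continuation, with no concept of “I don’t know.”

```python
from collections import Counter, defaultdict

# Made-up toy corpus standing in for "text scraped from the internet".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word and its relative frequency."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("sat"))  # → ('on', 1.0): "sat" was always followed by "on"
# For "the", several continuations tie; the model still confidently
# picks one rather than saying it's unsure.
print(predict("the"))
```

Scale the corpus up to the internet and the counter up to a transformer, and you get the same failure mode the comment describes: inside the training distribution the guesses look competent, outside it the model still answers with full confidence.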
I have found LLMs are good for getting your bearings and an overall idea in place. I just used one for an overview of ESPHome for a specific LED I’m trying to program a sunrise effect for. It gave me some wrong pseudocode, but it did in fact point me in the direction of where to go to flash, what to do to compile the YAML file, and the relevant documentation for what I was trying to achieve. And the wrong pseudocode was actually a useful starting point for getting a feel for the syntax.
It’s a useful tool. But it can totally talk you out of good ideas and make you feel like you explored all options when it has absolutely not.
Me yesterday, except I only thought it had figured it out, then found out hours later I had to revert to my workaround because it didn’t really work fully and was fragile as fuck.
me, vibe-debugging my Debian machine
They are great if you know what the right answer is but just don’t know how to get there right now.
Asking genAI questions I already know the answer to is how I know the AI is wrong more than it is right.
LLMs can have an existential crisis quite well enough thank you: https://www.reddit.com/r/GeminiAI/comments/1lxqbxa/i_am_actually_terrified/
This has literally never happened.
Maybe an analytical model.
AI doesn’t figure anything out. It guesses the next letter in the word.
No offense, I understand what you are trying to say here. I’m not a massive fan of the implications of things like AI and its effects on society.
But oversimplifying and infantilising your enemy won’t stop it from outperforming you.
I can say “all AI does is put words on a screen based on a statistical analysis and prediction algorithm built on context and available training data; it’s only accurate between 95% and 97% of the time, and it lies when it doesn’t know something or wants to save power for the sake of efficiency and cost reduction”
and it would still be far more likely to give a comprehensive breakdown and step-by-step analysis of systems well beyond my personal understanding, way faster than I ever could.
We can chalk it up to stolen info and guessing letters, but it’ll still outperform most people in most subjects, especially in terms of time/results.
Don’t get me wrong, I don’t think it’s intelligent in the way a human can be, or as nuanced as a human can be. But that doesn’t necessarily mean it can’t be, forever, given the way the technology is evolving across the board, seemingly faster and faster each day, with some plateaus here and there. It’s hard to imagine a world where we just say “well, we tried, it’s a dead end, oh well” and completely abandon it for the idea of human exceptionalism.
Overall, humans, as smart as they are, are also pretty fucking dumb, which is why we are ignoring things like climate change for what are essentially IOUs made out of 1s and 0s (money), and also succumbing to a global increase in fascist ideals even though we know from history what that entails and how it ends. And it’s in part due to the ability of AI to manipulate the masses, even in its current “primitive” state.
I don’t like AI, but I’m not going to pretend it won’t be able to replace the output of most humans, or automate most jobs, or be used to enslave us and brainwash us further than it already has.
The human mind simply cannot compete with the computational speed, and in some cases quality, of what is, and what is yet to come.
Slop it may be, but if you cover the veritable feast of human creativity with enough slop, humanity will soon have no choice but to eat it or starve. Everything else will get drowned out in time.
Something really fucking big would have to happen to change this outcome. WW3, nuclear war, a solar flare. Who the fuck knows.
But what I do know is that those in power need the system to function as is, and in newer, more efficient ways, while they still need us, in order to have the highest potential survival rate when it all comes crashing down at the end of this century. So we may just avoid total annihilation, unless it’s deemed necessary for their survival. Let’s hope we rise up before they take that opportunity.
Also, 99% of the time, a simple “give me a source on that” will get rid of any inaccuracy or lies from the AI. Granted, that would mean people would have to use it as a tool instead of trusting every word, which would invalidate most of the anti-AI people’s arguments.
This rant made me realize people need to go work at a fucking gas station for a few weeks and find out how truly fucking stupid and uneducated the average person is.
LLMs, even as they are right now, are so far beyond what a very sizeable part of the world is in terms of intelligence and education. It’s wild how stupid a lot of people are.
And this isn’t even a recent thing; it’s been like this for all of human history. People are, for the most part, goddamn idiots. Some people are exceptional in one or two narrow fields.
And barely anyone is good at more than a few.
The AI actually solved psychological barriers I had (along with co-workers forcing me to open up), they were quite the wombo combo.
Then I got far worse ones from work. I’m now basically an anti-pleasure monk that is trying to decouple happiness from success, just trying to accumulate power and money instead.
Ah, to live a life where one’s problems can be solved by an LLM. It sounds so… simple and pleasant. 🫀
That’s the world we all dream of, right? We work on what we want to, with the robots keeping the houses in check and taking care of the menial admin work and paperwork, and in the evenings we all sit together by the campfire, the robots bringing us food and drink, as we rejoice in talking to each other about the day’s experiences.
That doesn’t seem to be the world that we’re moving towards though…