Been using Perplexity AI quite a bit lately for random queries, like travel suggestions.

So I started wondering what random things people are using it for to help with daily tasks. Do you use it more than Google/etc?

Also, is anyone paying for the Pro versions? I’m wondering whether Perplexity AI Pro is worth paying for.

    • dingus@lemmy.world · ↑13 · 7 months ago

      Yeah I’ve used it occasionally to goof around with and try to get silly answers. And I’ve occasionally used it when stuck on an idea to try to get something useful out of it…the latter wasn’t too successful.

      Quite frankly I don’t at all understand how anyone could possibly be using this stuff daily. The average consumer doesn’t have a need imo.

  • Stern@lemmy.world · ↑34 ↓2 · 7 months ago

    Basically nothing. I’m good at using search engines and the porn feels boringly samey from it so the only use case left for me is making meme images, which is rare at best.

  • rufus@discuss.tchncs.de · ↑22 ↓1 · 7 months ago (edited)

    I don’t use it for daily tasks. I’ve been tinkering around with local LLMs for recreation. Roleplay, having it be my dungeon master in a text adventure. Telling it to be my “waifu”. Or generating amateur short stories. At some point I’d like to practice my foreign language skills with it.

    I haven’t had good success with tasks that rely on “correctness” or factual information. However, sometimes I have it draft an email for me or come up with an argument for a text that I’m writing. That happens every other week, not daily. And I generously edit and restructure it afterwards, or just incorporate some of the paragraphs into my final result.

    • dingus@lemmy.world · ↑7 · 7 months ago

      D&D-related things actually seem like a decent use case. For most other things, I don’t understand how people find it useful enough to work it into their daily tasks.

      • rufus@discuss.tchncs.de · ↑4 · 7 months ago (edited)

        Agree. I’ve tried some of the use-cases that other people mentioned here. Like summarization, “online” search, tech troubleshooting, recipes, … And all I’ve had were sub-par results and things that needed extensive fact-checking and reworking. So I can’t really relate to those experiences. I wouldn’t use AI as of now for tasks like that.

        And this is how I ended up with fiction and roleplay. It seems to be better suited for that. And somehow AI can do small coding tasks, like writing boilerplate code and helping with some of the more tedious tasks. At some point I need to feed another of my real-life problems to the current version of ChatGPT, but I don’t think it’ll do it for me. And it can come up with nice ideas for stories. Unguided storywriting gets dull in my experience. I guess the roleplaying is nice, though.

        Edit: And I forgot about translation. That also works great with AI.

  • tiredofsametab@kbin.run · ↑21 ↓2 · 7 months ago

    Nothing. I’m a software developer, but don’t use any AI tools with any regularity. I think I only asked ChatGPT or similar something once about programming because the documentation was awful, but I do remember that as having been helpful.

    The only thing that might be close, though not directly, is translation software (kanji be hard).

    • deweydecibel@lemmy.world · ↑4 ↓3 · 7 months ago

      The only thing that might be close, though not directly, is translation software (kanji be hard).

      Well that’s the dirty little open secret, isn’t it? These “AI” programs are just beefier versions of the same kinds of translation, predictive text, “smart” image editing, and chatbot software we’ve had for a while. Significantly more sophisticated and more powerful, but not exactly new. That’s why “AI” is suddenly appearing everywhere: in many cases, a less sophisticated predecessor of it was already there, they just didn’t use the marketing language OpenAI popularized.

      I legit had a spelling and grammar checking add-on that rebranded itself to “AI”, and it did absolutely nothing different than what it already did.

      And the whole point is that absolutely none of this is “AI” in any meaningful way. It’s like when that company tried to brand their skateboard/Segway things from a few years ago as “hoverboards”. You didn’t achieve the thing, you’re just reducing what the term means so it applies to your new thing.

  • doxxx@lemmy.ca · ↑17 ↓1 · 7 months ago

    I’m a professional software dev and I use GitHub Copilot.

    It’s most useful for repetitive or boilerplate code where it has an existing pattern it can copy. It basically saves me some typing and little typo errors that can creep in when writing that type of code by hand.

    It’s less useful for generating novel code. Occasionally it can help with known algorithms or obvious code constructs that can be inferred from the context. Prompting it with code comments can help although it still has a tendency to hallucinate about APIs that don’t exist.
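
    To make the “prompting with code comments” bit concrete: you write the intent as a comment and let the assistant propose the body. A hypothetical completion might look like the sketch below (the function name and behaviour are made up for illustration, and you’d still review what it writes):

```python
# "parse 'key=value' pairs from a whitespace-separated string into a dict"
def parse_pairs(s: str) -> dict[str, str]:
    # Keep only tokens containing '=', splitting each token once
    # so values may themselves contain '='.
    return dict(
        token.split("=", 1)
        for token in s.split()
        if "=" in token
    )
```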

    I think it will improve with time. Both the models themselves and the tools integrating the models with IDEs etc.

    • cassie 🐺@lemmy.blahaj.zone · ↑3 · 7 months ago

      I used Copilot for a while (in a Rust codebase fwiw) and it was… both useful and not for me? Its best suggestions came with some of the short-but-tedious completions like path().unwrap().to_str().into() etc. Those in and of themselves could be hit-or-miss, but useful often enough that I might as well take the suggestion and then see if it compiles.

      Anything longer than that was OK sometimes, but often it’d be suggesting code for an older version of a particular API, or just trying to write a little algorithm for something I didn’t want to do in the first place.

      It was still correct often enough when filling out particular structures to be technically useful, but leaning on it more, I noticed that my code was starting to bloat with things I really should have pulled off into another function instead of autocompleting a particular structure every time. And that’s on me, but I stopped using Copilot because it just got too easy to write repetitive code, but with like a 25% chance of needing to correct something in the suggestion, which is an interrupt my ADHD ass doesn’t need.

      So whether it’s helpful for you is probably down to how you work/think/write code. I’m interested to see how it improves, but right now it’s too much of a nuisance for me to justify.

  • Cloudless ☼@lemmy.cafe · ↑18 ↓2 · 7 months ago

    • Proofread/rewrite emails and messages
    • Recipes
    • Find specs for computers, gadgets, cars etc.
    • Compare products
    • Troubleshoot software issues
    • Find meaning of idioms
    • Video game guide/walkthrough/reviews
    • Summarise articles
    • Find out if a website is legit (and ownership of the sites)

    I don’t see any need for Pro versions. ChatGPT 4 is already available for free via Bing. I simply use multiple AI tools and compare the results. (Copilot / Gemini / Claude / Perplexity)

  • MicrowavedTea@infosec.pub · ↑10 ↓1 · 7 months ago

    I’ve only used ChatGPT, and it’s mostly good for language-related tasks. I use it for finding tip-of-my-tongue words or completing/paraphrasing sentences. Basically fancy autocorrect. It’s also good at debugging stuff sometimes, when the language itself doesn’t give useful errors (looking at you, SQL). Other than that, any time I’ve asked for factual information it’s been wrong in some way or simply not helpful.

  • weariedfae@lemmy.world · ↑9 ↓1 · 7 months ago

    I don’t word good and ChatGPT bro helps me use my nouns.

    That’s only kind of a joke, I have anomic aphasia and use ChatGPT to help me find the words when I lose them. I used to use Google but it doesn’t really work anymore.

    • aStonedSanta@lemm.ee · ↑3 · 7 months ago (edited)

      Yeah. Wtf did Google do to itself lol. I’m in the same boat as far as usage. No diagnosis, but severe ADHD, so I assume it’s dyslexia on my end lol

  • Lvxferre@mander.xyz · ↑9 ↓1 · 7 months ago

    I use LLM bots mostly

    • as websearch - e.g. “list sites containing growing conditions for pepper plants”;
    • for practical ideas - e.g. “suggest me a savoury spice mix containing ginger”

    I never use them for the info itself. It’s foolish to trust a system that behaves like an especially irrational assumer. (It makes shit up, it has the verbal intelligence of a potato, and it fails to follow simple logic.)

    I’m not using any Pro version.

    For reference: nowadays I’m using ChatGPT 3.5 and Claude 1.2, both through DuckDuckGo. I used Gemini a fair bit, but ditched it - not just for privacy, but because Gemini’s “tone” rubs me the wrong way.

    • Cloudless ☼@lemmy.cafe · ↑4 · 7 months ago

      Yeah Gemini’s tone is weird. It is constantly reminding you that Gemini does not have an opinion on anything. It actively tries to avoid giving definitive answers whenever possible.

      • Lvxferre@mander.xyz · ↑3 · 7 months ago

        That’s related; what rubs me the wrong way most is how patronising it sounds - going out of its way to lecture you with unsolicited advice, assuming your intentions behind the prompt (always the worst), and wasting your time with “social grease”. And this is clearly not a consequence of the underlying tech, as neither Claude nor ChatGPT does it as badly; it’s something that Google tailored into Gemini.

  • guyrocket@kbin.social · ↑7 · 7 months ago

    I’m going to continue to monitor this thread, but so far I’m surprised at how little use most people are getting from AI tools. And the highest-upvoted comment is from someone who does NOT use AI tools in their daily routine.

    So much hype around AI recently, and I’m not seeing/hearing a lot of REAL, PRACTICAL use cases for it.

    Interesting.

  • WindyRebel@lemmy.world · ↑6 · 7 months ago

    As an SEO - hell no. Those who did use it got penalized by the latest algorithm update from Google.

    As a DM? Yes! It helped me write a nice poem for a bard that will hopefully give my players some context to what they will be encountering as they move further in my campaign.

  • Destide@feddit.uk · ↑7 ↓1 · 7 months ago

    It’s replaced forums like Stack for me. Both could give me incorrect information, but one doesn’t care how dumb my questions are.

    My job pays for premium, and it’s been useful for clearing up certain issues I’ve had with tutorials for the language I’m currently learning. In an IDE, Copilot can get a bit in the way, and its suggestions aren’t as good as they once were, but I’ve got the settings down to where it’s a fancy spell check and synergises well with vim motions to bang out some lines.

    It’s only replaced the basic interactions I would have had without having to wait for responses or having a thread ignored.

  • Audalin@lemmy.world · ↑6 ↓1 · 7 months ago

    I’m using local models. Why pay somebody else or hand them my data?

    • Sometimes you need to search for something and it’s impossible because of SEO, however you word it. An LLM won’t necessarily give you a useful answer, but it’ll at least take your query at face value, and usually tell you some context around your question that’ll make web search easier, should you decide to look further.
    • Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
    • Using an LLM in scripts adds a semantic layer to whatever you’re trying to automate: you can process a large number of small files in a way that’s hard to script, as it depends on what’s inside.
    • Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that can do something you tell it without touching your computer. Sounds like plenty of work to make it all work together, but I may try that later.
    • Some use RAG to query large amounts of information. I think it’s a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
    • Some use LLMs as editor-integrated coding assistants. Never tried anything like that yet (I do ask coding questions sometimes though), but I’m going to at some point. The 8B version of LLaMA 3 should be good and quick enough.