• xthexder@l.sw0.com · +28/−1 · 24 days ago

      Me too. It’s exactly the kind of clean metaphor that an AI-generated image would never be able to understand.

  • Modern_medicine_isnt@lemmy.world · +12/−41 · 24 days ago

    So I am somewhat pro-AI. But hear me out. I sometimes refer to myself as an automation engineer: I spend a lot of my time automating the setup and use of various software tools, and for those who know the term, Infrastructure as Code is part of my job too. So many tools have shitty UIs and even shittier APIs. The rise of AI is going to add pressure to build better APIs, because the API is what the AI uses. So even if AI falls flat on its face in a few years, any improvement in APIs is a big win for me. And since the automation I write is for my coworkers, not external customers, anyone in tech benefits from this.

    Now for me personally, I work in a lot of different languages and DSLs. I rarely spend enough time in any one of them to really memorize the syntax; I pretty much can’t write a working program without some sort of reference. So I can tell AI exactly what I want it to do, and it can code and test until it runs. Then I can use that as my syntax reference and make it do what it is supposed to do. That ends up being much faster than googling various syntaxes to see where I need a semicolon vs. a comma, or where I need [] instead of {}. So it helps me.
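
    (As an aside, the [] vs {} trap bites even within a single language; a minimal Python illustration, my example rather than the commenter’s:)

```python
# In Python, {} is an empty dict, NOT an empty set -- exactly the
# kind of detail that's easy to misremember when hopping between
# languages and DSLs.
empty_dict = {}
empty_set = set()  # an empty set needs the constructor

print(type(empty_dict).__name__)  # dict
print(type(empty_set).__name__)   # set

# JSON-style "object" syntax is a dict literal in Python;
# square brackets make a list instead.
obj = {"key": "value"}
arr = ["value"]
print(isinstance(obj, dict), isinstance(arr, list))  # True True
```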

    And I do love using AI to file my Jira tickets. It works great for those of us whose work is interrupt-driven; we often file the ticket after we’ve solved the problem.

    • mcv@lemmy.zip · +21 · 24 days ago

      Or they’ll make APIs shittier because they don’t want AI using them.

      However, Copilot has made it a lot easier to navigate through Azure’s incomprehensible menu structure.

      • Modern_medicine_isnt@lemmy.world · +2/−1 · 24 days ago

        Well, Grafana is an example. They want their own AI agent that you can pay for, so they still need the APIs to be good. But they don’t make it easy to get your AI its own API token: each user would essentially have to have two accounts, which they probably charge for too. It’s not impossible to work around, but it’s a barrier. I would expect more of that kind of thing. Any tool that doesn’t have a way for AI to work with it is going to be selected against for a while, so there is pressure for them to be accessible.
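
        (For what it’s worth, recent Grafana versions do expose service-account endpoints that can mint a separate token for an agent. A rough Python sketch; the URL, credential, and names are placeholders, and the exact endpoints may vary by Grafana version:)

```python
import json
import urllib.request

GRAFANA_URL = "https://grafana.example.com"  # placeholder
ADMIN_TOKEN = "glsa_placeholder"             # placeholder admin credential

def _post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the Grafana HTTP API (not called below)."""
    req = urllib.request.Request(
        GRAFANA_URL + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ADMIN_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def service_account_payload(name: str, role: str = "Viewer") -> dict:
    """Body for POST /api/serviceaccounts."""
    return {"name": name, "role": role}

def token_payload(sa_name: str) -> dict:
    """Body for POST /api/serviceaccounts/{id}/tokens."""
    return {"name": f"{sa_name}-token"}

def create_agent_token(name: str) -> str:
    """Create a service account for the AI agent, then mint it a token."""
    sa = _post("/api/serviceaccounts", service_account_payload(name))
    tok = _post(f"/api/serviceaccounts/{sa['id']}/tokens", token_payload(name))
    return tok["key"]
```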

    • FlashMobOfOne@lemmy.world · +10/−1 · 24 days ago

      The closest facsimile I have in my work is occasionally running an Excel formula I’ve written through Copilot to find a formatting error, or to help fix an Access query. But I fundamentally understand what I’m doing, can validate that the produced result is correct, and can fix it if I have to somewhere down the line.

      It’s good you’ve found some simple ways to use it, but in the vast majority of work I do, it would take longer if I used AI because everything produced using an LLM has to be human-validated regardless, so I might as well not skip the important step of learning and understanding it.

      I never use it to ideate and never use it for anything that isn’t eminently simple, like creating a sheet with x number of columns and rows or something like that. I hate the idea of the environmental impact and that helps me avoid it.

      • quips@slrpnk.net · +6 · 24 days ago

        And outside of coding, modest productivity improvements are the best we’ve done in the four years we’ve had these models.

        I just can’t see it not being a bubble.

        • FlashMobOfOne@lemmy.world · +6 · 24 days ago

          Yeah, it’s nothing particularly special IMHO. The best feature I’ve found is that, for Microsoft products in particular, it can tell me about capabilities I didn’t previously know existed when I present it with a problem.

          Search engines used to do that before they got enshittified.

    • Iusedtobeanalien@lemmy.world · +2 · 22 days ago

      It will end up as an assistant rather than take over the world, at least while it is affordable.

      I truly believe an AI winter is coming.

      • Modern_medicine_isnt@lemmy.world · +1 · 22 days ago

        There are absolutely economic factors that could have a serious impact on it, and they are impossible to predict; if you really could predict them, you would be rich. But I don’t see it as likely to be an assistant. It’s actually pretty terrible at that. My thinking is that it is a tool like any other: it takes significant investment for a person to get proficient with it. Down the line, hopefully there will be more streamlined ways to distribute learnings and such, making it more accessible to those who haven’t invested the time. There is lots of work happening in that area now, but much more needs doing.

    • bigredgiraffe@lemmy.world · +2 · 23 days ago

      I am in the same boat, long-time infrastructure automation engineer as well. Sometimes it’s faster to explain how Terraform or whatever needs to act and then fix the issues, rather than sifting through the docs for every provider.

      I do a similar thing with code. I have to read a lot of other people’s code in languages I don’t know to help troubleshoot things, and while I can usually follow the logic, it is such a time saver to have AI read the library and language docs for me, or at least find the part of the docs I need faster than I could by searching myself.

      Overall, I agree with the sentiment on AI most of the time, and its criticisms are definitely valid, but I think too many people try to use AI to do their work for them instead of using it more like a rubber duck you can program with natural language.

      • Modern_medicine_isnt@lemmy.world · +1 · 22 days ago

        My new (to me) “revelation” is that AI needs a ton of structure. It’s like a child who, when presented with too many options, stops thinking and randomly picks one just to be done with it. From what I can tell, the people who get the most use out of AI keep it tightly controlled: rules, hooks, and various other tricks to essentially herd the AI into doing what it should. Kinda like herding cats.
        Right now the tools for setting up that structure are immature, and best practices are hard to define while the base AI is still changing a lot. People who are just trying to use AI casually have heard the hype and expect it to work like a person. When it doesn’t, they just say it sucks. And as a person, it does suck. It’s a tool, and a complex one at that. It requires significant investment to get the most out of it.

    • jj4211@lemmy.world · +2 · 22 days ago

      The problem with that theory is that people believe in LLMs strongly enough that whatever market pressure there is to keep interfaces vaguely similar evaporates. SQL certainly has dialects, but at least the basics are vaguely similar, as an example.

      We were working with a vendor that is oddly different from every other vendor in the space, and we applied pressure on them to implement more typical interfaces. Their answer was “just have an LLM translate for you and use our different and, frankly, much weirder interface.” When we did cave and use it, and demonstrated that even the biggest LLMs failed, they said at least they give you the idea. Zero interest in a consistent API, with LLMs as the excuse.

      On the “write your code for you” front, it has to be kept on a short leash and can be a nightmare if not overseen, though it can accelerate some chore work. I just spent a lot of time last week trying to fix up someone’s vibe-coded migration: it looked right and it passed the test cases, but it was actually a gigantic failure. Another vibe-coded thing took 3 minutes to run when it was supposed to be an interactive process. The vibe coder said that’s just how long it takes, that if it could be faster the AI would have done it, and that none of the AI’s suggestions were viable in the use case. So I spent a day reworking their code to do exactly the same thing, but in under a second.

      For the Jira ticket scenario, I had already written a command-line utility to take care of that for me. Same ease of use compared to the Jira GUI and my work’s torturous workflows, but with a very predictable result.
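
      (A utility like that can be small; a rough Python sketch against Jira’s REST create-issue endpoint, with the instance URL, credential, and project key as placeholders, and details varying between Jira Cloud and Server:)

```python
import json
import urllib.request

JIRA_URL = "https://jira.example.com"  # placeholder
AUTH_HEADER = "Bearer my-api-token"    # placeholder credential

def issue_payload(project: str, summary: str, description: str,
                  issuetype: str = "Task") -> dict:
    """Body for POST /rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": project},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issuetype},
        }
    }

def file_ticket(project: str, summary: str, description: str) -> str:
    """Create the issue and return its key (e.g. 'OPS-123')."""
    req = urllib.request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(issue_payload(project, summary, description)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": AUTH_HEADER},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["key"]
```

      Filing the after-the-fact ticket then becomes a one-liner in the shell, with a fully predictable result.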

      So: LLM codegen a few lines at a time, with competent human oversight, is OK and useful, depending on context. But it has the same downside as AI video/image/text creative content: people without something substantial to contribute flood the field with low-quality slop, bugs, and slow performance, and it’s the most painful stuff to fix, since not even the person who generated it understood it.

      • Modern_medicine_isnt@lemmy.world · +1 · 22 days ago

        There certainly is a group of people who believe in AI strongly. One part of them is just listening to the hype and jumping on the bandwagon. Another part, however, is investing real time to understand it. They work to give it structure and guardrails so that it does what they want, and they help others do the same. But it currently takes a lot of time investment to get good at using it, and most people aren’t expecting that.
        As that second group grows, and the methods for sharing the structure they have set up for AI mature, more people will be able to use it without all the upfront time investment. That is when the pressure on tool vendors to improve their API interfaces will really heat up. “AI compliant” or whatever buzzword shows up will be a near requirement for a tool to get investor dollars. MCPs were an attempt to put a layer between the APIs and the AI, but if the underlying API sucks, an MCP can’t do much. I am not sure what will come next, but something about the APIs themselves is bound to spring up. Maybe even several standards. That’s OK; there can be several, because AI can handle the context switching better than humans can.

    • Washedupcynic@lemmy.ca · +2 · 21 days ago

      Fuck me if I want to have sessions with my psychiatrist that aren’t recorded and transcribed by AI. Tech companies lie all the time about what they promise, like encrypting their data, and then just pay the fines after the fact, because scraping our data to train models makes more profit than the fine costs. Yes, my doctor rolled out a telehealth system that prompted me to sign documents agreeing to the use of AI. A procedure code and a diagnosis code are all that’s required for medical billing when prior approval for services isn’t required; there’s no need for a word-for-word transcription of our entire fucking session for my doctor to bill and get paid, especially when there is plenty of transcription software he could purchase to run in-house, WITHOUT AI.

      AI is being forced down our throats in so many industries that have NOTHING to do with writing code, for the purposes of reducing headcount because the line must go up.

      • Modern_medicine_isnt@lemmy.world · +3 · 21 days ago

        I totally agree with you on it being forced where it doesn’t belong. I’m just not for abolishing it, or whatever was proposed here; I think there is middle ground. It can be useful for some things. In many cases it is being forced on the workforce so that some senior VP can say they made AI happen and get a nice big raise; in others, so they can say it on the earnings call to boost the stock price. But forcing the workforce to do stupid things for those reasons is nothing new.

    • Annoyed_🦀@lemmy.zip · +85/−11 · 24 days ago

      And willingly eat the dick billionaires are shoving in your face? Because that’s what AI is; people are forced to use it because their bosses demand it.

        • [object Object]@lemmy.ca · +54/−4 · 24 days ago

          If they’re so useful, why are they being forced on everyone, including by making them part of performance reviews?

          If they’re useful people will naturally use them.

          • username_1@programming.dev · +5/−72 · 24 days ago

            And people do use them. Naturally.

            What about that “forcing” thing you’re talking about? Look around: corporations force everything on you. Why should this cool new technology be an exception? You’re forced to watch sports events, listen to modern music, wear fashionable clothes, kiss your beloved leader’s ass, hate those evil Cubans or Ukrainians (depending on who your owner is).

        • Nick@lemmy.world · +33/−3 · 24 days ago

          “AI Tools” describes both the product and the people who use them.

          • username_1@programming.dev · +5/−29 · 24 days ago

            They save me a tremendous amount of time when searching for something in documentation, especially when I don’t know whether it’s actually there.

            • lumpenproletariat@quokk.au (banned from community) · +6/−15 · edited · 24 days ago

                Didn’t know ctrl-f could parse natural language and not only rely on knowing the correct keyword. When did it gain that functionality?

              • Lemminary@lemmy.world · +9/−3 · edited · 23 days ago

                The better question is: when did you lose basic keyword-based search skills? I know you may want your answers on a platter, but realize that there’s value in manual searches. Using an LLM to search the page you’re already on is questionable on so many levels.

        • Dumhuvud@programming.dev · +17/−2 · 24 days ago

          not against useful tools

          Nobody’s fighting against you guys. Why do sloperators have to take everything personally? Smh my head.

        • JigglySackles@lemmy.world · +12/−2 · 24 days ago

          If that tool didn’t come with the destructive costs involved, AI would be a lot more palatable.

          • username_1@programming.dev · +3/−34 · 24 days ago

            destructive costs

            YouTube storing shitillions of dickabytes of cat videos “costs” much more while being completely useless. But those are funny cat videos. Hands off those videos. Yes?

            • JigglySackles@lemmy.world · +8/−2 · 24 days ago

              If that’s what it takes to stop the excessive destruction caused by unregulated data center construction and operation, yes.

            • Lemminary@lemmy.world · +4/−1 · 23 days ago

              You may think you’re being clever, but that is hardly a reasonable comparison while also ignoring the glaring corporate irresponsibility underlying both.

            • TwilitSky@lemmy.world · +4/−1 · 24 days ago

              I don’t care what you do but keep your hands off those videos. I need them for things.

            • Catoblepas@piefed.blahaj.zone · +1/−1 · 23 days ago

              If AI is just like a video data center, why did data center energy usage stay stable before AI? Why has data center energy usage doubled since 2010 if videos and AI are equivalent in energy usage?

        • StupidBrotherInLaw@lemmy.world · +7/−9 · 24 days ago

          You’re screaming into the echo chamber, mate. Unless you’re so rabidly anti-AI that you believe and spread one of a few comforting, imaginary narratives, you’ll be dogpiled.

          I’m staunchly critical of AI, but I won’t pretend that it consists only of generative AI, or that it still operates as poorly as it did years ago, nor will I deny that a disturbing percentage of the population either doesn’t care about or actively supports that shit, so I get my share of insults. Being pro-AI won’t get you much civility, so set your expectations low. Unless you’re trolling; then you’ve nailed it.

    • Internetexplorer@lemmy.world · +3/−17 · edited · 23 days ago

      Exactly. The anti movement is weird; AI is exceptional in so many ways. Maybe the ruling class doesn’t want the general population to have access to, and like, something so fundamentally useful, and has created a smear campaign.

      • Catoblepas@piefed.blahaj.zone · +4 · 23 days ago

        Guys trying to shove AI into everything: oh nooo, pleeeaaase don’t use our product, we would haaaate it if you did that!

        Worst conspiracy ever.

        • Internetexplorer@lemmy.world · +1/−1 · 23 days ago

          It’s not the product makers creating the smear campaign; it’s the people who don’t want the general populace to access powerful AI.

          • Catoblepas@piefed.blahaj.zone · +2 · 23 days ago

            So they have the power to run a smear campaign that would alter society’s opinion on AI—which is so very useful and everyone would love it otherwise—but they don’t have the power to influence the building of billions of dollars of infrastructure to power these data centers?

            The people pushing this are the people that benefit from the general populace being dumb as shit. That’s why they’re pushing it. AI doesn’t make anyone more informed or intelligent, it’s an echo chamber.

            • Internetexplorer@lemmy.world · +1 · 22 days ago

              Yes, they have the power to smear, it’s not that hard.

              They can’t stop them building. Why would being able to run a smear campaign mean they can’t build data centers? That doesn’t make any sense.

              The anti AI groups can smear, I don’t see what that has to do with AI companies building data centers.

              AI gives informed, data-sourced answers. It’s better than just reading Facebook comments.

              It increases intelligence because it shows the best way to do something, and everyone can learn from that.

              It’s not an echo chamber, it provides answers for almost everything.