• audaxdreik@pawb.social · 3 days ago

    I don’t know why I expected a Zitron-esque lambasting from fortune.com, but reading the article is disappointing:

    But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

    Sure. Let’s blame anything but the AI 🙄

    • mindbleach@sh.itjust.works · 3 days ago

      I mean, the cult of MBAs expecting miracles from the hot new thing is a pattern we’ve seen before. The functionality of LLMs does not match Sam Altman’s fantasies - but it does function. People are getting use out of this tech. But they’re vastly outnumbered by some mixture of optimistic experimenters and trend-chasing dipshits.

  • gpowerf@lemmy.ml · 2 days ago

    AI has all the signs of another dot-com bubble. The hype is outpacing reality, with investors and MBAs convinced that slapping “AI” on anything will make it better, or, even worse, replace people altogether. The truth is, AI is a tool, not magic. The bubble will burst, yet the underlying technology will endure and keep reshaping how we work.

  • ThirdConsul@lemmy.ml · 3 days ago

    They seem to be focusing very much on NOT using LLMs yourself, but on buying a SaaS offering that provides an LLM instead.

    95% are failing, and:

    Companies surveyed were often hesitant to share failure rates,

    Oh. So it’s at least 95%?

    Edit:

    Source is the MIT NANDA project - isn’t that a university project? I can’t access it; can anyone share? I’m curious about the methodology and data set.

    • Zerush@lemmy.ml · 3 days ago

      Here you can access some material, but the article in question is for members only; it’s a document stored in Google Docs:

      https://nanda.media.mit.edu/

      You can find other pages with this article, but all of them point to Fortune as the source.

      • ThirdConsul@lemmy.ml · 2 days ago

        Let me clarify what I wrote: I can’t access the Google-stored document because they only send it to approved people. Do you have access to it, and can you share it?

        • Zerush@lemmy.ml · 2 days ago

          Well, I have a paleolithic Google account:

          • I don’t believe it’s enough to access the documents
          • I don’t know if it’s still valid, assuming I even remember the password
          • I’d rather try to suck my own elbow than reactivate that account for this

          I hope you understand me

  • Zerush@lemmy.ml · 3 days ago

    Wrong title. What fails isn’t the AI, but bad management of the AI, driven purely by the company interests of people who don’t have any technical skills (apart from the poor reliability of some of the AIs used).

    Summary by Andi

    A new MIT report reveals that 95% of enterprise generative AI pilot programs are failing to deliver meaningful financial impact in 2025, with only 5% achieving rapid revenue acceleration[1].

    The research, titled “The GenAI Divide: State of AI in Business 2025” from MIT’s NANDA initiative, analyzed 150 leadership interviews, surveyed 350 employees, and examined 300 public AI deployments. The core problem isn’t the AI models themselves, but rather a “learning gap” between tools and organizations[1:1].

    “Some large companies’ pilots and younger startups are really excelling with generative AI,” said Aditya Challapally, the report’s lead author. “Startups led by 19- or 20-year-olds have seen revenues jump from zero to $20 million in a year. It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools”[1:2].

    Key findings include:

    • Vendor-purchased AI tools succeed 67% of the time, while internal builds succeed only one-third as often[1:3]
    • Over half of AI budgets go to sales and marketing tools, yet the highest ROI comes from back-office automation[1:4]
    • Companies are reducing headcount through attrition rather than layoffs, particularly in customer support and administrative roles[1:5]
    • “Shadow AI” - unauthorized use of tools like ChatGPT - is widespread in enterprises[1:6]

    The report identifies several critical success factors:

    • Empowering line managers, not just central AI labs, to drive adoption
    • Selecting tools that can integrate deeply with existing workflows
    • Focusing on specialized vendor solutions rather than building in-house
    • Targeting back-office automation for highest returns[1:7]

    Looking ahead, pioneering organizations are testing agentic AI systems that can learn, remember, and operate independently within defined parameters, pointing to the next evolution in enterprise AI[1:8].


    1. Fortune - MIT report: 95% of generative AI pilots at companies are failing