We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

    • Auli@lemmy.ca · 3 days ago

      And the “adding missing information” part? Isn’t that just saying “we are going to make shit up”?

      • MagicShel@lemmy.zip · 4 days ago

        If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.

  • Hossenfeffer@feddit.uk · 3 days ago

    He’s been frustrated for years by the fact that he can’t make Wikipedia ‘tell the truth’. This will be his attempt to replace it.

    • wrinkledoo@sh.itjust.works · 3 days ago

      There are thousands of backups of Wikipedia, and you can download the entire thing legally, for free.

      He’ll never be rid of it.

      Wikipedia may even outlive humanity, ever so slightly.

      • sthetic@lemmy.ca · 3 days ago

        Seconds after the last human being dies, the Wikipedia page is updated to read:

        Humans (Homo sapiens) or modern humans were the most common and widespread species of primate

  • Crikeste@lemm.ee · 3 days ago

    So they’re just going to fill it with Hitler’s world view, got it.

    Typical and expected.

  • dalekcaan@lemm.ee · 4 days ago

    adding missing information and deleting errors

    Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”

  • Naevermix@lemmy.world · 3 days ago

    Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be resolved through logic, as Mush the dweeb imagines.

    • biocoder.ronin@lemmy.ml · 3 days ago

      Uh, just a thought. Please pardon, I’m not an Elon shill, I just think your argument’s phrasing is off.

      How would you know there are holes in understanding without logic? How would you remedy gaps of understanding in human knowledge without applying logic to check whether things are consistent?

      • andros_rex@lemmy.world · 3 days ago

        You have to have data to apply your logic to.

        If it is raining, the sidewalk is wet. Does that mean if the sidewalk is wet, that it is raining?

        There are domains of human knowledge that we will never have data on. There’s no logical way for me to 100% determine what was in Abraham Lincoln’s pockets on the day he was shot.

        When you read real academic texts, you’ll notice that there is always the “this suggests that,” “we can speculate that,” etc. The real world is not straight math and binary logic. The closest fields to that might be physics, and chemistry to a lesser extent, but even then, theoretical physics must be backed by experimentation and data.
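
        (The sidewalk example as code, in a minimal sketch with two toy booleans: brute-forcing the truth table shows that “rain implies wet” holding does not make “wet implies rain” true.)

```python
# Toy truth-table check (hypothetical propositions, nothing from the thread):
# "rain -> wet" can hold while "wet -> rain" fails, e.g. a sprinkler ran.
from itertools import product

def implies(p, q):
    return (not p) or q

for rain, wet in product([False, True], repeat=2):
    if implies(rain, wet) and wet and not rain:
        print(f"rain={rain}, wet={wet}: rain->wet holds, but wet->rain does not")
```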

        • biocoder.ronin@lemmy.ml · 3 days ago

          Thanks, I’ve never heard of data. And I’ve never read an academic text either. Condescending pos.

          So, while I’m ironing out your logic for you, “what else would you rely on, if not logic, to prove or disprove and ascertain knowledge about gaps?”

          • andros_rex@lemmy.world · 3 days ago

            You asked a question, I gave an answer. I’m not sure where you get “condescending” there. I was assuming you had read an academic text, so I was hoping that you might have seen those patterns before.

            You would look at the data for gaps, as my answer explained. You could use logic to predict some gaps, but not all gaps would be predictable. Mendeleev was able to use logic and patterns in the periodic table to predict the existence of germanium and other elements, which data later confirmed, but you could not logically derive the existence of protons, electrons, and neutrons without the later experiments of, say, J. J. Thomson and Rutherford.

            You can’t just feed the sum of human knowledge into a computer and expect it to know everything. You can’t predict “unknown unknowns” with logic.

  • finitebanjo@lemmy.world · 4 days ago

    “If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”

    ~Fucking Dumbass
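
    (To spell out the arithmetic being mocked: under a toy, assumed model where each pass independently preserves any given fact with probability 0.84, accuracies multiply rather than add.)

```python
# Toy sketch, not anyone's actual training math: if a model gets a fact right
# with probability 0.84, and a second model learns only from that output with
# the same per-fact reliability, correctness compounds multiplicatively.
p = 0.84
print(p * p)  # 0.7056 -- worse than either model, nowhere near 1.68
```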

  • JustAPenguin@lemmy.world · 3 days ago

    The thing that annoys me most is that there have been studies done on LLMs showing that, when they are trained on their own output, they produce increasingly noisy output.

    Whatever nonsense Muskrat is spewing, it is factually incorrect. He won’t be able to successfully retrain any model on generated content. At least, not an LLM if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
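
    (The effect those studies describe, often called “model collapse”, is easy to sketch with a toy stand-in: each “generation” below learns only from samples of the previous generation’s output, and the diversity of the original corpus steadily drains away. This is only an illustration of the general mechanism, not a claim about Grok’s actual pipeline.)

```python
# Toy illustration of recursive training on generated data ("model collapse").
# The "model" here is just a corpus we resample from; each generation is
# trained only on the previous generation's output.
import random

random.seed(0)
corpus = list(range(1000))  # 1000 distinct "facts" in the original data

for gen in range(8):
    print(f"gen {gen}: {len(set(corpus))} distinct facts remain")
    corpus = random.choices(corpus, k=len(corpus))  # sample with replacement
```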

    • brucethemoose@lemmy.world · 3 days ago

      It’s not so simple; there are papers on zero-data “self-play” and other schemes for using other LLMs’ output.

      Distillation is probably the only one you’d want for a pretrain, specifically.
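
      (For anyone unfamiliar: “distillation” here usually means training the student to match the teacher’s output distribution rather than on its sampled text. A minimal sketch of the standard soft-label loss, using PyTorch with made-up tensor shapes; none of this is from xAI.)

```python
# Minimal knowledge-distillation sketch (assumed setup, not xAI's pipeline):
# the student is trained to match the teacher's softened token distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # soften both distributions, then penalize divergence from the teacher
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_p = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * temperature ** 2

# toy shapes: a batch of 4 positions over a 10-token vocabulary
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow to the student only
```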

  • maxfield@pf.z.org · 4 days ago

    The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.

        • MajinBlayze@lemmy.world · 4 days ago

          Try rereading the whole tweet; it’s not very long. It specifically says they plan to “correct” the dataset using Grok, then retrain with that dataset.

          It would be way too expensive to go through it by hand.

        • zqps@sh.itjust.works · 4 days ago

          Yes.

          He wants to prompt Grok to rewrite history according to his worldview, then retrain the model on that output.

    • WizardofFrobozz@lemmy.ca · 3 days ago

      To be fair, your brain is a pattern-matching system.

      When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word; your brain’s predictive processing takes over and you often literally speak before you think.

      Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.

    • zildjiandrummer1@lemmy.world · 4 days ago

      Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.

      Modern AI is more than just “pattern matching” at this point. Yes, at the lowest levels that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.

      • queermunist she/her@lemmy.ml · 4 days ago

        Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history, what the fuck?!

        • zildjiandrummer1@lemmy.world · 3 days ago

          That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.

          What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems, and we’re at the point where simply scaling up to insane levels has yielded results that no one expected, but it was the lowest-hanging fruit at the time. Few-shot learning -> novel-space generalization is very hard, so the easiest method was just to take what was currently done and make it bigger (a la ResNet back in the day).

          Lemmy is almost as bad as reddit when it comes to hiveminds.

          • queermunist she/her@lemmy.ml · 2 days ago

            You literally called it borderline magic.

            Don’t do that? They’re pattern-recognition engines; they can produce some neat results, are good for niche tasks, and are interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.