Today, we present AlphaProof, a new reinforcement-learning based system for formal math reasoning, and AlphaGeometry 2, an improved version of our geometry-solving system. Together, these systems solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving the same level as a silver medalist in the competition for the first time.
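For readers unfamiliar with "formal" math reasoning: the proofs here are written and machine-checked in the Lean proof assistant, which AlphaProof works in. As a toy illustration of what such a statement-plus-proof looks like (nothing to do with the actual IMO problems; assumes Mathlib is available), consider:

```lean
-- A toy formal statement and machine-checked proof in Lean 4 (with Mathlib):
-- the sum of two even natural numbers is even.
import Mathlib

theorem sum_of_evens_is_even {a b : ℕ} (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  obtain ⟨x, hx⟩ := ha    -- ha gives a = x + x
  obtain ⟨y, hy⟩ := hb    -- hb gives b = y + y
  exact ⟨x + y, by omega⟩ -- a + b = (x + y) + (x + y), closed by arithmetic
```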

  • hendrik@palaver.p3x.de · 9 points · 2 months ago

    Hehe, name-dropping “AGI” in the very first paragraph and then going ahead with an AI that is super-tailored to narrow tasks like formal math proofs and geometry…

      • hendrik@palaver.p3x.de · 3 points · edited · 2 months ago

        Sure. But none of this is about that. And I somehow doubt that’ll be the path towards AGI anyway. Does any combination of narrow abilities become general at some point? Is the sum more than its parts? I think so. Especially with intelligence.
        And MoE comes with the issue that it can’t really apply knowledge from one domain to another, at least if you separate the subjects. Whereas I, as a generally intelligent being, can apply my math skills to engineering, coding, and a plethora of everyday tasks. So I’m not sure MoE helps with that.
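        Roughly what I mean by separate experts, as a toy top-1 routing sketch (illustrative only, not any real model's implementation, certainly not AlphaProof's): a gate picks one expert MLP per input, so each expert mostly sees "its" slice of the data.

        ```python
        import torch
        import torch.nn as nn

        class TinyMoE(nn.Module):
            """Toy mixture-of-experts layer with top-1 routing."""
            def __init__(self, dim: int = 64, num_experts: int = 4):
                super().__init__()
                self.gate = nn.Linear(dim, num_experts)  # router: one score per expert
                self.experts = nn.ModuleList([
                    nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
                    for _ in range(num_experts)
                ])

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                scores = self.gate(x)           # (batch, num_experts)
                top1 = scores.argmax(dim=-1)    # chosen expert index per input
                out = torch.zeros_like(x)
                for i, expert in enumerate(self.experts):
                    mask = top1 == i
                    if mask.any():
                        out[mask] = expert(x[mask])  # only the chosen expert runs
                return out

        moe = TinyMoE()
        print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
        ```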

  • mrroman@lemmy.world · 3 points · 2 months ago

    I wonder if they’re sure that similar exercises weren’t in the AI’s training set. Such competitions usually have recurring patterns in their problems, and people typically prepare by solving a large number of exercises to pick up on the pattern.

  • morrowind@lemmy.ml (OP) · +6 / −4 · 2 months ago

    I know there’s a strong anti-AI sentiment on Lemmy, but I would advise reading at least the article, if not further details, before denouncing it.