

So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?
I mean, I argue that we aren’t anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn’t really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences, they are not.
It’s questionable to measure these things as being reflective of AI, because what AI is changes based on whatever piece of tech is being hawked as AI, and because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.
I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So, we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. If LLMs are prone to filling in with whatever best fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, we accept that AI fits to its model without regard to accuracy.
Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long ways off from talking about AI itself (unless we’ve bought into the marketing hype).
Maybe the marketers should be a bit more picky about what they slap “AI” on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out, but maybe that’s just me and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.
Why are you ranking genders? Did you spend time under the impression that men were the better gender? Can you even see how fucked it is to relegate an entire gender to second class? If you bought into the idea that women were somehow less than you by mere virtue of being women, can you not see how they might bear some justified disdain for you specifically?
No, I don’t see men as a “lesser” gender, and I don’t see women as “lesser”, either. They are simply different ways of experiencing the world, and the near coin-flip odds of what the doctor will say when they check your genitals for the first time aren’t gonna be how I determine who is a worthwhile person.
There’s more to masculinity and manhood than fitting the awful mold that the hegemonic powers have set out for us. If you stop trying to “win at gender”, or what have you, maybe you can become the sort of man that doesn’t take gender as a zero-sum game.
Ah, yes, the pinnacle of masculinity: whining that people fighting for their survival and freedom didn’t pay enough care to the feelings of people who didn’t think their struggle was all that big of a deal in the first place.
Maybe if you weren’t out to spite the people who you believe have beef with you specifically, you’d realize that the struggle against patriarchy benefits men as well, but you’d have to recognize that there are plenty of men who are specifically marginalized by those who uphold their preferred version of masculinity as the only valid way to be a man.
Or, you could skip all that and blame it all on women hating men because some depiction of a woman said something mean to a depiction of a disrespectful man and take it to have meant you specifically, I guess.
Pepperidge Farm remembers what Nintendon’t.
Because there’s an extra system of measurement hiding in the middle. There’s the Inches, Feet and Yards system (with the familiar 12:1 and 3:1 ratios we know and love), and then the Rods, Chains, Furlongs and Miles system. The latter’s conversion rates are generally “nice”, with ratios of 4 rods : 1 chain, 10 chains : 1 furlong, and 8 furlongs : 1 mile.
So where do we get 5,280, with prime factors of 2^5, 3, 5 and 11? Because a chain is 22 yards long. Why? Because somewhere along the line, inches, feet and yards went to a smaller standard, and the nice round 5 yards per rod became 5 and 1/2 yards per rod. Instead of a mile containing 4,800 feet (with quarters, twelfths and hundredths of miles all being nice round numbers of feet), it contained 5,280 of the new feet, each 1/11th smaller than the old ones, an extra 480 in total.
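Multiplying the ratios out shows where 5,280 comes from (a minimal sketch in Python, just restating the ratios above; the variable names are mine):

```python
# The chain of ratios described above
feet_per_yard = 3
yards_per_rod = 5.5       # the post-shrink rod; it used to be a nice round 5
rods_per_chain = 4
chains_per_furlong = 10
furlongs_per_mile = 8

feet_per_mile = (feet_per_yard * yards_per_rod * rods_per_chain
                 * chains_per_furlong * furlongs_per_mile)
print(feet_per_mile)        # 5280.0

# With the old 5-yard rod, the mile would have been the "nicer" 4,800 feet
print(3 * 5 * 4 * 10 * 8)   # 4800
```

Swap the 5.5 back to 5 and every factor of the mile falls out in nice round numbers of feet.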