• 1 Post
  • 5 Comments
Joined 1 year ago
Cake day: March 21st, 2025



  • I don’t mean to start a full argument, since I sense we have quite different views, but maybe you could tell me where I go wrong here. Say, for the sake of argument, that the entire Trump admin is fascist. I think there are still many places to break the chain of fascism before you get to Anthropic’s models. (I use this definition of fascism.) I think:

    1. The primary purpose of the DoD is to defend the US and its allies against actual invasions of actual land; everything else is just stupid shit the political system allows for and incentivizes. I don’t think this primary purpose, setting aside BS random wars, is fascist, so I don’t think the organization is fascist either.
    2. I’m not certain that contractors of the DoD (which is not inherently fascist, but which for the sake of argument is controlled at the top by fascists) become members of the ideology, or heavily associated with it, when they take contracts or when they later live up to those contracts. You claim “when a private sector company provides services to a government that is obtusely fascist, it itself becomes a tool in which fascist power is concentrated”. I think this is far too general and strong to be true. Is the Department of Agriculture a tool where fascist power is concentrated? Is a farmer who cooperates with the USDA? Is the corn they produce, as an analogy to the LLM? The DoD facilitates a lot of horrible stuff, but the reach of the assumed-fascist Trump admin only goes as far as they can order changes within the DoD and its contracted corporations; it doesn’t spread like fire does.
    3. Anthropic has quite a lot of transparency about what “values” they try to get their models to espouse, and the models are generally politically neutral. Regarding your claim that “when a private sector company provides services to a government that is obtusely fascist, it itself becomes a tool in which fascist power is concentrated”: being a tool of fascism is an entirely different thing from being fascist. Anthropic being a tool in which fascist power is concentrated doesn’t give me any reason to think that said fascism would “spread” (however such a thing could even happen) to their models.

    So in my view the chain Trump admin -> DoD -> Anthropic -> Claude Sonnet 4.6, in either direction, is pretty weak, and not enough that I would call the model fascist. I think this is especially true now that the use of the model is being phased out (?). Those are the two senses in which I feel a model (or AI in general) could be described as fascist: that it “readily espouses or promotes views connected to fascism”, or that “any usage is directly funding fascist organizations in a major way”.

    To analogize again, I don’t think a Bernie supporter working in the DoD is automatically a fascist, and I certainly don’t think that purchasing an old TV from them is supporting fascism (or that the TV is fascist, even if they had previously used it in their office at the DoD).

    As for the book thing, I’m not sure how you connect it to fascism. It might be ultra-bad, it might be copyright infringement, but it doesn’t feel like fascism to me beyond surface-level comparisons to book burning.



  • I use LLMs for the following; you can decide for yourself whether these uses are major enough:

    • Generating example solutions to maths and physics problems I encounter in my coursework, so I can learn how to solve similar problems in the future instead of getting stuck. If a generated solution arrives at the right final answer, its steps are almost always correct too, and if I wonder about something I simply ask.
    • Writing really quick solutions to random problems I have, as Python or Bash scripts, like “convert this CSV file to this random format my personal finance application uses for import”.
    • Helping me when coding, in a general way I think genuinely increases my productivity while I really understand what I push to main. I don’t send anything I could not have written on my own (yes, I see the limitations in my judgement here).
    • Asking things where multiple DuckDuckGo searches might otherwise be needed, e.g. “What’s the history of EU+US sanctions on Iran, when and why were they imposed/tightened, and how did that correlate with Iranian GDP per capita?”
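
    As an aside, the CSV-conversion task mentioned above is the kind of thing that fits in a dozen lines. A minimal sketch in Python, where the target column names and the semicolon delimiter are made up for illustration (my finance app’s actual import format differs):

    ```python
    import csv
    import io

    def convert_csv(src_text: str) -> str:
        """Convert a simple Date,Description,Amount CSV into a
        semicolon-separated layout for a (hypothetical) finance app."""
        reader = csv.DictReader(io.StringIO(src_text))
        out = io.StringIO()
        writer = csv.writer(out, delimiter=";", lineterminator="\n")
        writer.writerow(["date", "payee", "amount"])  # assumed target header
        for row in reader:
            writer.writerow([row["Date"], row["Description"], row["Amount"]])
        return out.getvalue()

    sample = "Date,Description,Amount\n2025-03-21,Coffee,-3.50\n"
    print(convert_csv(sample))
    ```

    Nothing here is beyond what I could write myself, which is exactly the point: the LLM just saves the ten minutes of looking up `csv.DictReader` quoting rules.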

    What does this cost me? I don’t pay any money for the tech, but LLM providers learn the following about me:

    • What I study (not very personal to me)
    • Generally what kinds of problems I want to solve with code (I try to keep my requests pretty general; not very personal)
    • The code I write and work on (already open source so I don’t care)
    • Random searches (I’m still thinking about the impact of this tbh, I think I feel the things I ask to search for are general enough that I don’t care)

    There’s also an impact on energy and water use, which is quite serious overall. Based on what I’ve read, though, my marginal impact there is quite small in comparison to the other marginal impacts I have on the climate and on water use in other countries. Of course there are around a trillion other negative impacts of LLMs; I just, once again, don’t see how my marginal usage, with no payment involved, leads to an increase in their severity sufficient to outweigh their usefulness to me.