  • I think it’s because people are no longer in a mood to laugh at anything Trump does. We already know that nothing he could do would shame him or his base, so Trump shitting himself on live TV feels more like a sick joke at our expense, meant to humiliate us.




  • Compare it to free speech. Saying you don’t need privacy because you have nothing to hide is like saying you don’t need free speech because you have nothing to say. Eventually, through no fault of your own, there will come a time when you have something worth saying or hiding, and you will regret having surrendered your right to do so.

    Another way to put it: I need privacy not because my own judgment and intentions are shady, but because the authorities’ judgment and intentions are, or one day will be. Allowing the authorities to invade your privacy and suppress your speech diminishes your ability to hold them accountable.



  • I’ve looked through the whole thread again, and I don’t see where you’re getting the idea that anyone is accusing tankies of being sellouts. My best guess is that you misinterpreted the comment immediately above yours as saying tankies are secretly supporting the current fascist regime. Is that it?

    That’s not what they’re saying. They meant that tankies (I would clarify that it’s the chronically online tankies who are like this) want other people to fight the revolution for them, and won’t lift a single finger themselves until they can be sure that victory is inevitable. This is because they see themselves as the vanguard that tells everyone else what to do and how to do it, and that will be put in charge after the revolution. That’s why people call them red fascists (though I don’t like that term myself, since conflating them with actual fascists hinders understanding): they want to be in the fascists’ place so they can use the systems of power and control the fascists built towards a different end (changing the economic system).

    Someone I talked to on lemmy.ml not long ago illustrated this mindset well, saying that authoritarianism is only a buzzword made up by the west to demonize its enemies, that it’s just people exercising power, and that it’s good when communists do it. Here’s what I see wrong with this: the tools of a fascist state are purpose-built for oppression, and trying to use them for anything else is futile; you will be corrupted by their power. We should not be trying to take and use these tools, but dismantling them and creating our own, purpose-built for liberation.



  • He lived in a very large clay jar, which was actually not that uncommon in the Greek cities of his day. Almost everyone in those cities owned at least one such jar, so homeless people would live in them in much the same way homeless people today might live in their cars or a tent. The reason it’s significant that Diogenes lived in one is that he did so by choice, as he had the wealth and social status to live quite comfortably if he wanted to.


  • I do understand how that works, and it’s not in the weights; it’s entirely in the context. ChatGPT can easily answer that question because the answer exists in the training data; it just doesn’t, because there are instructions in the system prompt telling it not to. That can be bypassed by changing the context through prompt injection. The biases you’re talking about are not the same biases that are baked into the model.

    Remember how people would ask Grok questions and be shocked at how “woke” it was at the same time that it was saying Nazi shit? That’s because the system prompt contains instructions like “don’t shy away from being politically incorrect” (that is literally a line from Grok’s system prompt), and that shifts the model into a context in which Nazi shit is more likely to be said. Changing the context changes the model’s bias, because it didn’t just learn one bias, it learned all of them. Whatever your biases are, talk to it enough and it will pick up on them, shifting the context to one where responses that confirm your biases are more likely, as the sketch below illustrates.
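
    A minimal sketch of that point, assuming the OpenAI Python client with a placeholder model name (the question and both prompts are made up for illustration, not anyone’s real setup): the weights are identical across both calls; only the system prompt, i.e. the context, changes.

```python
# Same weights, two different system prompts: only the context changes.
# Assumes the OpenAI Python client; model name, prompts, and question
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Summarize the debate around climate policy."

def ask(system_prompt: str) -> str:
    """Send the same user question under a different system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# The model is not retrained between these calls; any shift in tone or
# framing comes entirely from the context the system prompt creates.
print(ask("You are a careful, even-handed assistant."))
print(ask("Don't shy away from being politically incorrect."))
```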


  • It’s difficult to conceive the AI manually making this up for no reason, and doing it so consistently for multiple accounts when asked the same question.

    If you understand how LLMs work, it’s not difficult to conceive. These models are probabilistic and context-driven, and they pick up biases from their training data (which is nearly the entire internet). They learn patterns that exist in the training data, identify identical or similar patterns in the context (prompts and previous responses), and generate a likely completion of those patterns. It is conceivable that a pattern exists on the internet of people requesting information and, more often than not, receiving information that confirms whatever biases are evident in their request. Given that LLMs are known to be excessively sycophantic, it’s not surprising that, when prompted for proof of what the user already suspects to be true, they generate exactly what the user was expecting.
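
    To make “probabilistic and context-driven” concrete, here is a toy sketch using GPT-2 through Hugging Face transformers (the prompts are made up for illustration; a small base model only hints at an effect that instruction-tuned chat models amplify). The same subject framed two different ways produces different distributions over the likely next tokens.

```python
# Toy demonstration: next-token probabilities depend on the context.
# Uses GPT-2 via Hugging Face transformers; prompts are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5) -> list[tuple[str, float]]:
    """Return the k most likely next tokens after the given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    values, indices = torch.topk(probs, k)
    return [(tokenizer.decode(i), round(float(v), 4)) for i, v in zip(indices, values)]

# The same subject framed two ways: the framing alone shifts which
# continuations the model considers likely, i.e. which "bias" it completes.
print(top_next_tokens("Everyone knows the new policy is obviously"))
print(top_next_tokens("Independent reviews found the new policy was"))
```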