

I’d like to think I taught my child to lie better
People talking about parental controls as if that magically stops the new ways kids get bullied
That’s like living in a house that’s falling apart and saying, “if I just let it fall apart completely then I won’t need to fix it.”
I don’t understand the joke, could you explain?


Let’s say I agree the concept of mind is relative, would you be willing to accept a rock has a mind?
Let me restate the point differently: lowering the bar for what you consider intelligence doesn’t make the AI sound any smarter.


By debasing intelligence you aren’t making the AI sound smarter. But you are making yourself the fool.


“If the west doesn’t build it then China will” is a claim to a timeline. And in the context of governments, most would assume you’re talking about less than 100 years.


Because asking AI to do humans is still uncanny. Pixar and Ghibli were already ruined by AI…


Yes, code harnesses help by providing deterministic feedback, like with a language server, and reduce the amount of prompting required. I guess I should have led with that example 😅


No, it’s not the same as copying and pasting the TODO into a prompt. Embedding the TODO in the code rather than the prompt burns fewer tokens and increases accuracy, because the model observes the TODO in context. Sure, you can write more prompting to provide that context, but it still won’t be as accurate. The less context you provide via prompting, and the more you provide through automatic deterministic feedback, the better the results.
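A minimal sketch of what I mean (the function and codebase here are hypothetical, purely for illustration): the TODO sits next to the code it describes, so an agent reading the file already sees the signature, the types, and the exact expression that needs fixing, none of which has to be restated in a prompt.

```python
# Hypothetical example, not from any real codebase: the TODO is
# embedded where the relevant context already lives. An agent reading
# this file sees the signature, the docstring, and the exact division
# that can fail -- no extra prompt tokens needed to explain any of it.

def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores linearly into the range [0, 1]."""
    # TODO: handle the all-equal case (max == min) without dividing
    # by zero; returning 0.5 for every score would be reasonable.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]
```

The equivalent prompt would have to re-describe the function’s purpose, its input type, and where the edge case lives, and the model could still mis-locate it.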

The Romans enslaved their own (and other modern day “whites”) so racism isn’t required to make the analogy.


Examples to consider:
With TODOs embedded in the code base, the LLM makes fewer mistakes and spends fewer tokens than if you attempt to direct it with prompting alone.
A file system gives an LLM more context than a flat file (or large prompt) with the same contents, because its tree-like structure makes it less likely the LLM will ingest context it doesn’t need and get confused.
Lastly, consider the efficacy of providing tools versus using agent skills, which are another form of prompting. Giving an LLM a deterministic feedback loop beats tweaking your prompts every time.

And there were Roman slaves who enjoyed a higher standard of living relative to the global norm. We have literal white supremacists arguing that chattel slavery was good, actually, because the standard of living blacks experience now is higher than what they would have if we had left them alone.
I don’t think it’s important, just like the legality of “illegal humans” is not important to how people are being mistreated. You’re the one bringing up legality in this context.
But please continue to insult me. In the States we have a saying: a hit dog will holler. Behind your anger is a fear, and behind that fear is something you love. If you keep it up there will be enough pieces to figure out what is motivating you.

I wonder if the Romans told their slaves how modern their society was, that their ancestors would have gladly sold their land for a bath house.


The OSX laptops were for security; you couldn’t connect to their VPN without one. They were also a way to monitor your usage.


Person who sold NFTs is serious about AI. Next it will be quantum


“Properly prompting” is to not prompt. A chat interface is the lowest fidelity interface to use with an LLM.


When I contracted with them, we ran our computation loads on a Linux server and deployed our service to an internal Linux server. The only OSX I touched was my laptop, and that was a work requirement they insisted on.
Should have set the effort level to “high”