As long as it is roasted in salt and olive oil, the whole thing is good
It’s the same thing as people who are concerned about AI generating non-consensual sexual imagery.
Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn’t do it convincingly, and there were well-defined precedents that doing so broke the law. Now Grok can do it for anyone who can type a prompt, and cops won’t do anything about it.
So yes, anyone could have technically done it before, but this removes the barriers that prevented every angry, crazy person with a keyboard from being able to cause significant harm.
He’s not telling you to be terrified of a single bot writing a blog post. He’s telling you to be terrified of the blog post being ingested by other bots and then treated as a source of truth, resulting in AI recruiters automatically rejecting his resume for job postings, or other agents deciding to harass him for the same reason.
Edit: I do agree with you that he was a little generous in how he speaks about its capabilities. The fact that they are incompetent and still seen as a source of truth by so many is what alarms me.
My Gen Alpha children will probably lead happy and fulfilling lives. Sure, they don’t have a privileged childhood, but they have food security and two loving parents, which is the best predictor of future adult happiness that I know of.
Discord isn’t getting my ID. Not that I am in any servers that would be affected. But it’s concerning that if Discord wanted to chill our community without hard-banning it, they could just make us age-restricted.
Depends on who you ask I guess. 
Floating point numbers. Floating point integer is an oxymoron 🤓
I hate it because manipulating or misleading people is an unethical practice, and all modern advertising uses dark patterns to try to get you to overconsume.
The ADHD distress is just another side effect of all that
Funny because I consider Flock a terrorist organization
I want to preface my response by saying that I appreciate the thought and care put into your comments, even though I don’t agree with them. Yours as well as the others’.
The difference between a human hallucination and an AI hallucination is pretty stark. A human’s hallucinations are false information perceived by one’s senses: seeing or hearing things that aren’t there. An AI hallucination is false information invented by the AI itself. It has good information in its training data but invents something that is misinformation at best and an outright lie at worst. A person who is experiencing hallucinations or a manic episode can lose their sense of self-awareness temporarily, but it returns with a normal mental state.
On the topic of self-awareness, we have tests we use to determine it in animals, such as being able to recognize oneself in a mirror. Only a few animals pass that test: some birds, apes, and mammals such as orcas and elephants. Notably, very small children would not pass the test, but they eventually grow into recognizing that their reflection is them and not another being.
I think the test about the seahorse emoji went over your head. The point isn’t that the LLM can’t experience it, it’s that there is no seahorse emoji. The LLM knows there isn’t a seahorse emoji and can’t reproduce it, but it tries over and over again because its training data points to there being one when there isn’t. It fundamentally can’t learn, can’t self-reflect on its experiences. Even with the expanded context window, once it starts a lie it may admit that the information was false, but 9 times out of 10, when called out on a hallucination, it will just generate another slightly different lie. In my anecdotal experience at least, once an LLM starts lying, the conversation is no longer useful.
You reference reasoning models, and they do a better job of avoiding hallucinations by breaking prompts down into smaller problems and allowing the LLM to “check its work” before revealing the response to the end user. That’s not the same as thinking, in my opinion; it’s just more complex prompting. It’s not a single intelligence pondering the prompt, it’s different parts of the model tackling the prompt in different ways before being piped to the full model for a generative reply. A different approach, but at the end of the day it’s just an unthinking pile of silicon and various metals running a computer program.
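Something like this toy sketch is roughly all I mean by “more complex prompting” (the llm() helper here is a hypothetical stand-in, not any real vendor’s API):

```python
# Toy sketch of a "reasoning" pipeline as layered prompting.
# llm() is a hypothetical placeholder for whatever model call you'd actually use.

def llm(prompt: str) -> str:
    """Placeholder for a single generative model call."""
    raise NotImplementedError

def answer_with_reasoning(question: str) -> str:
    # Break the prompt down into smaller problems
    plan = llm(f"Break this question into smaller steps:\n{question}")

    # Work through the steps, then let the model "check its work"
    draft = llm(f"Question: {question}\nSteps:\n{plan}\nWork through each step.")
    critique = llm(f"Check this draft for errors or invented facts:\n{draft}")

    # Only the revised answer is revealed to the end user
    return llm(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Write the final answer, fixing any problems the critique found."
    )
```

It’s still just generating text in response to text at every step; the “checking” is another generation, not a separate kind of cognition.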
I do like your analogy of the 7-year-old compared to the LLM. I find the main distinction being that the 7-year-old will grow and learn from its experience; an LLM can’t. Its “experience”, through prompt history, can give it additional information to apply to the current prompt, but it’s not really learning so much as just more tokens to help it generate a specific response. LLMs react to prompts according to their programming; emergent and novel responses come from unexpected inputs, not from learning or otherwise departing from that programming.
I apologize that I probably didn’t fully address or rebut everything in your post; it was just too good a post to succinctly address it all from a mobile app. Thanks for sharing your perspective.
Maybe that’s the point: people want to play Morrowind but they don’t have a platform that can actually play it.
Does this mean that if I pretend to be a bot, I can access any Cloudflare site ad-free?