Maybe that’s the point: people want to play Morrowind but they don’t have a platform that can actually play it
TORFdot0
LLMs can’t be self-aware because they can’t be self-reflective. They can’t stop a lie once they’ve started one. They can’t say “I don’t know” unless that’s the most likely response in their training data for a specific prompt. That’s why they crash out if you ask about a seahorse emoji. There is no reason or mind behind the generated text, despite how convincing it can be
I recently got an Xbox One S to play my 360 games via remote play, but $10 a month for Xbox Live is ridiculous
Watching those hack frauds causes psychological harm. Rich Evans is known to trigger depression
I mean, Steam adds a convenient way to keep your games up to date instead of having to manually patch them. I was also on the anti-Steam bandwagon for the longest time until I finally gave in and decided to buy Modern Warfare 2 in 2010. I ended up repurchasing the rest of the Call of Duty games because it was so convenient not needing the discs and not having to locate patches.
Steam is the one launcher I don’t get pissed about having to use because it has so many value-add features.
Unlike Epic/Origin/Uplay.
I’d take BBC/PBS over Fox News any day.
Usually when you think of reform, you imagine things getting better, not things getting worse…
Not sure that HDMI to component would be any better than just going straight component if your PVM doesn’t accept HDMI. Even if it does accept HDMI, it would probably be better to do straight component anyway. I would defer to your research as I’m not an expert on PVMs.
But personally I would probably just play PS3 on an LCD rather than a CRT. Most games, especially in the last half of the generation, were designed to be played on an LCD, and it’s the easiest way to connect and get good picture quality.
You are right but you can’t exactly publish something and expect it to be private
The protocol is ActivityPub not ActivityPriv
I want to preface my response by saying that I appreciate the thought and care put into your comments, even though I don’t agree with them. Yours as well as the others’.
The difference between a human hallucination and an AI hallucination is pretty stark. A human’s hallucinations are false information perceived by one’s senses, seeing or hearing things that aren’t there. An AI hallucination is false information invented by the AI itself. It had good information in its training data but invents something that is misinformation at best and an outright lie at worst. A person who is experiencing hallucinations or a manic episode can lose their sense of self-awareness temporarily, but it returns with a normal mental state.
On the topic of self-awareness, we have tests we use to determine it in animals, such as being able to recognize oneself in a mirror. Only a few animals pass that test, such as some birds, apes, and mammals like orcas and elephants. Notably, very small children would not pass the test, but they eventually grow into recognizing that their reflection is them and not another being.
I think the test about the seahorse emoji went over your head. The point isn’t that the LLM can’t experience it, it’s that there is no seahorse emoji. The LLM knows there isn’t a seahorse emoji and can’t reproduce it, but it tries over and over again because its training data points to there being one when there isn’t. It fundamentally can’t learn, can’t self-reflect on its experiences. Even with the expanded context window, once it starts a lie, it may admit that the information was false, but 9 times out of 10 when called out on a hallucination, it will just generate another slightly different lie. In my anecdotal experience at least, once an LLM starts lying, the conversation is no longer useful.
You reference reasoning models, and they do a better job of avoiding hallucinations by breaking prompts down into smaller problems and allowing the LLM to “check its work” before revealing the response to the end user. That’s not the same as thinking, in my opinion; it’s just more complex prompting. It’s not a single intelligence pondering the prompt, it’s different parts of the model tackling the prompt in different ways before being piped to the full model for a generative reply. A different approach, but at the end of the day it’s just an unthinking pile of silicon and various metals running a computer program.
I do like your analogy of the 7-year-old compared to the LLM. The main distinction I find is that the 7-year-old will grow and learn from its experience; an LLM can’t. Its “experience”, through prompt history, can give it additional information to apply to the current prompt, but it’s not really learning so much as just more tokens to help it generate a specific response. LLMs react to prompts according to their programming; emergent and novel responses come from unexpected inputs, not from the model learning or otherwise not following its programming.
I apologize that I probably didn’t fully address or rebut everything in your post; it was just too good a post to be able to succinctly address it all on a mobile app. Thanks for sharing your perspective.