Running Ubuntu 22.04:
I've tried searching for a solution to this, but there are way too many false positives. I know you can enable/disable the single-instance behavior in two places, but that's not my issue. If I open a video the first time, VLC works fine. If I have both "only use one instance" options off, I can click on other videos and they run too. But if I have any combination of those boxes checked, I can only play the first video, and to play a new video I have to manually close the VLC player.
What I want to happen is to have one player spawn, and have any new video run in that player, replacing the existing video. Is that just not possible? I don't want to have to close the window every time I run something new, and I don't want to have to go back and close a whole bunch of separate VLC instances when I'm done.
Update: I found the solution, and I should have tried it first before posting. Oh well, at least I didn't leave anyone who might find this hanging like a DenverCoder9.
This being Ubuntu, VLC was installed via Snap (I can't recall if I did it or if it's the default). I suppose I should look into the de-Snap process I've seen mentioned before, as I've also had a few non-crash errors since running Ubuntu where Snap was the source.
So the solution was to install the Debian version from the terminal, not the Snap. It works like I expected and wanted, and I can run video after video and it uses the same window.
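For anyone else who lands here, a rough sketch of the reinstall (these are the standard snap/apt commands; double-check the package name and source on your release before running them):

```shell
# Remove the Snap-packaged VLC
sudo snap remove vlc

# Install the Debian-packaged VLC from the Ubuntu repositories
sudo apt update
sudo apt install vlc
```

After that, re-enable "Allow only one instance" in VLC under Tools > Preferences > Interface, and new videos should open in the existing window.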
AGI might use LLM tech in its process, but LLMs by themselves aren't going to become aware. What happened is that LLM tech became a gold mine, some who were doing AGI research jumped on it instead, and others followed. There is certainly still AGI research going on somewhere, but it's buried by the race to... something. The biggest problem I see, outside of the need for profit guiding all this, is that what they are building has become so complex that they don't really understand it fully; they just keep finding ways to tack things on to get to some higher level without knowing why it works (or why it will break).
And while LLMs aren't AGI, they still have the issue of misalignment, even without self-awareness. We saw early on the misdirection used to obtain a goal, and the models now are more sophisticated. Maybe it's not their own goal, but a misunderstood goal that they'll say and do anything to get to.
Good thing we're not putting them in control of important things, or full access to systems, right? Right?