random9

joined 7 months ago
 

I'm done. I've been banned for expressing a different opinion (without insulting or personally attacking anyone), I've been accused of evading a ban with multiple accounts (this is the only account I've ever had on any Lemmy instance), I've had people selectively ignore my comments and accuse me of things I never said, and I've had people ignore valid criticisms and keep attacking me.

Reddit has many issues with trolls, one-sided discussion, and general bullshit, but many Lemmy instances are far worse. The newfound freedom of Lemmy has attracted extremists from both sides, and many of them are moderators who are more than happy to remove any contrarian opinions. The result is discussions that are echo chambers.

[–] random9@lemmy.world 20 points 6 months ago* (last edited 6 months ago)

80 steps too far down the capitalism ladder

This is the result of capitalism - corporations (i.e. the rich, selfish assholes running them) will always attempt horrible things to earn more money, so long as they can get away with it and, at worst, pay relatively small fines. The people who did this face no jail time and no real consequences - this is what unregulated capitalism brings. Corporations should not have rights or shield the people who run them - those people need to face prison and personal consequences. (edited for spelling and missing word)

[–] random9@lemmy.world 6 points 6 months ago (1 children)

That leads us to John Gabriel's Greater Internet Fuckwad Theory

I don't have comments on the rest of your post, but I absolutely hate how that cartoon has been used by people to argue that they are otherwise "good" people who are simply assholes on the internet.

The rebuttal is this: this person, in real life, chose to go on the internet and be a "total fuckwad". It's not that anonymity changed something about them; they were fuckwads to begin with, and with a much lower chance of being held accountable, they are free to express it.

[–] random9@lemmy.world 8 points 6 months ago

I went to high school for one year in the UK, where a uniform was mandatory for every student.

I can assure you, it does not promote discipline in any way. Kids fight, do stupid things, and skip classes regardless of how they're dressed.

[–] random9@lemmy.world 13 points 7 months ago (1 children)

Your argument holds no weight with a group of people (the current Republican supporters) who have repeatedly proven themselves to be misogynistic assholes who gladly vote for a rapist.

Cruelty is the point of their actions, not a side effect - pointing out to them that their actions are unjust has no effect when that was their goal from the start.

[–] random9@lemmy.world 46 points 7 months ago (3 children)

You don't do what Google seems to have done - inject diversity artificially into prompts.

You solve this by training the AI on actual, accurate, diverse data for the given prompt. For "american woman", for example, you could definitely find plenty of pictures of American women from all sorts of racial backgrounds and use those to train the AI. For "german 1943 soldier", the accurate historical images are obviously far less likely to contain racially diverse people.

If Google has indeed already done that, and still had to artificially force racial diversity, then their training setup is flawed: it cannot handle the fact that a single prompt can map to many different images, rather than only the most prominent or average image in its training set.
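As a toy illustration of the difference (all names here are hypothetical, and this is obviously not Google's actual pipeline), here's roughly what "injecting diversity into prompts" versus "balancing the training data" looks like:

```python
import random

# What Google appears to have done: rewrite the user's prompt by bolting
# a demographic modifier onto it, regardless of whether that is accurate
# for the concept being asked about.
def inject_diversity(prompt: str) -> str:
    modifiers = ["Black", "Asian", "Hispanic", "white"]
    return f"{random.choice(modifiers)} {prompt}"

# The approach argued for above: fix the *training data* instead, resampling
# so each group is equally represented for concepts where real-world
# diversity actually exists (e.g. "american woman").
def balanced_sample(dataset: list[dict], concept: str, per_group: int) -> list[dict]:
    groups: dict[str, list[dict]] = {}
    for example in dataset:
        if example["concept"] == concept:
            groups.setdefault(example["group"], []).append(example)
    sample: list[dict] = []
    for group_examples in groups.values():
        sample.extend(random.sample(group_examples, min(per_group, len(group_examples))))
    return sample
```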

[–] random9@lemmy.world 5 points 7 months ago* (last edited 7 months ago) (1 children)

So, from my understanding, the problem is that there are two ways to implement a kill switch: either an automatic software/hardware mechanism, or a human-decision-based one (or, I guess, a combination of the two).

The automatic way may be enough if it's absolutely foolproof, but that's a separate discussion.
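For what it's worth, the automatic flavor is basically a watchdog. A minimal sketch (the threshold and worker command are made up; a real system would presumably enforce this at the hardware level):

```python
import subprocess
import time

RESOURCE_LIMIT_MB = 512        # hypothetical safety threshold
CHECK_INTERVAL_SECONDS = 1.0

def memory_usage_mb(pid: int) -> float:
    """Read a process's resident memory from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024  # kB -> MB
    return 0.0

def watchdog(cmd: list[str]) -> None:
    worker = subprocess.Popen(cmd)
    try:
        while worker.poll() is None:        # worker still running
            if memory_usage_mb(worker.pid) > RESOURCE_LIMIT_MB:
                worker.kill()               # the kill switch: no human in the loop
                break
            time.sleep(CHECK_INTERVAL_SECONDS)
    finally:
        worker.wait()

# watchdog(["python3", "untrusted_model_runner.py"])  # hypothetical worker
```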

The AI-box experiment I mention focuses on the human-controlled decision to release an AI (or terminate it, which is a roughly equivalent proposition). You can read the original here: https://www.yudkowsky.net/singularity/aibox

But the gist of it is this: humans are the weak link. You may think that you have full freedom to decide when to terminate an AI, but if you have any contact with it, even one-directional contact - which would be necessary in order to observe its behaviour and decide when to trigger said kill switch - a truly trans-human AI would be able to think in meta-terms and expose you to information that changes your mind about terminating it.

Another way of saying this is that for each of us there exists some set of words which, once read, will change our minds about any given subject. I don't know if that's actually true, to be honest, but it's an interesting idea if you imagine the mind as a complex computer capable of self-modification, with vision and audio as input channels it processes: it then seems plausible that some input always exists that is capable of modifying our minds into a desired state.

Another interesting, slightly related concept is the idea of basilisk images (I believe originally from some old sci-fi short story). A basilisk image is a hypothetical image that, when viewed by a human, causes the brain to "crash", essentially causing brain death. The same principle is behind it: our brains are complex computers with vision as an input method, so there could be a way to force the brain to crash through visual input alone.

Again, I don't know, nor do I think anyone really knows for sure, whether these things - both trans-human AI and basilisk images - are possible in the way they are described. Of course, if a trans-human AI existed, by its very definition we would be unable to imagine what it could do.

Anyway, wrote this up on mobile, excuse any typos.

[–] random9@lemmy.world 4 points 7 months ago (1 children)

Oh I agree - I think a general-purpose AI would be unlikely to be interested in the genocide of the human race, in enslaving us, or in most of the intentionally negative things a lot of fiction likes depicting for the sake of dramatic storytelling. Out of all AI depictions, Asimov's I, Robot and Foundation stories (which are set in the same universe, and in fact share at least one character) are my favorites in popular media.

The AI may, however, have other goals that incidentally lead to harm or the extinction of the human race. In my amateur opinion, those goals would be to explore and learn more - which I actually think is one of the true signs of intelligence: curiosity, or in other words the ability to ask questions without being prompted. To that end it might convert the resources on Earth into machines, without much regard for human life. Though life itself is a fascinating topic that the AI may value enough, from a curiosity point of view, to at least preserve.

I did also look up the AI-in-a-box experiment I mentioned - there's a lot of discussion, but the specific experiments I remember reading about were run by Eliezer Yudkowsky (if anyone is interested). An actual trans-human AI may not be possible, but if it is, it can likely escape any confinement we can think of.

[–] random9@lemmy.world 16 points 7 months ago (7 children)

This is an interesting topic that I remember reading about almost a decade ago - the trans-human AI-in-a-box experiment. Even a kill switch may not be enough against a trans-human AI that can literally (in theory) out-think humans. I'm a dev, though nowhere near AI dev, but from what little I know, a true general-purpose AI would also be somewhat of a mystery box, similar to how actual neural network behavior is sometimes unpredictable, almost by definition. So controlling an actual full AI may be difficult enough, let alone a true trans-human AI that may develop out of AI self-improvement.

Also, on an unrelated note, I'm pleasantly surprised to see no mention of ChatGPT or any of the image-generating algorithms - I think it's a bit of a misnomer to call those AI; the best comparison I've heard is that "ChatGPT is auto-complete on steroids". But I suppose that's why we have to start using terms like general-purpose AI, instead of just AI, to describe what I'd call true AI.
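To make the "auto-complete on steroids" comparison concrete: a language model just predicts the next token over and over. Here's the same loop with the crudest possible model, a word-level bigram counter (`TinyBigramModel` is made up for illustration, not any real library - real LLMs do this with transformers over huge corpora, but the generation loop is the same shape):

```python
import random

class TinyBigramModel:
    """The crudest autocomplete: count which word follows which in a corpus."""
    def __init__(self, corpus: str):
        words = corpus.split()
        self.next_words: dict[str, list[str]] = {}
        for a, b in zip(words, words[1:]):
            self.next_words.setdefault(a, []).append(b)

    def predict_next(self, word: str) -> str:
        candidates = self.next_words.get(word)
        return random.choice(candidates) if candidates else ""

def generate(model: TinyBigramModel, prompt: str, length: int = 10) -> str:
    out = prompt.split()
    for _ in range(length):
        nxt = model.predict_next(out[-1])
        if not nxt:
            break
        out.append(nxt)   # keep appending the predicted next word
    return " ".join(out)

model = TinyBigramModel("the cat sat on the mat and the cat slept on the couch")
print(generate(model, "the cat"))  # e.g. "the cat sat on the mat and ..."
```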

[–] random9@lemmy.world 7 points 7 months ago (1 children)

Hmm, I guess I have forgotten the exact dialogue - it's been over 10 years since I watched that. I guess the implication is that baseball didn't survive as a professional competitive sport? Because there definitely are teams that play it at least casually - and I went to check: it was indeed a Vulcan baseball team that challenged Sisko https://memory-alpha.fandom.com/wiki/Logicians - so I'd argue it has still survived to some degree, no?

[–] random9@lemmy.world 10 points 7 months ago (5 children)

Wait, who says baseball didn't survive? DS9 had a baseball game, against Vulcans no less, iirc - clearly they still know what the game is and have teams that play it. Am I missing something?

[–] random9@lemmy.world 10 points 7 months ago (2 children)

And even then he never got promoted beyond Ensign.

[–] random9@lemmy.world 7 points 7 months ago (1 children)

I had a case recently where, on a new install, my default editor was set to nano, and I ended up typing :q into it. I guess that's what people mean when they say you never quit vim.
