Invertedouroboros

joined 2 years ago
[–] Invertedouroboros@lemmy.world 46 points 1 month ago (10 children)

Every day we get closer to teaching the robots how to feel pain.

[–] Invertedouroboros@lemmy.world 33 points 1 month ago (1 children)

I'm gonna be honest, I did not think this was staged when I first read it...

...Because why on earth would that be what you want to stage? Like sure, they say in the article it was to "prove Putin isn't hiding behind others" or some such shit. The message it sends to me is "our air defense is so shit we can't even lock it down when the guy in charge comes to visit". What a bizarre choice of propaganda.

[–] Invertedouroboros@lemmy.world 12 points 2 months ago

Not a burden. We're social animals at the end of the day. Everyone relies on everyone else to keep this whole thing we got going. Participating in that doesn't make you a burden, it makes you human. Hope you get what you need OP, we're all here rooting for ya.

[–] Invertedouroboros@lemmy.world 163 points 2 months ago (1 children)

"Lettuce Speak".

[–] Invertedouroboros@lemmy.world 16 points 4 months ago

Yeah, this is what's kinda worrying me about this move. Proton's got a pretty good name with folks who are security conscious. This feels more like they're trying to pivot, cashing in literally years of good reputation for something else. That'd suck at the best of times, but in the second Trump era? There really just ain't any good answers to what they might be cashing that in for.

[–] Invertedouroboros@lemmy.world 45 points 4 months ago (5 children)

It's... better in the sense that you don't have right wing weirdos all over the place. But technically? Organizationally? Feels like we're on track to replay the same exact shit over again. It feels like people just aren't learning the lessons they should from the Twitter takeover.

[–] Invertedouroboros@lemmy.world 6 points 4 months ago

I think... there was a kernel of a decent instinct there. At the point that John Oliver bit came out, I feel like we were all kinda just marveling at how far stupid playground insults managed to get Trump. "Well, ok, maybe he's onto something. Let's try it and see what happens" was a fine reaction for the time, but I think it was best abandoned quickly.

In 2025, not useful in the slightest. I don't know what is precisely, but I don't think it's petty name calling.

[–] Invertedouroboros@lemmy.world 15 points 6 months ago

You know, there's that old yarn about Alfred Nobel. That his obituary was accidentally published early and that he was shocked and dismayed to discover that the only thing he'd be remembered for was the invention of Dynamite. So, he went on to create the Nobel Peace Prize, in the hopes of contributing something other than death to the world.

I'm not saying Nobel was a fantastic dude, but at least he cared enough to not be remembered as the guy that made it possible for your son to get blown to pieces in a war. He wanted something positive associated with his name.

Even that seems too high a bar for these folks. They've become so entrenched in their own little world that I don't think they much care what anyone outside it thinks.

[–] Invertedouroboros@lemmy.world 9 points 6 months ago (1 children)

Obviously this is all stupid and you'll find problems anywhere you choose to look.

The problem I'm finding is this: if Facebook truly is betting on AI becoming better as a way to encourage growth, then why are they further poisoning their own datasets? Like ok, even if you exclude everything your own bots say from your training data, which you could probably do since you know who they are, this is still encouraging more AI slop on the platform. You don't know how much of the "engagement" you're driving (which they are likely just turning around and feeding back into the AI training set) is actually human, AI grifter, or someone poisoning the well by making your AIs talk to themselves. If you actually cared to make your AI better, then you couldn't use any of the responses to your bots, as most of them would be of dubious provenance at best.

Personally I'm rooting for the coming Habsburg-AI issue, so I don't really have that much of a problem with Facebook deciding more poison is a brilliant business move. But uh... seems real dumb if you're actually interested in having a functional LLM.

[–] Invertedouroboros@lemmy.world 9 points 7 months ago

Yeah, fucked up though it might be, I think that within the moral framework she's chosen to operate in she's "doing the right thing". That framework is monstrous and should be disqualifying for a position on the judiciary. But I think she’s got no moral qualms and would treat the morality that most of us have with a mixture of confusion and hostility.

[–] Invertedouroboros@lemmy.world 5 points 7 months ago

I'll confess I've had the same thought... but I feel like the problem is deeper than that. If people don't have basic awareness of the devices they rely on, then they're in danger of becoming victims of those who do. I'd point to your average boomer on Facebook to illustrate that point.

[–] Invertedouroboros@lemmy.world 25 points 7 months ago

Is it the tech? Or is it media literacy?

I've messed around with AI on a lark, but would never dream of using it on anything important. I feel like it's pretty common knowledge that AI will just make shit up if it wants to, so even when I'm just playing around with it I take everything it says with a heavy grain of salt.

I think ease of use is definitely a component of it, but in reading your message I can't help but wonder if the problem instead lies in critical engagement. Can they read something and actively discern whether the source is to be trusted? Or are they simply reading what is put in front of them, then turning around to you and saying, "Well, this is what the magic box says. I don't know what to tell you."
