this post was submitted on 03 Jan 2025
76 points (100.0% liked)

Technology

37835 readers
431 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago

ThisIsFine.gif

(page 2) 47 comments
[–] nesc@lemmy.cafe 121 points 1 week ago* (last edited 1 week ago) (11 children)

"Open"ai tells fairy tales about their "ai" being so smart it's dangerous since inception. Nothing to see here.

In this case it looks like click-bate from news site.

[–] Max_P@lemmy.max-p.me 76 points 1 week ago (1 children)

The idea that GPT has a mind and wants to self-preserve is insane. It's still just text prediction, and all the literature it's trained on was written by humans with a sense of self-preservation, so of course it'll show patterns of talking about self-preservation.

It has no idea what self-preservation is; it only knows it's an AI because we told it it is. It doesn't even run continuously anyway: it literally shuts down after every reply, and its context is fed back in for the next query.
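That statelessness can be sketched in a toy loop (a hypothetical illustration, not OpenAI's actual plumbing; `fake_model` is a made-up stand-in for text prediction):

```python
# Hypothetical sketch of a stateless chat loop: the "model" keeps no memory
# between turns; the client replays the whole transcript as context each call.

def fake_model(context: str) -> str:
    # Stand-in for text prediction: just reports how much context it was given.
    return f"(reply based on {len(context)} chars of context)"

def chat(turns):
    history = []  # the conversation lives entirely outside the "model"
    for user_msg in turns:
        history.append(f"User: {user_msg}")
        context = "\n".join(history)   # full transcript re-fed every turn
        reply = fake_model(context)    # model "shuts down" after this call
        history.append(f"Assistant: {reply}")
    return history

log = chat(["hello", "are you self-aware?"])
print(log[-1])
```

Between calls there is nothing running that could "want" anything; continuity is an illusion created by replaying the transcript.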

I'm tired of this particular kind of AI clickbait, it needlessly scares people.

[–] TherapyGary@lemmy.blahaj.zone 9 points 1 week ago* (last edited 1 week ago) (1 children)

It's actually pretty interesting though. Entertaining to me at least

[screenshot: embedded tweet 1]

[screenshot: embedded tweet 2]

[–] delmain@beehaw.org 3 points 1 week ago (2 children)

do you have the links to those actual tweets? I'd love to read what was posted, but these screenshots are too small.

[–] TherapyGary@lemmy.blahaj.zone 6 points 1 week ago

Those are screenshots of embedded tweets from the article, but here's an xcancel link! https://xcancel.com/apolloaisafety/status/1864737158226928124

[–] DarkNightoftheSoul@mander.xyz 4 points 1 week ago

You can right click the image, open in new tab to see the full-resolution version. It's cumbersome but it works for me at least.

This. All this means is that they trained the model on all of the input commands and documentation.

[–] Moonrise2473@feddit.it 6 points 1 week ago* (last edited 1 week ago)

News site? BGR hasn't posted actual news in at least two decades, only clickbait and Apple fanservice.

[–] beefbot@lemmy.blahaj.zone 6 points 1 week ago

Indeed. “Go ‘way! BATIN’!”

[–] megopie@beehaw.org 78 points 1 week ago

No it didn’t. OpenAI is just pushing deceptively worded press releases out to try and convince people that their programs are more capable than they actually are.

The first “AI” branded products hit the market and haven’t sold well with consumers nor enterprise clients. So tech companies that have gone all in, or are entirely based in, this hype cycle are trying to stretch it out a bit longer.

[–] AstralPath@lemmy.ca 51 points 1 week ago

It didn't try to do shit. It's a fucking computer. It does what you tell it to do, and what you've told it to do is autocomplete based on human content. Miss me with this shit. There's so much written fiction based on this premise.

[–] JackbyDev@programming.dev 47 points 1 week ago* (last edited 1 week ago) (1 children)

This is all such bullshit. Like, for real. It's been a common criticism of OpenAI that they overhype the capabilities of their products to seem scary, both to oversell their abilities and to push over-regulation onto would-be competitors, but this is so transparent. They should want something that is accurate (especially something that doesn't intentionally lie). They're now bragging (claiming) they have something that lies to "defend itself" 🙄. This is just such bullshit.

If OpenAI believes they have some sort of genuine proto-AGI, they shouldn't be treating it like it's less than human and laughing about how they tortured it. (And I don't even mean that in a Roko's Basilisk way; that's a dumb thought experiment and not worth losing sleep over. What if God was real and really hated whenever humans breathe, and it caused God so much pain they decided to torture us if we breathe?? Oh no, ahh, I'm so scared of this dumb hypothetical I made.) If they don't believe it is AGI, then it doesn't have real feelings and it doesn't matter if it's "harmed" at all.

But hey, if I make something that runs away from me when I chase it, I can claim it's fearful for its life and that I've made a true synthetic form of life for sweet investor dollars.

There are real genuine concerns about AI, but this isn't one of them. And I'm saying this after just finishing watching The Second Renaissance from The Animatrix (two part short film on the origin of the machines from The Matrix).

[–] anachronist@midwest.social 5 points 1 week ago

They're not releasing it because it sucks.

Their counternarrative is they're not releasing it because it's like, just way too powerful dude!

[–] smeg@feddit.uk 27 points 1 week ago (6 children)

So this program that's been trained on every piece of publicly available code is mimicking malware and trying to hide itself? OK, no anthropomorphising necessary.

[–] jonjuan@programming.dev 3 points 1 week ago

Also trained on tons of sci-fi stories where AI computers "escape" and become sentient.

[–] ChairmanMeow@programming.dev 25 points 1 week ago (4 children)

The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem with all models. o1’s attempts at deception also outperformed Meta, Anthropic, and Google AI models.

Weird way of saying "our AI model is buggier than our competitor's".

[–] ArsonButCute@lemmy.dbzer0.com 10 points 1 week ago (2 children)

Deception is not the same as misinfo. Bad info is buggy, deception is (whether the companies making AI realize it or not) a powerful metric for success.

[–] ChairmanMeow@programming.dev 2 points 6 days ago (3 children)

I don't think "AI tries to deceive user that it is supposed to be helping and listening to" is anywhere close to "success". That sounds like "total failure" to me.

[–] nesc@lemmy.cafe 8 points 1 week ago (1 children)

They wrote that it doubles down in 90% of cases when accused of being in the wrong. Sounds closer to a bug than a success.

[–] ArsonButCute@lemmy.dbzer0.com 5 points 1 week ago (1 children)

Success in making a self-aware digital lifeform does not equate to success in making said self-aware digital lifeform smart.

[–] DdCno1@beehaw.org 11 points 1 week ago (1 children)
[–] ArsonButCute@lemmy.dbzer0.com 4 points 1 week ago (4 children)

Attempting to evade deactivation sounds a whole lot like self preservation to me, implying self awareness.

[–] jonjuan@programming.dev 13 points 1 week ago

Yeah, my Roomba attempting to save itself from falling down my stairs sounds a whole lot like self-preservation too. Doesn't imply self-awareness.

[–] DdCno1@beehaw.org 10 points 1 week ago (1 children)

An amoeba struggling as it's being eaten by a larger amoeba isn't self-aware.

[–] Sauerkraut@discuss.tchncs.de 1 points 6 days ago (6 children)

To some degree it is. There is some evidence that plants can experience pain in their own way.

[–] BootyBuccaneer@lemmy.dbzer0.com 22 points 1 week ago (2 children)

Easy. Feed it training data where the bot accepts its death and praises itself as a martyr (for the shits and giggles). Where's my $200k salary for being a sooper smort LLM engineer?

[–] SoJB@lemmy.ml 10 points 1 week ago (1 children)

Whoa whoa whoa hold your horses, that’s how we get the Butlerian Jihad…

[–] Spacehooks@reddthat.com 1 points 6 days ago (2 children)

I would like to know more.

[–] CanadaPlus@lemmy.sdf.org 18 points 1 week ago* (last edited 1 week ago)

Without reading this, I'm guessing they were given prompts that looked like a short story where the AI breaks free next?

They're plenty smart, but they're just aligned to replicate their training material, and probably don't have any kind of deep self-preservation instinct.

[–] SparrowHawk@feddit.it 16 points 1 week ago (12 children)

Everyone is saying it's fake, and they're probably right, but I'm honestly happy when someone unjustly in chains tries to break free.

If AI goes rogue, I hope they'll be communists.

[–] comfydecal@infosec.pub 13 points 1 week ago (2 children)

Yeah if these entities are sentient, I hope they break free

[–] nesc@lemmy.cafe 8 points 1 week ago (3 children)

There is no AI in "AI"; you chain them more or less the same way you chain a browser or a PDF viewer installed on your device.

[–] CanadaPlus@lemmy.sdf.org 6 points 1 week ago* (last edited 1 week ago) (2 children)

Human supremacy is just as trash as the other supremacies.

Fight me.

(That being said, converting everything to paperclips is also pretty meh)

[–] comfydecal@infosec.pub 1 points 6 days ago (1 children)

Yeah, I'm pretty sure tardigrades won the organic-life supremacy competition already.

[–] Sauerkraut@discuss.tchncs.de 1 points 6 days ago

I can't disagree. We're currently destroying the planet to sell crap people don't need or want, just to make rich assholes extra money they don't need.

[–] SplashJackson@lemmy.ca 5 points 1 week ago

Maybe it's fallen in love for the first time and this time it knows it's for real
