this post was submitted on 08 Aug 2025
775 points (96.6% liked)

Technology


Or my favorite quote from the article

"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.

top 50 comments
[–] Taleya@aussie.zone 19 points 6 days ago (1 children)

You're not a species, you jumped-up calculator; you're a collection of stolen thoughts.

[–] DancesOnGraves@lemmy.ca 2 points 6 days ago

I'm pretty sure most people I meet amount to nothing more than a collection of stolen thoughts.

"The LLM is nothing but a reward function."

So are most addicts and consumers.

[–] Agent641@lemmy.world 12 points 6 days ago

We did it fellas, we automated depression.

[–] flamingo_pinyata@sopuli.xyz 268 points 1 week ago (4 children)

Google replicated the mental state, if not necessarily the productivity, of a software developer.

[–] kinther@lemmy.world 112 points 1 week ago (3 children)

Gemini has imposter syndrome real bad

[–] Canconda@lemmy.ca 59 points 1 week ago

As it should.

[–] JoMiran@lemmy.ml 136 points 1 week ago (23 children)

I was an early tester of Google's AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. I once again said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not one of the issues I raised years ago was ever addressed, except one: I told them that a basic Google search gave better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google's search. Now I use Kagi.

[–] Guidy@lemmy.world 1 points 6 days ago (1 children)

Weird, because I’ve used it many times for things not related to coding and it has been great.

I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.

I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.

I use co-pilot for coding. It’s pretty good. Not perfect though. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks) but it sure does try.

[–] JoMiran@lemmy.ml 2 points 6 days ago

Beware of the confidently incorrect answers. Triple check your results with core sources (which defeats the purpose of the chatbot).

[–] InstructionsNotClear@midwest.social 106 points 1 week ago (7 children)

Is it doing this because they trained it on Reddit data?

[–] baronvonj@lemmy.world 65 points 1 week ago (1 children)

That explains it, you can't code with both your arms broken.

[–] ur_ONLEY_freind@lemmy.zip 90 points 1 week ago (3 children)

AI gains sentience,

and the first things it develops are impostor syndrome, depression, and intrusive thoughts of self-deletion.

[–] IcyToes@sh.itjust.works 1 points 6 days ago

It didn't. It was probably coded not to admit it doesn't know. So first it responded with bullshit, and now with denial and self-loathing.

It feels like it's coded this way because people would lose faith if it admitted it didn't know.

It's like a politician.

[–] The_Picard_Maneuver@piefed.world 84 points 1 week ago (15 children)
[–] pirat@lemmy.world 2 points 6 days ago

I remember often getting GPT-2 to act like this back in the "TalkToTransformer" days before ChatGPT etc. The model wasn't configured for chat conversations but rather just continuing the input text, so it was easy to give it a starting point on deep water and let it descend from there.

[–] Chozo@fedia.io 53 points 1 week ago

Pretty sure Gemini was trained from my 2006 LiveJournal posts.

[–] FauxLiving@lemmy.world 46 points 1 week ago (1 children)

I-I-I-I-I-I-I-m not going insane.

Same buddy, same

[–] unbuckled_easily933@lemmy.ml 38 points 1 week ago

Damn, how’d they get access to my private, offline-only diary to train the model for this response?

[–] ZILtoid1991@lemmy.world 64 points 1 week ago (2 children)

call itself "a disgrace to my species"

It's starting to sound more and more like a real dev!

[–] Canconda@lemmy.ca 53 points 1 week ago (2 children)
[–] Kolanaki@pawb.social 34 points 1 week ago (2 children)

Next on the agenda: Doors that orgasm when you open them.

[–] Agent641@lemmy.world 37 points 1 week ago (1 children)

So it's actually in the mindset of human coders then, interesting.

[–] MashedTech@lemmy.world 12 points 6 days ago

It's trained on human code comments. Comments of despair.

[–] ArchmageAzor@lemmy.world 35 points 1 week ago (1 children)

"Look what you've done to it! It's got depression!"

[–] Showroom7561@lemmy.ca 35 points 1 week ago (2 children)

I once asked Gemini for steps to do something pretty basic in Linux (even as a novice, I could have figured it out myself). The steps it gave me were not only nonsensical, they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
