this post was submitted on 25 Jan 2026
476 points (97.4% liked)

Programmer Humor

28883 readers
1829 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

top 50 comments
[–] ZILtoid1991@lemmy.world 8 points 1 day ago

Five Nights at Altman's

[–] kamen@lemmy.world 3 points 1 day ago

If software were your kid.

Credit: Scribbly G

[–] jwt@programming.dev 5 points 1 day ago

Reminds me of that "have you ever had a dream" kid.

[–] DylanMc6@lemmy.dbzer0.com 2 points 1 day ago

The AI touched that lava lamp

[–] stsquad@lemmy.ml 120 points 3 days ago (4 children)

If you have ever read the "thought" process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I'm not even sure this isn't by design.

[–] dream_weasel@sh.itjust.works 4 points 1 day ago

This kind of stuff happens on any model you train from scratch, even before training for multi-step reasoning. It seems to happen more when there's not enough data in the training set, but it's not an intentional add. Getting output length right is a whole problem of its own.

[–] swiftywizard@discuss.tchncs.de 79 points 3 days ago (1 children)

I dunno, let's waste some water

[–] gtr@programming.dev 7 points 3 days ago (1 children)

They are trying to get rid of us by wasting our resources.

[–] MajorasTerribleFate@lemmy.zip 13 points 3 days ago

So, it's Nestlé behind things again.

[–] SubArcticTundra@lemmy.ml 20 points 3 days ago (1 children)

I'm pretty sure training is purely result-oriented, so anything that works goes.

[–] Feathercrown@lemmy.world 9 points 3 days ago (2 children)

Why would it be by design? What does that even mean in this context?

[–] MotoAsh@piefed.social 5 points 2 days ago (2 children)

You have to pay for tokens on many of the "AI" tools that you do not run on your own computer.

[–] Feathercrown@lemmy.world 7 points 2 days ago* (last edited 2 days ago) (4 children)

Hmm, interesting theory. However:

  1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.

  2. LLMs are running at a loss right now; the company would lose more money than it gains from you, so there is no motive.

[–] jerkface@lemmy.ca 3 points 2 days ago (1 children)

It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).

[–] MotoAsh@piefed.social 1 points 1 day ago* (last edited 1 day ago)

No, it wasn't a virtue signal, you fucking dingdongs.

Capitalism is rife with undercooked products, because getting a product out there starts the income flowing sooner. They don't have to be making a profit for a revenue stream to make sense. Some money is better than no money. Get it?

Fuck, it's like all you idiots can do is project your lack of understanding on others...

[–] piccolo@sh.itjust.works 1 points 2 days ago (1 children)

Don't they charge by input tokens? E.g. your prompt, not the output.

[–] MotoAsh@piefed.social 4 points 2 days ago* (last edited 2 days ago) (1 children)

I think many of them do, but there are also many "AI" tools that will automatically add a ton of stuff to try and make it spit out more intelligent responses, or even re-prompt the tool multiple times to try and make sure it's not handing back hallucinations.

It really adds up in their attempt to make fancy autocomplete seem "intelligent".

[–] piccolo@sh.itjust.works 1 points 2 days ago (1 children)

Yes, reasoning models... but I don't think they would charge for that... that would be insane, but AI executives are insane, so who the fuck knows.

[–] MotoAsh@piefed.social 1 points 1 day ago* (last edited 1 day ago)

Not the models. AI tools that integrate with the models. The "AI" would be akin to the backend of the tool. If you're using Claude as the backend, the tool would be asking Claude more questions and repeated questions via the API. As in, more input.
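
Roughly, the pattern being described looks like the sketch below: every verification retry re-sends the entire growing conversation, so the input token count climbs even though the user only typed one prompt. This is a hypothetical illustration; the call_model stub and the retry logic are made up, not any specific product's code.

```python
import random

def call_model(messages):
    """Hypothetical backend call: returns (reply_text, looks_suspect)."""
    # Stand-in for a real API; here we just fake a reply and a coin-flip check.
    approx_input_tokens = sum(len(m["content"].split()) for m in messages)
    print(f"API call with ~{approx_input_tokens} input tokens")
    return "some answer", random.random() < 0.5

def answer(user_prompt, max_retries=3):
    messages = [
        {"role": "system", "content": "Long hidden instructions, tool descriptions, etc."},
        {"role": "user", "content": user_prompt},
    ]
    reply = ""
    for _ in range(max_retries):
        # Every retry re-sends the WHOLE conversation so far, so input tokens
        # pile up even though the user only asked one question.
        reply, suspect = call_model(messages)
        if not suspect:
            return reply
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Double-check that and fix any mistakes."})
    return reply

answer("Why is my build failing?")
```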

[–] Darohan@lemmy.zip 78 points 3 days ago
[–] ChaoticNeutralCzech@feddit.org 20 points 3 days ago* (last edited 3 days ago)

Nah, too cold. It stopped moving, and the computer can't generate any more random numbers to pick from the LLM's weighted suggestions. Similarly, some LLMs have a setting called "temperature": too cold and the output is repetitive, unimaginative, and overly copies the input (like sentences written by accepting the first autocomplete suggestion every time); too hot and it is chaos: 98% nonsense, 1% repeat of the input, 1% something useful.
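
A minimal sketch of what that temperature knob does, with a made-up vocabulary and made-up scores: the raw scores are divided by the temperature before being turned into sampling weights, so a low temperature makes the top pick dominate (repetitive output) and a high temperature flattens the distribution (chaos).

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick one token from temperature-scaled softmax probabilities."""
    # Low temperature sharpens the distribution (repetitive, "safe" picks);
    # high temperature flattens it (mostly chaotic picks).
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(tokens, weights=weights, k=1)[0]

# Made-up scores for the next token after "lava":
logits = {"lamp": 4.0, "flow": 2.5, "or": 1.0, "banana": -1.0}
print(sample_next_token(logits, temperature=0.2))  # almost always "lamp"
print(sample_next_token(logits, temperature=2.0))  # noticeably more varied
```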

[–] Kyrgizion@lemmy.world 52 points 3 days ago

Attack of the logic gates.

[–] ideonek@piefed.social 35 points 3 days ago (6 children)
[–] FishFace@piefed.social 107 points 3 days ago (3 children)

LLMs work by picking the next word* as the most likely candidate given their training and the context. Sometimes the model gets into a situation where its view of the "context" effectively doesn't change when the word is picked, so the next word is just the same one. Then the same thing happens again and around we go. There are fail-safe mechanisms that try to prevent it, but they don't work perfectly.

*Token
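
A toy sketch of that failure mode, assuming a purely greedy decoder and a crude repetition penalty as the fail-safe; fake_logits is a stand-in for a real model, not anything Gemini actually does.

```python
def fake_logits(context):
    """Stand-in for a real model: once 'or' appears, it keeps favouring 'or'."""
    if context and context[-1] == "or":
        return {"or": 5.0, "less": 1.0, ".": 0.5}
    return {"more": 2.0, "or": 3.0, "less": 1.0}

def generate(max_tokens=10, repetition_penalty=1.0):
    context = ["more"]
    for _ in range(max_tokens):
        logits = dict(fake_logits(context))
        # Fail-safe: penalise each token once for every time it already appears.
        for tok in logits:
            logits[tok] /= repetition_penalty ** context.count(tok)
        context.append(max(logits, key=logits.get))  # greedy pick
    return " ".join(context)

print(generate())                        # "more or or or or ..." (stuck)
print(generate(repetition_penalty=2.0))  # the loop breaks, but the text is still junk
```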

[–] ideonek@piefed.social 20 points 3 days ago (22 children)

That was the answer I was looking for. So it's similar to the "seahorse" emoji case, but this time, at some point, it just glitched so that the most likely next word for the sentence is "or", and after adding that "or" the next most likely word is also "or", and after adding the next one it's also "or", and after the 11th one... you might just as well commit, since that's the same context as with 10.

Thanks!

[–] bunchberry@lemmy.world 1 points 1 day ago (1 children)

This happened to me a lot when I tried to run big models with small context windows. It would effectively run out of memory, so each new token wouldn't actually be added to the context, and it would just get stuck in an infinite loop repeating the previous token. It's possible that there was a memory issue on Google's end.

[–] FishFace@piefed.social 1 points 1 day ago (1 children)

There is something wrong if it's not discarding old context to make room for new

[–] bunchberry@lemmy.world 1 points 1 day ago (1 children)

At least llama.cpp doesn't seem to do that by default. If it overruns the context window it just blorps.

[–] FishFace@piefed.social 1 points 17 hours ago

I think there are parameters for that, from googling.
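
For anyone curious, the "discard old context" idea being discussed is usually a sliding window over the token history. A generic sketch of that, not llama.cpp's actual implementation (its flags and defaults vary by version), with an arbitrarily tiny window:

```python
CONTEXT_WINDOW = 8  # arbitrary; real models use thousands of tokens

def trim_context(tokens, window=CONTEXT_WINDOW, keep_prefix=2):
    """Keep the first keep_prefix tokens (e.g. the system prompt) plus the most
    recent ones, dropping the middle so the total fits inside the window."""
    if len(tokens) <= window:
        return tokens
    return tokens[:keep_prefix] + tokens[-(window - keep_prefix):]

history = ["<sys>", "You are helpful.", "t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8"]
print(trim_context(history))
# ['<sys>', 'You are helpful.', 't3', 't4', 't5', 't6', 't7', 't8']
```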

[–] ch00f@lemmy.world 56 points 3 days ago (1 children)

Gemini evolved into a seal.

[–] ech@lemmy.ca 27 points 3 days ago (2 children)

It's like the text predictor on your phone. If you just keep hitting the next suggested word, you'll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.

[–] Arghblarg@lemmy.ca 31 points 3 days ago

The LLM showed its true nature: a probabilistic bullshit generator that got caught in a strange attractor of some sort within its own matrix of lies.

[–] palordrolap@fedia.io 18 points 3 days ago (1 children)

Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an "A, B and/or C" structure tend to sound more punchy, knowledgeable and authoritative.

Yes, I did do that on purpose.

[–] Cevilia@lemmy.blahaj.zone 11 points 3 days ago (1 children)

Not only that, but also "not only, but also" constructions, which sound more emphatic, conclusive, and relatable.

[–] kogasa@programming.dev 17 points 3 days ago

Turned into a sea lion

[–] RVGamer06@sh.itjust.works 7 points 3 days ago

O cholera, czy to Freddy Fazbear? ("Oh crap, is that Freddy Fazbear?")
