this post was submitted on 08 Apr 2026
593 points (96.5% liked)

Programmer Humor

31037 readers
1674 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

founded 2 years ago
[–] minorkeys@lemmy.world 172 points 1 week ago* (last edited 1 week ago) (18 children)

The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI. It just says the most likely thing based on what is most common in its training data for that scenario. It can't do math or solve problems; it can only tell you what the most likely answer would be. It can't actually perform tasks. It's like Family Feud, where it says what the most people surveyed said.

[–] Clent@lemmy.dbzer0.com 89 points 1 week ago (1 children)

Some of them will "do math", but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What's great is that when it outputs results, it's not clear whether it engaged the math engine or just guessed.

[–] hikaru755@lemmy.world 16 points 1 week ago (2 children)

when it outputs results, it's not clear if it engaged the math engine or just guessed

That depends on the harness though. In the plain model output it will be clear if a tool call happened, and it depends on the application UI around it whether that's directly shown to the user, or if you only see the LLM's final response based on it.
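That split between raw model output and what the UI shows can be sketched in a few lines (a hypothetical message format, loosely modeled on common chat-completion APIs; `render_for_user` and the field names are made up for illustration):

```python
# Hypothetical harness: a raw model turn may contain tool-call records.
# The UI layer, not the model, decides whether the user ever sees them.

def render_for_user(model_turn: dict, show_tool_calls: bool = False) -> str:
    """Return what the chat UI displays for one model turn."""
    lines = []
    for call in model_turn.get("tool_calls", []):
        if show_tool_calls:
            lines.append(f"[used tool: {call['name']}({call['args']})]")
    lines.append(model_turn["content"])
    return "\n".join(lines)

# A turn where the model really did call a calculator tool:
turn = {
    "content": "127 * 43 = 5461",
    "tool_calls": [{"name": "calculator", "args": "127*43"}],
}

print(render_for_user(turn))                        # user just sees the answer
print(render_for_user(turn, show_tool_calls=True))  # harness reveals the call
```

Whether the user learns a tool was used is purely a UI decision; the information is present in the raw turn either way.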

[–] 1D10@lemmy.world 30 points 1 week ago (1 children)

I explain it as asking 100 people to Google something and taking the most common answer.

[–] minorkeys@lemmy.world 13 points 1 week ago (1 children)

Yeah, that's basically exactly what Family Feud does.

[–] 1D10@lemmy.world 23 points 1 week ago (2 children)

Yep but instead of "name something a woman keeps in her purse" it's "write my legal document" or "is it ok to lick a lamp socket"

[–] Ganbat@lemmy.dbzer0.com 50 points 1 week ago (1 children)

Okay, so, in case the headline is confusing anyone else: it's literal. Like, you know those cringe-ass Alexa ads about how it does AI language processing and assistant shit? Yeah, ChatGPT can't, I guess.

[–] paraphrand@lemmy.world 49 points 1 week ago (3 children)

Wow, the only thing Siri is generally competent at.

[–] verdare@piefed.blahaj.zone 9 points 1 week ago

My first thought as well, lol.

[–] MousePotatoDoesStuff@lemmy.world 42 points 1 week ago (2 children)

Even if it could, it would be an order of magnitude less convenient than the stopwatch we already have on our phones.

"Hey ChatGPT, do the thing I could have done in 3-4 clicks on my clock app."

Not to mention the sheer wastefulness in terms of energy. A MINECRAFT REDSTONE MACHINE TIMER WOULD BE MORE EFFICIENT. (Not to mention that, unlike SOTA LLMs, it can run offline on a phone)

[–] FuglyDuck@lemmy.world 17 points 1 week ago (2 children)

Minecraft redstone is Turing-complete, so, like, you can do a whole lot more than just run a timer.

[–] MousePotatoDoesStuff@lemmy.world 8 points 1 week ago (6 children)

Absolutely. I was thinking of getting back into Minecraft redstone, but I'd rather do it in a non-Microsoft alternative. Not to mention the at least a dozen other projects already on my backlog.

[–] Jhex@lemmy.world 7 points 1 week ago (1 children)

You are correct but I think you are missing the point.

Remember, from the perspective of all the AI companies (OpenAI probably more than most), AI is this monster tech that will surely replace all workers, and even your grandma, since it can bake better cookies.

This is yet another display of how lacking AI is at a simple, everyday task... but more importantly, it is a gigantic demonstration of how AI is completely blind to its own weaknesses, which is what makes it really, really dangerous when used as prescribed by OpenAI and the others.

This situation is basically the same as when the brand-new $700 iPhones (back when that was eye-wateringly expensive for a phone) couldn't run an alarm in the mornings, and Apple's answer was something like "why are you using your Cadillac phone as a cheap alarm?"... it should fucking wake me up with a massage for that cost!

[–] robocall@lemmy.world 40 points 1 week ago (2 children)

He's going to ask the US Congress for a bailout with taxpayer money when this all fails, and Congress is most likely going to give it to him, because this one company is a huge part of the US economy.

[–] frank@sopuli.xyz 19 points 1 week ago (5 children)

I don't think so, and I'm on the Ed Zitron train of thought as to why not.

The financial instruments got a bailout in '08 because the economy itself would have stopped functioning. That's different from stocks dropping. Also, there's basically nothing to bail out: OpenAI and their ilk are just sucking down capital and returning nothing. And even if they got one bailout, they'd need a continuous stream of unlimited money forever. I don't think it'll happen.

I hope I'm right, cuz damn that shit is cancerous

[–] jobbies@lemmy.zip 39 points 1 week ago (8 children)

Makes me so angry. All the problems that could've been solved with that kind of money: the climate crisis, world hunger, population migration, housing affordability.

If Trump triggered WW3 and we all got nuked, I'd be fine with it. We don't deserve to exist.

[–] numberskull@lemmy.zip 11 points 1 week ago

There was an ad during the Super Bowl that succinctly sums up how I feel right now: “America deserves Pepsi”

[–] skuzz@discuss.tchncs.de 11 points 1 week ago

Instead, all that money is being used to accelerate our doom. AI datacenters are unnecessarily consuming power and drinking water in small towns everywhere. Many just dump humidity into the air and let that water literally blow away via lazy evaporative cooling, whereas most "normal" water-consuming processes consume, treat, and return water to the downstream-traveling aquifer.

Now, couple that with an overall warming climate. The warmer the air, the more moisture it can hold, so we end up with more water vapor in the air than normal. With the weirding factor of climate change, this means more water energy for more powerful and destructive storms, the likes of which humanity has never seen. Which feeds back into more ice melting, oceans rising, permafrost melting; cycle, accelerate, cycle, accelerate.

I'm also really curious to see how millions of warehouses belching humidity and heat into the air across the surface of the globe will affect general weather patterns, but sadly that won't be known until after the damage is done.

[–] favoredponcho@lemmy.zip 37 points 1 week ago

Just make Codex write the code for it. Should be easy. Don’t even need humans. Right?

[–] ductTapedWindow@lemmy.zip 29 points 1 week ago (3 children)

I just used the voice feature in my truck to enter an address for Google maps like always, it came up as Gemini with a long speech. I repeated the address, it asked me if I wanted the location in my home city or one in a city over 400 miles away. Regression with exponential cost.

[–] skuzz@discuss.tchncs.de 15 points 1 week ago

And every fake-friendly long-winded response consumes more electricity and water than it should, while also being useless.

[–] yopp@infosec.pub 27 points 1 week ago* (last edited 1 week ago) (3 children)

This is the most unhinged take, from both sides.

Time can't exist in an LLM by design: it's just a thing that predicts the next token based on the previous tokens. There is no temporal relation between tokens; you can stop and resume generation at any point. How would anyone expect it to "count time"? Based on what? The best you can do is add a time mark to the model input at some interval.

Simplifying somewhat, complex biological systems have some kind of clocks that actually chemically tick and induce some kind of signal they can react to.

LLMs can't do that, like, at all. They never will. Some other architecture that runs in cycles? Maybe. But transformer shit? Never ever.
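A minimal sketch of that time-mark idea (hypothetical message format; `with_timestamp` is an invented helper): the harness owns the clock and injects the current time into the context, and the model only ever sees it as more tokens.

```python
from datetime import datetime, timezone

def with_timestamp(messages: list[dict]) -> list[dict]:
    """Prepend the wall-clock time to the context before each generation.
    The model has no internal clock; it can only read this string."""
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return [{"role": "system", "content": f"Current time: {now}"}] + messages

history = [{"role": "user", "content": "How long since my last message?"}]
context = with_timestamp(history)
print(context[0]["content"])  # e.g. "Current time: 2026-04-08T12:00:00+00:00"
```

Any sense of elapsed time then comes from comparing injected timestamps across turns, not from anything inside the model.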

[–] MysticKetchup@lemmy.world 25 points 1 week ago

The issue is that ChatGPT will tell you that it can do those things. Most of the hype for "AI" has been predicated on treating it like actual artificial intelligence and not the LLM parrot it truly is

[–] mrgoosmoos@lemmy.ca 8 points 1 week ago (1 children)

I don't think anybody is expecting an LLM to do it.

What they are expecting is the product, ChatGPT, to be a one-stop shop that can do basic tasks like that.

[–] JeeBaiChow@lemmy.world 23 points 1 week ago (2 children)

Lol. Why don't they ask the AI how to program an AI?

[–] yakko@feddit.uk 18 points 1 week ago (1 children)

They should just vibe code the feature. They'll have it done in an afternoon, right?

[–] lobut@lemmy.ca 20 points 1 week ago (1 children)

Why does this need to be in the LLM? They control the app; can't they just make a tool call out?

[–] NotMyOldRedditName@lemmy.world 11 points 1 week ago (1 children)

Hey, set a timer for 60 seconds.

ChatGPT analyzes text

You want a timer for 600 seconds, got it!

Sets timer for 600 seconds via the API.

[–] TheV2@programming.dev 19 points 1 week ago (2 children)

Shit like this is a reminder to me that a large portion of the hype behind some AI products comes from people who have no clue what these products even do. I wonder how the world would change if these jacks of all trades, who ~~invest~~ waste so much time collecting ideas to fill up their pockets, instead spent more time actually understanding the ideas they have chosen and building at least a fundamental knowledge of them.

[–] core@leminal.space 19 points 1 week ago (1 children)

It's a Large Language Model, not a Large Number Model.

[–] sunbeam60@feddit.uk 17 points 1 week ago* (last edited 1 week ago) (7 children)

Everyone’s getting their knickers in a twist over nothing here.

Of course an AI can track time, if it’s given access to a timer MCP server.

Can we track time without tools, just in our heads? Certainly not very accurately. We can, however, track it reasonably accurately if given access to a quartz stopwatch (typically ±15 s/month).

A language model is based around language and reasoning by words/symbols. It’s not a surprise it doesn’t have timing capability.

What Altman SHOULD be embarrassed about is that the model lies about its capabilities. That implies the context is still not right: the model should be adequately trained and given enough context to prevent the lying. That is a much more worrying issue, and something that Anthropic handles far better, IMHO (when asked if it can track time, it says "no, not on my own", and then proceeds to build a JavaScript timer that it offers up to track time).
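What "access to a timer" might look like on the host side (a hypothetical sketch, not any vendor's actual tool API; `TimerTool` is invented): the model would only emit a request to call it, and ordinary deterministic code does the timing.

```python
import time

class TimerTool:
    """Host-side timer a model could invoke by name; all the actual
    timekeeping happens in ordinary code, outside the language model."""

    def __init__(self) -> None:
        self._started: dict[str, float] = {}

    def start(self, name: str) -> None:
        # Monotonic clock: immune to wall-clock adjustments.
        self._started[name] = time.monotonic()

    def elapsed(self, name: str) -> float:
        return time.monotonic() - self._started[name]

tool = TimerTool()
tool.start("tea")
time.sleep(0.1)
print(f"elapsed: {tool.elapsed('tea'):.1f}s")  # roughly 0.1s
```

The model's only job here is deciding when to call `start` and `elapsed`; the accuracy comes entirely from the host's clock.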

[–] TexasDrunk@lemmy.world 7 points 1 week ago

I don't use them but I follow the news about them loosely. The reason for this is epistemic humility. Claude has a pretty good idea of what its capabilities are and where the ceiling is. Chatgpt has no clue what its limits are so it believes it can do everything. Basically chatgpt has a lot of info and no idea where the gaps live and Claude has a fair idea when to search or use some external function to handle something. Gemini has less than Claude but more than chatgpt. Grok has little to no epistemic humility, but it did manage to accurately portray Musk as a world champion piss drinker, something none of the others were able to do.

I say that, but it's been a few months since I looked. That could have changed because shit moves fast. By the looks of what it's trying to do with the timer chatgpt has less than it used to. Possibly because of the way the model is trained to be helpful and confident.

[–] 1984@lemmy.today 15 points 1 week ago

Sam Altman wants funding right?

Here is an idea. I would pay 1000 dollars to get in a boxing ring with this guy, and probably a lot of other people would love to get a shot at that punchable face, no?

We have solved funding.

[–] Avicenna@programming.dev 15 points 1 week ago

You would already be doing the world a great service if you produced a really well-tuned search engine / information digger with LLMs, but no, you had to periodically hype it as AGI because it can memorize entire textbooks with some accuracy. You did this to yourselves, and if you fall, it will be because of these expectations that were never met.

[–] Deceptichum@quokk.au 13 points 1 week ago (1 children)

Odd, because Home Assistant can use a locally run LLM to do this?

[–] craftrabbit@lemmy.zip 11 points 1 week ago

Scam Altman sounds like a name straight out of an HLTV comment section, I love it.

[–] transporter_ii@programming.dev 11 points 1 week ago (2 children)

To be fair, timers are hard.

[–] HerbalGamer@sh.itjust.works 7 points 1 week ago (11 children)

Lets give it a try and see how far we get:

00:00:01

[–] wrinkle2409@lemmy.cafe 10 points 1 week ago

Dear Scat Altman: just add a timestamp to each response that the LLM can read.

[–] pfried@reddthat.com 10 points 1 week ago (3 children)

This will actually be solved in a week. All it takes is adding the current time to each input.

[–] axx@slrpnk.net 10 points 1 week ago (2 children)

It's become more and more obvious that the reason he regularly looks like a rabbit caught in headlights is that he is, in fact, a fraud, and not the tech genius he would like everyone to believe he is.
