this post was submitted on 25 Sep 2025
325 points (88.8% liked)

Memes

12500 readers
1541 users here now

Post memes here.

A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.

An Internet meme, or simply a meme, is a cultural item that is spread via the Internet, often through social media platforms. The name comes from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and other viral sensations.


Laittakaa meemejä tänne. ("Post memes here.")

founded 3 years ago
(page 2) 48 comments
[–] Sanctus@lemmy.world 50 points 1 week ago* (last edited 1 week ago) (6 children)

Literally never had this happen. Every time I've caved after exhausting all other options, the LLM has just made it worse. I never go back anymore.

[–] MentalEdge@sopuli.xyz 33 points 1 week ago* (last edited 1 week ago) (1 children)

They're by no means the end-all solution. And they usually aren't my first choice.

But when I'm out of ideas, prompting Gemini with a couple of sentences hyper-specifically describing a problem has often given me something actionable. I've had almost no success asking it for specific instructions without giving it specific details about what I'm doing. That's when it just makes shit up.

But a recent example: I was trying to re-install Windows on a Lenovo ARM laptop. Lenovo's own docs were generic for all their laptops and intended for x86. You could not use just any Windows ISO. While I was able to figure out how to create the recovery image media for the specific device at hand, there were no instructions on how to actually use it, and entering the BIOS didn't show any relevant entries.

Writing half a dozen sentences describing this into Gemini instantly informed me that there is a tiny pin-hole button on the laptop that boots into a special separate menu that isn't in the BIOS. And lo, that was it.

Then again, if normal search still worked like it did a decade ago and didn't give me a shitload of irrelevant crap, I wouldn't have needed an LLM to "think" its way to this factoid. I could have found it myself.

[–] Sanctus@lemmy.world 1 points 1 week ago* (last edited 6 days ago)

I do use LLMs if I forget to plan one of my tabletop sessions. I will fully admit they are great at that. Love 'em for making encounters. But that's fundamentally different from real-world searches or knowledge. I'm asking it to make stuff up for me, so it loves to hallucinate.

[–] idunnololz@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (4 children)

They seem to be pretty good at language. One time I forgot the word "tact" and was trying to remember it. I even asked some people, and no one could think of the word I meant even after I described approximately what it meant. But I asked AI and it got it in one go.

[–] abfarid@startrek.website 6 points 1 week ago (1 children)

Happened to me yesterday. I have an old 4K TV, and every component I used to connect to it had HDMI 2.0+ capabilities. Neither my laptop nor my Steam Deck would output 4K60, only 4K30. Tried getting another cable and a hub, same result. And I know that my Chromecast outputs 4K60 to this TV, so I was extra confused. In my desperation, I asked GPT-5 what I was missing, and it plainly told me that those old Samsung TVs turn off HDMI 2.0 support unless you explicitly turn it on in TV settings under "UHD Color". Apparently the Chromecast was doing chroma subsampling, but the computers refused and wanted full HDMI 2.0 bandwidth...
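For what it's worth, the bandwidth arithmetic behind that behaviour can be sketched in a few lines. The timing and link-rate figures below are the commonly published CTA-861/HDMI numbers; treat this as a back-of-envelope sketch, not an authoritative spec reading:

```python
# Back-of-envelope: why full-chroma 4K60 needs HDMI 2.0 bandwidth,
# while 4:2:0 chroma subsampling squeezes into an HDMI 1.4-era link.
PIXEL_CLOCK_4K60 = 594e6   # Hz, 3840x2160@60 incl. blanking (CTA-861)
BPP = 24                   # bits per pixel, 8-bit RGB / YCbCr 4:4:4
TMDS_OVERHEAD = 10 / 8     # 8b/10b encoding on the TMDS links

def link_gbps(pixel_clock_hz, bits_per_pixel):
    """Total TMDS bit rate required for a given video mode."""
    return pixel_clock_hz * bits_per_pixel * TMDS_OVERHEAD / 1e9

full_chroma = link_gbps(PIXEL_CLOCK_4K60, BPP)       # ~17.8 Gbps
subsampled = link_gbps(PIXEL_CLOCK_4K60 / 2, BPP)    # 4:2:0 halves the TMDS clock

HDMI_1_4_MAX = 10.2  # Gbps (340 MHz TMDS character rate)
HDMI_2_0_MAX = 18.0  # Gbps (600 MHz)

print(full_chroma > HDMI_1_4_MAX)   # full chroma doesn't fit HDMI 1.4 timing
print(subsampled <= HDMI_1_4_MAX)   # 4:2:0 does, which is what the Chromecast did
```

So with "UHD Color" off, the TV only advertises HDMI 1.4-class timing, and a device that insists on full-chroma 4K60 has to fall back to 4K30.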

[–] _g_be@lemmy.world 1 points 1 week ago (1 children)

That's rather cool, glad to hear it worked. My experience with it is often:

Where can I find the setting to change for *this thing*?
"Gladly! I know how frustrating this process can be! First, open the settings page, find the page that says *thing setting*, and change it there."
There is no page like that.
"You're absolutely right!"

[–] TehBamski@lemmy.world 1 points 1 week ago

Context is highly important in this scenario. Ask it how many people live in [insert country and then province/state], and it'll be accurate a high percentage of the time. Ask it [insert historical geo-political question], and it won't be able to answer.

Also, I have found it can depend on which LLM you ask the question. Perplexity is my go-to LLM of choice, as it acts like an LLM "server", selecting the best LLM for the task at hand. Here's Perplexity's Wikipedia page if you want to learn more.

[–] Eheran@lemmy.world -3 points 1 week ago (4 children)

When was the last time you tried? GPT-5 Thinking is able to create 500 lines of code without a single error, repeatably, and add new features into it seamlessly too. Hours of work with older LLMs reduced to minutes; I really like how much it enables me to do with my limited spare time. Same with "actual" engineering: the numbers were all correct the last few times, for things where it had to find a way to calculate, figure out some assumptions, and then do the math! Sometimes it gets the context wrong, and since it pretty much never asks questions back, the result was absurd for me but somewhat correct for a different context. Really good stuff.

[–] BroBot9000@lemmy.world 10 points 1 week ago (1 children)

Really good until you stop double checking it and it makes shit up. 🤦‍♂️

Go take your Ai apologist bullshit and feed it to the corporate simps.

[–] Eheran@lemmy.world -4 points 1 week ago (1 children)

The good thing is that in code, if it makes shit up it simply does not work the way it is supposed to.

You can keep your hatred to yourself, let alone the bullshit you make up.

[–] AmbiguousProps@lemmy.today 9 points 1 week ago* (last edited 1 week ago) (2 children)

Until it leaves a security issue that isn't immediately visible and your users get pwned.

Funny that you say "bullshit you make up", when all LLMs do is hallucinate and sometimes, by coincidence, have a "correct" result.

I use them when I'm stumped or hit "writer's block", but I certainly wouldn't have them produce 500 lines and then assume that just because it works, it must be good to go.
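A concrete (and purely hypothetical) illustration of "works but isn't good to go": code that passes every casual smoke test yet leaks data on crafted input — exactly the kind of flaw an "it runs" check never catches:

```python
import sqlite3

# This query runs fine on every normal input, so it passes a quick smoke test...
def find_user_unsafe(conn, name):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# ...but crafted input rewrites the query instead of matching a name.
# Parameterized placeholders keep the input as data, not SQL.
def find_user_safe(conn, name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

print(find_user_unsafe(conn, "alice"))          # [(1,)] -- looks correct
print(find_user_unsafe(conn, "x' OR '1'='1"))   # every row leaks
print(find_user_safe(conn, "x' OR '1'='1"))     # [] -- injection attempt matches nothing
```

Both versions "simply work" on the happy path; only one of them is safe to ship.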

[–] onslaught545@lemmy.zip -2 points 1 week ago (2 children)

No one ever said push it to production without a code review.

[–] supersquirrel@sopuli.xyz 6 points 1 week ago

That is EXACTLY what this mindset leads to, it doesn't need to be said out loud.

[–] AmbiguousProps@lemmy.today 2 points 1 week ago* (last edited 1 week ago) (1 children)

"my coworkers should have to read the 500 lines of slop so I don't have to"

That also implies that code reviews are always thoroughly scrutinized. They aren't, and if a whole team is vibecoding everything, they especially aren't. Since you've got this mentality, you've definitely got some security issues you don't know about. Maybe go find and fix them?

[–] onslaught545@lemmy.zip 3 points 1 week ago

If your QA process can let known security flaws into production, then you need to redesign your QA process.

Also, no one ever said that the person generating 500 lines of code isn't reviewing it themselves.

[–] Eheran@lemmy.world -3 points 1 week ago (2 children)

Calculations with bugs do not magically produce correct results and plot them correctly. Neither can such simple code change values that were read from a file or device. Etc.

I do not care what you program and how bugs can sneak in there. I use it for data analysis, simulations etc. with exactly zero security implications or generally interactions with anything outside the computer.

The hostility here against anyone using LLMs/AI is absurd.

[–] AmbiguousProps@lemmy.today 4 points 1 week ago* (last edited 1 week ago) (1 children)

Then why do you bring up code reviews and 500 lines of code? We were not talking about your "simulations" or whatever else you bring up here. We're talking about you saying it can create 500 lines of code, and that it's okay to ship it if it "just works" and have someone review your slop.

I have no idea what you're trying to say with your first paragraph. Are you trying to say it's impossible for it to coincidentally get a correct result? Because that's literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that's how they work. That's why OpenAI had to admit that they are unable to stop hallucinations, because it's impossible given that's how LLMs work.


Also, did you adequately describe your problem? Treat it like a human who knows how to program but has no idea what the fuck you're talking about. Just like a human, you have to sit it down and talk to it before you have it write code.

[–] Donkter@lemmy.world 1 points 1 week ago (1 children)

I've come to realize that these crazed anti-AI people are just a product of history repeating itself. They would be the same leftists who were "anti-GMO". When you dig into it, you understand that they're against Monsanto, which is cool and good, but the whole thing is so conflated in their heads that you can't discuss the merits of GMOs whatsoever, even though they're purportedly progressive.

It's a pattern; their heads are in the right place for the most part. But the logic is just going a little haywire as they buy into hysteria. It'll take a few years, probably, as the generations cycle.

[–] Eheran@lemmy.world -1 points 1 week ago

Perhaps, yes.

[–] lectricleopard@lemmy.world -1 points 1 week ago (2 children)

It gave you the wrong answer. One you called absurd. And then you said "Really good stuff."

Not to get all dead internet, but are you an LLM?

I don't understand how people think this is going to change the world. It's like the C-suite folks think they can fire 90% of their company, feed their half-baked ideas for superhero sequels into an AI, and sell us tickets to the poop that falls out, 15 fingers and all.

[–] Eheran@lemmy.world -2 points 1 week ago (1 children)

So you physically read what I said and then just went with "my bias against LLMs was proven" and wrote this reply? At no point did you actually try to understand what I said? Sorry but are you an LLM?

But seriously. If you ask someone on the phone "is it raining" and the person says "not now but it did a moment ago", do you think the person is a fucking idiot because obviously the sun has been and still is shining? Or perhaps the context is different (a different location)? Do you understand that now?

[–] lectricleopard@lemmy.world 3 points 1 week ago (1 children)

You seem upset by my comment, which I don't understand at all. I'm sorry if I've offended you. I don't have a bias against LLMs. They're good at talking. Very convincing. I don't need help creating text to communicate with people, though.

Since you mention that this is helping you in your free time, you might not be aware of how much less useful it is in a commercial setting for coding.

I'll also note, since you mentioned it in your initial comment: LLMs don't think. They can't think. They never will think. That's not what these things are designed to do, and there is no means by which they might start to think just by being bigger or faster. Talking about AI systems like they are people makes them appear more capable than they are to those who don't understand how they work.

[–] Eheran@lemmy.world -1 points 1 week ago (2 children)

Can you define "thinking"? This is such a broad statement with so many implications. We have no idea how our brain functions.

I do not use this tool for talking. I use it for data analysis, simulations, MCU programming, ... Instead of having to write all of that code myself, it only takes 5 minutes now.

[–] lectricleopard@lemmy.world 1 points 1 week ago

Thinking is what humans do. We hold concepts in our working memory and use stored memories that are related to evaluate new data and determine a course of action.

LLMs predict the next word in a sentence based on a statistical model. This model is developed by "training" on written data, often scraped from the internet. This creates many biases in the statistical model. People on the internet do not take the time to answer "I don't know" to questions they see. I see this as at least one source of what gets called "hallucinations": the model confidently answers incorrectly because that's what it has seen in training.

The internet has many sites with reams of examples of code in many programming languages. If you are working on code that is of the same order of magnitude as these coding examples, then you are within the training data, and results will generally be good. Go outside of that training data and it just flounders. It isn't capable of more and has no means of reasoning beyond its internal statistical model.
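The "statistical model" point can be made concrete with a toy sketch. A real LLM scores tens of thousands of candidate tokens with a neural network conditioned on the whole context; here the "model" is just a hard-coded table of made-up conditional probabilities, purely for illustration:

```python
# Toy next-token predictor: the "model" is a hard-coded table of conditional
# probabilities keyed by the last two words; a real LLM computes these scores
# with a neural net over its whole context window.
toy_model = {
    ("the", "sky"): {"is": 0.7, "was": 0.2, "looked": 0.1},
    ("sky", "is"): {"blue": 0.6, "falling": 0.3, "clear": 0.1},
}

def next_token(context):
    """Greedy decoding: pick the most probable continuation."""
    dist = toy_model[tuple(context[-2:])]
    return max(dist, key=dist.get)

tokens = ["the", "sky"]
for _ in range(2):
    tokens.append(next_token(tokens))
print(" ".join(tokens))  # "the sky is blue"
```

Ask it about a context that isn't in the table and it has nothing to fall back on — the toy version of going outside the training data.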

[–] Clent@lemmy.dbzer0.com 1 points 1 week ago

We have no idea how our brain functions.

This isn't even remotely true.

You should have asked your LLM about it before making such a ridiculous statement.

[–] BuboScandiacus@mander.xyz 16 points 1 week ago (1 children)

They are great if you know what the right answer is but just don't know how to get to it right now.

[–] Kolanaki@pawb.social 19 points 1 week ago (1 children)

Asking genAI questions I already know the answer to is how I know the AI is wrong more than it is right.

[–] SaharaMaleikuhm@feddit.org 1 points 1 week ago

Yeah, but I can copy paste the code and then fix it quickly.

[–] CabbageRelish@midwest.social 15 points 1 week ago (27 children)

They’re regularly properly useful to me but it’s pointless to get in arguments in their defense. 🤷

[–] 87Six@lemmy.zip 11 points 1 week ago

Me yesterday, except I only thought it had figured it out, then found out hours later that I had to revert to my workaround because it didn't really work fully and was fragile as fuck.

[–] swagmoney@lemmy.ca 5 points 1 week ago

me, vibe-debugging my Debian machine

[–] ChocolateFrostedSugarBombs@lemmy.world 2 points 1 week ago (1 children)

AI doesn't figure anything out. It guesses the next token in the sentence.

[–] Alloi@lemmy.world 2 points 1 week ago (2 children)

No offense, I understand what you are trying to say here. I'm not a massive fan of the implications of things like AI and its effects on society.

But oversimplifying and infantilising your enemy won't stop it from outperforming you.

Like, I can say "all AI does is put words on a screen based on a statistical analysis and prediction algorithm drawing on context and available training data, and it's only accurate 95% to 97% of the time, and it lies when it doesn't know something, or wants to save power for the sake of efficiency and cost reduction"

and it would still be far more likely to give a comprehensive breakdown and step-by-step analysis of systems well beyond my personal understanding, way faster than I ever could.

We can chalk it up to stolen info and guessing letters, but it'll still outperform most people in most subjects, especially in terms of time/results.

Don't get me wrong, I don't think it's intelligent in the way a human can be, or as nuanced as a human can be. But that doesn't necessarily mean it can't be forever. And with the way the technology is evolving across the board, seemingly faster and faster each day, with some plateaus here and there, it's hard to imagine a world where we just say "well, we tried, it's a dead end, oh well" and completely abandon it for the idea of human exceptionalism.

Overall humans, as smart as they are, are also pretty fucking dumb, which is why we are ignoring things like climate change for what are essentially IOUs made out of 1s and 0s (money), and also succumbing to a global increase in fascist ideals even though we historically know what that entails and how it ends. And that's in part due to the ability of AI to manipulate the masses, even in its current "primitive" state.

I don't like AI, but I'm not going to pretend it won't be able to replace the output of most humans, or automate most jobs, or be used to enslave us and brainwash us further than it already has.

The human mind simply cannot compete with the computational speed, and in some cases quality, of what is, and what is yet to come.

Slop it may be, but if you cover the veritable feast of human creativity with enough slop, humanity will soon have no choice but to eat it or starve. Everything else will get drowned out in time.

Something really fucking big would have to happen to change this outcome. WW3, nuclear war, a solar flare. Who the fuck knows.

But what I do know is that those in power need the system to function as is, and in newer, more efficient ways, while they still need us, in order to have the highest potential survival rate when it all comes crashing down at the end of this century. So we may just avoid total annihilation unless it's deemed necessary for their survival. Let's hope we rise up before they take that opportunity.

[–] DrDystopia@lemy.lol -3 points 1 week ago (2 children)

Ah, to live a life where one's problems can be solved by an LLM. It sounds so... simple and pleasant. 🫀

[–] craftrabbit@lemmy.zip 4 points 1 week ago

That's the world we all dream of, right? We work on what we want to, with the robots keeping the houses in check and taking care of the menial admin and paperwork, and in the evenings we all sit together by the campfire, the robots bringing us food and drink, as we rejoice in talking to each other about the day's experiences.

That doesn't seem to be the world that we're moving towards though...

[–] somerandomperson@lemmy.dbzer0.com 4 points 1 week ago (1 children)

...NOT!

It's just big tech selling convenience for the trillionth time, this time in another form. They are NOT doing it out of good will; they're doing it to sell your data, to train their AI on it (alongside their pirated media), and to do other nefarious stuff with everything you have.

[–] DrDystopia@lemy.lol 5 points 1 week ago (1 children)

...NOT!

I promise you, as someone overcome with sadness from watching the so-far unsolvable problems of mankind that will lead to the end of the world as we know it: living a life where one believes simulated intelligence could solve anything at all is a dream. Ignorance is bliss.

It’s just big tech selling convenience for the trillionth time

No, once more they're selling the impression of convenience. I.e. having the entire backend exposed to hackers because it was so convenient to vibe-code access control is not a real convenience.

They are NOT doing it out of good will

Only idiots argue for such an intention.

they’re doing it to sell your data, to train their ai on it

No, they're doing it to harvest our data. This allows them to use machine learning on the datasets but more traditionally, build profiles on their users. Access to the profiles is what they're selling, not direct access to log data.

and do other nefarious stuff with everything you have

Then they need to step up their game as I'm self-hosting everything on a home-server. But I know what you mean. They want to do downright evil stuff with everything they can get their dirty, sticky paws on.

[–] somerandomperson@lemmy.dbzer0.com 2 points 1 week ago (1 children)

Ignorance is bliss

How do you even ignore the shitshow that is the world, which you live in and therefore must care about at least some amount?
