this post was submitted on 10 May 2026
286 points (91.8% liked)

Ask Lemmy


I've noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to "When will people stop being afraid of AI" or "Can we please acknowledge AI was very needed for X"

Can't tell if it's the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

[–] GarboDog@lemmy.world 3 points 22 hours ago (1 children)

If your point was to say “LLMs are good because they can hack into people’s PCs and make the world worse,” I think you gotta start setting priorities towards finding some empathy.

Besides, it was not discovered by an LLM or AI. It was discovered by Taeyang Lee, a researcher at Theori, and later refined into an exploit chain by the Xint Code Research Team, both of whom used “AI”-assisted analysis. So no, LLMs didn’t magically find a decade-old exploit; LLMs were simply used as a search function over their trained model of past coding assets and the logic bug in the Linux kernel.

So yeah, it’s basically a glorified search function at that point, and if you can find peace fucking a search bar, hey man, that’s your thing 🤷🏻‍♀️

Our sources:

[–] mirshafie@europe.pub -3 points 21 hours ago* (last edited 21 hours ago) (1 children)

Holy shit, are you a professional strawman builder? Because you're really good.

An LLM helped fix a bug. That's all we need to know. It's useful. Saying so has nothing to do with empathy, lack thereof, or robosexuality or whatever the shit kids are into these days.

[–] GarboDog@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

idk about being a straw-man, but regardless, the reply was addressing the misleading framing and the lack of proper credit given to the researchers, and further pointing out that LLMs were used for analysis, not full-on finding the exploit. So no, LLMs aren't good at finding exploits without clear search inquiries from humans.

As for the empathy and the robo-sexuality: it was the intentional point of the original comment that people form heavy social attachments to LLMs or other objects that are able to communicate back to them. Even in our movie examples they touch on romantic/sexual relations with robots, and a couple of others point towards empathy for them as well. PS: these are topics from the 1950s, not "whatever the shit kids are into these days." Most people affected by this are older generations and young adults without a social safety net.

Turning it around, phrasing it as LLMs being useful for finding exploits makes it sound more like you're wanting to use LLMs to exploit said vulnerabilities rather than for better use cases. Regardless, it's still not possible, nor will it ever be, because again, LLMs can only use predetermined variables based on their previous training data set plus random variables (PS: those undesirable random variables are what's commonly called hallucination; it's just unwanted variables in a huge spaghetti code). It's even on the site you sourced:

"Was this AI-found? AI-assisted. The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee."

If we misread your interpretation, then our mistake; however, the phrasing felt more like you were praising AI for finding exploits and not for actual good use, and it read to us like an ethical issue.

If making this stance clear, that LLMs do more harm than good in the case of chatbots and being used as full-on replacements for people, makes us a straw-man, then IG we're a straw-man or whatever lol.

Though we can probably agree that machine learning can, should, and has been used since the 1950s as a glorified search and calculation engine for complex equations and datasets. It can be really useful for generating and categorizing random protein molecules, finding patterns in cancer research, and even filtering candidate objects astronomers find in the night sky; however, it's overall useless without a qualified and passionate researcher who knows their stuff and can double-check the ML sifters.
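For what it's worth, the "predetermined variables plus random variables" mechanism described above can be sketched concretely: an LLM scores every token in its vocabulary from its learned weights, then samples from that distribution. A minimal, hypothetical illustration (the function name, toy logits, and temperature values are ours, not from any real model API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token id from a model's output scores (logits).

    The logits are the "predetermined" part, fixed by training; the
    random draw is the "random variables" part. Higher temperature
    flattens the distribution, making unlikely tokens (the stuff that
    reads as hallucination) more probable; temperature -> 0 approaches
    greedy argmax selection.
    """
    if temperature <= 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (subtract max for numeric stability).
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting categorical distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy logits for a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=0.0))  # greedy: token 0
```

At temperature 0 the output is deterministic; crank the temperature up and the low-scoring tokens start getting picked, which is the unwanted-variable behavior being described.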

Sources for the saucy beans:

^edit, fixed a bit of formatting lol^