this post was submitted on 10 May 2026
286 points (91.8% liked)


I've noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to "When will people stop being afraid of AI" or "Can we please acknowledge AI was very needed for X"

Can't tell if it's the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

[–] GarboDog@lemmy.world 1 points 4 hours ago* (last edited 4 hours ago) (1 children)

Idk about being a straw man, but regardless, the reply was addressing the misleading framing that didn't give proper credit to the researchers, and further that LLMs were used for analysis, not full-on finding the exploit. So no, LLMs aren't good at finding exploits without clear search inquiries from humans.

As for the empathy and the robo-sexuality: it was the intentional point of the original comment that people form heavy social attachments to LLMs, or to other objects that are able to communicate back to them. Even in our movie examples, they touch on romantic/sexual relations with robots, and a couple of others point toward empathy for them as well. PS: these are topics from the 1950s, not "whatever the shit kids are into these days." Most people affected by this are older generations and young adults without a social safety net.

Turning it around and phrasing it as "LLMs are useful for finding exploits" makes it sound more like you want to use LLMs to exploit said vulnerabilities rather than for better use cases. Regardless, it's still not possible, nor will it ever be, because again, LLMs can only use predetermined variables based on their previous training data set plus random variables. (PS: the undesirable random variables are what's commonly called hallucination; it's just unwanted output from a huge pile of spaghetti code.) It's even on the site you sourced:

"Was this AI-found? AI-assisted. The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee."

If we misread your interpretation, then our mistake; however, the phrasing felt more like you were praising AI for finding exploits rather than for actual good use, and it read to us like an ethical issue.

If making it clear that LLMs do more harm than good in the case of chatbots, and in being used as full-on replacements for people, makes us a straw man, then I guess we're a straw man or whatever lol.

Though we can probably agree that machine learning can, should, and has been used since the 1950s as a glorified search and calculation engine for complex equations and datasets. It can be put to really good use generating and categorizing candidate protein molecules, finding patterns in cancer research, and even filtering examples astronomers find in the night sky; however, it's overall useless without a qualified and passionate researcher who knows their stuff and can double-check the ML sifters.
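The "ML sifter plus human reviewer" workflow described above can be sketched as a toy triage step (the data, scoring, and threshold here are all illustrative, not from any real research pipeline): the model only ranks candidates by similarity to known-interesting examples, and a person still vets whatever it surfaces.

```python
# Toy "sifter": score candidates by similarity to known-interesting
# reference examples, keep the top fraction, and hand those to a
# human reviewer. Everything here is made up for illustration.

def score(candidate, references):
    """Higher = closer to the nearest known-interesting reference
    (negative squared distance to the closest one)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return -min(sq_dist(candidate, r) for r in references)

def sift(candidates, references, keep=0.5):
    """Return the top `keep` fraction of candidates for human review."""
    ranked = sorted(candidates, key=lambda c: score(c, references), reverse=True)
    n = max(1, int(len(ranked) * keep))
    return ranked[:n]

# Known-interesting signals (e.g. confirmed patterns in a dataset)
references = [(1.0, 1.0), (0.9, 1.1)]
# New observations to triage
candidates = [(1.0, 0.9), (5.0, 5.0), (0.8, 1.0), (9.0, 0.1)]

for c in sift(candidates, references, keep=0.5):
    print("flag for human review:", c)
```

The point of the sketch is that the model never decides anything final: it just narrows the pile, and false positives in what it keeps are exactly why the qualified researcher at the end of the loop matters.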

Sources for the saucy beans:

^edit, fixed a bit of formatting lol^

[–] mirshafie@europe.pub 1 points 1 hour ago* (last edited 1 hour ago)

The strawman-building is that you're extrapolating really, really far based on a tiny comment, and so you're making wild assumptions that aren't relevant to the conversation. The accusation that I'm hoping to be able to use LLMs to find bugs for nefarious reasons is far out. In fact, ironically, your text reads like something a badly (or maliciously) configured LLM would produce.

I never claimed that an LLM somehow went out and found a bug unprompted. But LLMs are increasingly used as important tools for finding all kinds of problems in code. Going forward, as we get better at using these models, more bugs will likely be found. And if we can train other ML models on other kinds of data at similar scale, I think we'd be right to expect a lot.

I have no doubt that misuse of LLMs and other machine learning models is widespread. The parasocial stuff aside, I'm worried about how it's being used in war and targeting, which will only get worse.

However, I think it's a bit disingenuous to portray LLMs as glorified search engines or autocorrect. It's not wrong, technically, but the utility goes way beyond find-and-replace. It's a bit like calling humans glorified tapeworms. Doesn't really make for an interesting discussion.

I also think you're wrong in asserting that LLMs or other ML models can only be useful for researchers on the edge of their fields. I guess we'll see.