this post was submitted on 12 Apr 2026
687 points (95.3% liked)
Technology
you are viewing a single comment's thread
They don't, just like they don't with human-submitted stuff. The point of the Signed-off-by line is that the author attests they have the rights to submit the code.
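For readers unfamiliar with the convention: Signed-off-by is a trailer line at the end of the commit message, typically added with `git commit -s`. A sketch of what it looks like (subject line, name, and email are made up for illustration):

```text
Fix off-by-one in ring buffer wraparound

Signed-off-by: Jane Developer <jane@example.com>
```

Under the kernel's Developer's Certificate of Origin, adding that line is the author's assertion that they have the right to submit the change under the project's license.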
Which I'm guessing they cannot attest to, if LLMs truly have the 2-10% plagiarism rate that multiple studies seem to claim. It's an absurd rule, if you ask me. (Not that I would know; I'm not a lawyer.)
Where are you seeing the 2-10% figure?
In my experience, code generation is most affected by the local context (i.e. the codebase you are working on). On top of that, a lot of code is purely mechanical, and code generally has to have a degree of novelty to be protected by copyright.
If you had a contributor who plagiarized at a 2-10% rate, would you really go "eh, it has to have a degree of novelty to be a problem" rather than just ban them? The different standards baffle me sometimes.
You can find various rates mentioned here: https://dl.acm.org/doi/10.1145/3543507.3583199 and here: https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
If the 2-10% is just boilerplate syscall-number defines or trivial MIN/MAX macros, then it's just the common way to do things.
So do you want to legally review every line produced by an LLM to see if it meets the fair use criterion, since you have to assume it was probably stolen? And would you do this for a known plagiarizing human contributor too...?
No, that's why the author asserts that with their Signed-off-by. It's what I do if I use any LLM content as the basis of my patches.
So what does the Signed-off-by magically solve here that doesn't require either you or the contributor to legally review every line produced by an LLM? If you're not a lawyer, is your contributor going to be one?
They don't have to be. They know what they asked the LLM to do. They know how much they adapted the output. You usually have to work to get the models to spit out significant chunks of memorised text.
I don't have much more to say other than I doubt the data backs up what you're saying at all.
Imagine how broken it would be otherwise. The first person to write a while loop in any given language would be the owner of it. Anyone else using the same concept would have to write an increasingly convoluted while loop with extra steps.
Sounds like an origin story for recursion.