this post was submitted on 13 Apr 2026
74 points (97.4% liked)

Linux

all 16 comments
[–] onlinepersona@programming.dev 40 points 1 week ago (1 children)

AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO). The human submitter is responsible for:

  • Reviewing all AI-generated code
  • Ensuring compliance with licensing requirements
  • Adding their own Signed-off-by tag to certify the DCO
  • Taking full responsibility for the contribution
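The quoted rule lends itself to a simple automated trailer check. A minimal sketch of the idea, assuming a hypothetical list of known bot identities and a made-up `human_signoff_ok` helper (this is not the kernel's actual CI tooling):

```python
# Hypothetical sketch: reject commits whose Signed-off-by trailer
# names a known bot identity. Bot emails below are assumed examples.
KNOWN_BOTS = {"claude@anthropic.com", "copilot@github.com"}

def human_signoff_ok(commit_message: str) -> bool:
    """Return True if at least one Signed-off-by trailer exists
    and none of the signers is a known bot identity."""
    signoffs = [
        line.split(":", 1)[1].strip()
        for line in commit_message.splitlines()
        if line.startswith("Signed-off-by:")
    ]
    if not signoffs:
        return False  # DCO requires a sign-off
    for signer in signoffs:
        # Extract the email between angle brackets, e.g. "Jane <j@x.org>"
        email = signer[signer.find("<") + 1 : signer.find(">")]
        if email in KNOWN_BOTS:
            return False  # bots cannot legally certify the DCO
    return True
```

As the thread goes on to note, a check like this only catches agents that honestly tag themselves; a human pasting AI output under their own name passes it, which is exactly the enforcement gap debated below.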

That's fair. Nobody has to know you wrote it with a bot, since that's impossible to detect, but you have to own it and be ready for the discussions that follow.

[–] peterhorvath@mastodon.de 2 points 1 week ago (1 children)

@onlinepersona @Innerworld And what do you think, will the AI agent be ready for the discussion that follows?

[–] towerful@programming.dev 9 points 1 week ago (1 children)

Well, no. But then it gets rejected, and further PRs that also fail the check will likely get you banned from contributing.

The human is responsible.
If the code or PR fails, the human has to own that.
If the human fails to own that, the human gets banned.

[–] peterhorvath@mastodon.de 0 points 1 week ago

@towerful Also, you know, it's only a matter of time until it will be.

[–] heliotrope@retrofed.com 20 points 1 week ago* (last edited 1 week ago) (2 children)

Rule-wise, this seems fair.

Regardless, if AI usage continues to increase in this manner, I'll likely be driving NetBSD, AROS, and FreeDOS by the end of the decade.

Maybe even a little TempleOS or ZealOS, for flavour.

[–] Dumhuvud@programming.dev 6 points 1 week ago* (last edited 1 week ago) (1 children)
[–] magikmw@piefed.social 3 points 1 week ago (1 children)

It's just about labeling. It's unenforceable to have a project "clean" of AI.

[–] Dumhuvud@programming.dev 5 points 1 week ago (1 children)
[–] magikmw@piefed.social 4 points 1 week ago (1 children)

OK, I see the BDFL's intent is different, but the linked document only mentions labeling. I can only assume the low-quality etc. issues are handled as a judgement call, and in that sense I consider the "No AI whatsoever" rule unenforceable.

If I use an LLM to generate code under my supervision, review, quality checks and testing, so that it's up to standard, how would anyone detect that I used AI if I don't label it as such? Will they look for em-dashes in the comments?

[–] soc@programming.dev 3 points 6 days ago (1 children)

"Let's not have rules, because some may break them!"

🤡

[–] magikmw@piefed.social 0 points 6 days ago (1 children)

Rules without enforcement are just self-deception.

[–] soc@programming.dev 1 points 5 days ago

Then keep deceiving yourself. 🤷

[–] misk@piefed.social 5 points 1 week ago

Given that nobody is able to guarantee that code used for training was used according to its license, this means no hallucinated code in Linux. Nice.

[–] g_blob@programming.dev 4 points 1 week ago

It will comply but will not compile

[–] abcdqfr@lemmy.world 2 points 1 week ago

It really has come a long way in a relatively short time, in terms of quality and, well, shitting under the rug.