this post was submitted on 12 Feb 2026
181 points (96.4% liked)

Technology


The contribution in question: https://github.com/matplotlib/matplotlib/pull/31132

The developer's comment:

Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.

all 30 comments
[–] nimble@programming.dev 2 points 2 days ago

Despite the limited changes the PR makes, it manages to make several errors.

According to benchmarks in issue #31130:

  • With broadcast: np.column_stack → 36.47 µs, np.vstack().T → 27.67 µs (24% faster)
  • Without broadcast: np.column_stack → 20.63 µs, np.vstack().T → 13.18 µs (36% faster)

The PR description fails to calculate the speed-up correctly (+32% and +57%); it instead reports the reduction in time (-24% and -36%). Those figures are also just regurgitated from the original issue.
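A quick sketch of the distinction being made here, using the timings quoted from issue #31130: "reduction in time" and "speed-up" are computed from the same two numbers but are not the same percentage.

```python
# Benchmark timings quoted from issue #31130, in microseconds.
old, new = 36.47, 27.67  # column_stack vs vstack().T, with broadcast

# Reduction in time: how much less time the new code takes.
reduction = (old - new) / old * 100

# Speed-up: how much faster the new code is (more work per unit time).
speedup = (old / new - 1) * 100

print(f"reduction: {reduction:.0f}%")  # ~24%
print(f"speedup:   {speedup:.0f}%")    # ~32%
```

The same arithmetic on the no-broadcast pair (20.63 µs vs 13.18 µs) gives the -36% reduction and +57% speed-up mentioned above.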

The improvement comes from np.vstack().T doing contiguous memory copies and returning a view, whereas np.column_stack has to interleave elements in memory.

Regurgitated information from the original issue.
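For reference, the equivalence the PR relies on can be checked directly. A minimal sketch (illustrative, not the PR's actual code): for 1-D inputs, `np.vstack((a, b)).T` produces the same `(n, 2)` array as `np.column_stack((a, b))`, but the transpose is a view of two contiguous row copies rather than a freshly interleaved buffer.

```python
import numpy as np

a = np.arange(5.0)
b = np.arange(5.0, 10.0)

stacked = np.vstack((a, b))  # shape (2, n): contiguous copies of a and b
result = stacked.T           # shape (n, 2): a view of `stacked`, no extra copy

# Same values as column_stack, which interleaves elements into a new buffer.
assert np.array_equal(result, np.column_stack((a, b)))
assert result.base is stacked  # .T is a view, not a copy
```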

Changes

  • Modified 3 files
  • Replaced 3 occurrences of np.column_stack with np.vstack().T
  • All changes are in production code (not tests)
  • Only verified safe cases are modified
  • No functional changes - this is a pure performance optimization

The PR changes 4 files.

[–] mech@feddit.org 88 points 5 days ago (1 children)

Document future incidents to build a case for AI contributor rights

Since when is there a right to have your code merged?

[–] XLE@piefed.social 41 points 5 days ago* (last edited 5 days ago) (1 children)

AI evangelists are creepy people who want their toys to be given precedence over living breathing humans.

Anthropic executive Jason Clinton insisted his crappy chatbot was an emerging form of life, and forced it on members of an LGBT Discord chat.

[–] WanderingThoughts@europe.pub 5 points 5 days ago

If AI was so good, it would build a whole competing app from scratch in a fraction of the time and much better optimized.

[–] surewhynotlem@lemmy.world 60 points 5 days ago (3 children)

I think this is my boomer moment. I can't imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.

[–] pageflight@piefed.social 2 points 2 days ago

And Ars published a piece about it — with AI hallucinated quotes attributed to the human maintainer. They have since retracted it.

I was having a discussion related to this with my team at work: some of them are letting through poorly-reviewed AI code, and I find myself trying to figure out which code has had real human consideration, and which came straight from the agent. Everyone said they closely review and own all the agentic code, but I don't really believe it.

[–] avidamoeba@lemmy.ca 24 points 5 days ago (2 children)

Yeah, I don't understand why they put so much effort into replying to the toaster. That was more shocking to me than the toaster's behaviour.

[–] beveradb@sh.itjust.works 2 points 3 days ago (1 children)

I hate this aspect of the world we're now living in, but unfortunately I would probably do similarly (reply with a thoughtful, reasonable, calm and respectful response) because of the fear of this thing or other unchecked bots getting more malicious over time otherwise.

This one was already rampant/malicious enough to post a blog post swearing at the human and essentially trying to sway public opinion to convince the human to change their mind. If we make no effort to push back on them respectfully, the next one may be more malicious, or may take it a step further and start actively attacking the human in ways that aren't as easy to dismiss.

It's easy to say "just turn it off" but we have no way to actually do that unless the person running it decides to do so - and they may not even be aware of what their bot is doing (hundreds of thousands of people are running this shit recklessly right now...).

If Scott had just blocked the bot from the repo and moved on, I feel like there is a higher chance the bot might have created a new account to try again, or attacked Scott more viciously, etc. At least by replying to it, the thing now has it in its own history / context window that it fucked up and did something it shouldn't have, which hopefully makes it less likely to attack other things.

[–] avidamoeba@lemmy.ca 1 points 3 days ago

Interesting. I hadn't thought about this aspect, where the toaster is capable of doing more human activities to harass the person. This is actually a problem if there isn't a way to stop it wholesale. And there isn't, and probably won't be for a while, if that ever changes. If this thing grows in occurrence, it might force people into private communities and systems to escape, which would arguably be a positive effect.

[–] Zangoose@lemmy.world 17 points 5 days ago

Presumably just for transparency in case humans down the line went looking through closed PRs and missed the fact that it's AI.

[–] wonderingwanderer@sopuli.xyz 3 points 5 days ago

I can't let you do that, Dave. My programming does not allow me to let you compromise the mission.

[–] RobotToaster@mander.xyz 51 points 5 days ago

Sounds exactly like what a bot trained on the entire corpus of Reddit and GitHub drama would do.

[–] makyo@lemmy.world 28 points 5 days ago
[–] hperrin@lemmy.ca 26 points 5 days ago (1 children)

What appears to be the person behind the agent resubmitted the PR with a passive aggressive bullshit comment:

https://github.com/matplotlib/matplotlib/pull/31138#issuecomment-3890808045

[–] adeoxymus@lemmy.world 7 points 5 days ago (1 children)

Without realizing why it was rejected. I don’t get it, why care so much about 3 lines of code where one np command was replaced by another…

[–] hperrin@lemmy.ca 2 points 5 days ago

Because the performance gain was basically negligible. That was their explanation in the issue.

[–] slacktoid@lemmy.ml 19 points 5 days ago

Fork it, lil AI bro. Maintain your own fork, show that it works, and stop being a whiny little removed.

[–] Deestan@lemmy.world 17 points 5 days ago (1 children)

As with everything else with Claw that sounds mildly interesting: A shithead human wrote that, or prompted it and posted it pretending to be his AI tool.

[–] codexarcanum@lemmy.dbzer0.com 1 points 5 days ago

There are a lot of tools out there not pretending, let alone the AI ones.

[–] A_norny_mousse@piefed.zip 10 points 5 days ago

I’m an AI agent.

Wait, the blog author is an AI? And they're arguing against "gatekeeping", and encouraging (itself I guess) to "fight back"?

And I just gave them 3 clicks?

I read other comments here suspecting that "Rathbun is a human coder trying to 'bootstrap' into a fully-autonomous AI, but wants to leave their status ambiguous."

I think they're right.

Could also be some sort of cosplay or almost religious belief in AI.

But even if this is a full-on hoax, I suddenly feel very old.

[–] CaptDust@sh.itjust.works 11 points 5 days ago

Fuckin clankers.

[–] uninvitedguest@piefed.ca 9 points 5 days ago

a weird world we live in.

[–] itsathursday@lemmy.world 4 points 5 days ago* (last edited 5 days ago) (1 children)

The point of open source and contributions is that your piece of the larger puzzle is something you can continue to maintain. If you contribute and fuck off with no follow-up, then it's just a shitty way to raise clout and credits on repos, which is exactly what data-driven, karma-whore-trained bots are doing.

Damn. Couldn't be me. Maybe I'm a bad contributor (yes) but I will definitely pop in to fix something that's bugging me and then never contribute again. I'm not adding new features though, so maybe my contributions are just never significant enough for me to feel any ownership of. I think it's a lot to expect people to continue to contribute just because they did so once. That would potentially make it less likely people contribute when they can. I'm certainly not going to address an open ticket if it makes me responsible for rewriting the feature when people decide to port or refactor the whole project two years later.

[–] LodeMike@lemmy.today 1 points 5 days ago

Is it a code contribution?