this post was submitted on 27 Apr 2026

Programmer Humor

[–] ivn@tarte.nuage-libre.fr 2 points 1 day ago (1 children)

I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.

[–] subnormal@lemmy.dbzer0.com 1 points 1 day ago (1 children)

Okay fair enough.

Since Maven's entire business is data analysis and targeting, can we agree that if the AI is not being used for targeting, it is being used to analyze data? And that analyzed data gets fed into the targeting system, so the AI is part of the kill chain?

What kind of data is being analyzed by AI? How much of it feeds into the targeting system? I concede that I don't know and have no source. The US military would have to be really stupid to make that info public.

[–] ivn@tarte.nuage-libre.fr 1 points 1 day ago (1 children)

There is nothing that indicates Anthropic's AI is used to analyze data. I'm not saying it isn't, just that we don't know. I'm going to quote a smaller section of a quote I made earlier from the same Guardian article:

In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.

But the term "AI" is an issue here: there are multiple AIs, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it's not Claude; it's Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not provide a core feature, but that's only speculation.

[–] subnormal@lemmy.dbzer0.com 1 points 22 hours ago (1 children)

Okay. I guess we at least agree on the facts.

You are giving the company a huge benefit of the doubt and I don't understand why. May I ask: if it were Elon Musk's xAI/Grok rather than Anthropic, would your thoughts on this change? How about if it were Yandex making the AI and the school were in Ukraine?

[–] ivn@tarte.nuage-libre.fr 1 points 21 hours ago (1 children)

It wouldn't change anything, and I'm confused as to why you think it would, and why you think I'm giving a huge benefit of the doubt.

I'm just pointing at what we know, what we don't know and what you are just making up.

[–] subnormal@lemmy.dbzer0.com 1 points 20 hours ago (1 children)

Facts:

  • Anthropic supplies some AI system to Maven
  • Maven analyzes data and determines bombing targets

My conclusion: Anthropic's AI is in the US military's kill chain which killed 120 children.

Your conclusion: The LLM did not directly target the school. We don't know how it was used. It was also not there from the beginning, so it's probably not part of the "core system."

[–] ivn@tarte.nuage-libre.fr 1 points 20 hours ago

That's not my conclusion; that's mostly just coming from the Guardian article. I say mostly because you're missing one part: we know how the LLM is used.

That's why I'm asking you to source your "conclusion".