It DOES matter. Directly. Fully.
If people think that the unthinking "AI" actually has autonomy, they will be less likely to hold the people responsible to account.
Why do you not understand that? It is a critical fact of the matter that modern-day "AI" neither thinks nor wants, because then responsibility for its actions rightfully falls on whoever set up the Rube Goldberg machine with machetes on it.
This is not a machine going postal. It's a dangerous product they've been allowed to sell.
We're trying to impress on you the importance of culpability. If it thinks for itself, then it becomes a defective product. If it doesn't, it's a dangerous product.
It's the difference between someone selling a car that happens to break down easily, and one where the brake lines randomly fall off because they fucked up the design and didn't want to spend the money to do it right... It's the difference between accidents and negligence. This "AI" shit? Pure greed-fed negligence.
The wording in the article is on purpose. They want you to think it doesn't matter while they're anthropomorphizing it, FFS. They want you to blame the bot, not the guy who made the obviously dangerous bot and then sold it to the world for billions.
That's why it's even more important to realize the machine has no intent. Its actions are solely the result of its creator's actions in creating it.
I point out anthropomorphization so much because not only will it inoculate people against the advertising that WILL anthropomorphize it, but it also means that when it fucks up, the appropriate people will be punished.
This isn't a thinking machine going postal. It's a dangerous product being pushed out with little regard for consequences.
Selling dangerous products used to mean something before billionaires bought the government...