this post was submitted on 02 Feb 2024
216 points (98.2% liked)

[–] themurphy@lemmy.world 7 points 9 months ago* (last edited 9 months ago) (3 children)

ITT: People who are scared of things they don't understand, which in this case is AI.

In this case, the "AI" program is nothing more than pattern-recognition software that sets a timestamp wherever it thinks there's something worth looking at. Then an officer can take a look.
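
Boiled down, a system like that amounts to something like the following (a hypothetical sketch - `score_frame` and the threshold are made-up stand-ins, not the actual product):

```python
# Hypothetical sketch of a "flag for human review" pipeline.
# score_frame and FLAG_THRESHOLD are illustrative stand-ins.
FLAG_THRESHOLD = 0.8

def review_timestamps(frames, score_frame, fps=30):
    """Return timestamps (in seconds) the model thinks an officer
    should look at. It never decides anything on its own; it only
    points a human at a spot in the video."""
    return [i / fps
            for i, frame in enumerate(frames)
            if score_frame(frame) >= FLAG_THRESHOLD]
```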

It saves so much time, and it filters out anything irrelevant. But be careful, because it's labelled "AI". Scary.

EDIT: The replies to this comment confirm that you don't understand AI, because if you did, you'd know that this system, which scans video, is not an LLM (large language model). It's not even the same kind of system at its core.

[–] Voroxpete@sh.itjust.works 11 points 9 months ago* (last edited 9 months ago) (1 children)

This is an astonishingly bad take.

Almost every AI system is a black box. Even if you open source the code and the training data, it's almost impossible to know anything about the internal state of a trained machine learning model, or why it makes any particular decision.
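
To make that concrete: a trained model is just an enormous pile of numbers. A toy sketch (using PyTorch here; the architecture is arbitrary and purely illustrative):

```python
import torch.nn as nn

# An arbitrary toy network. Real video models have millions of
# parameters, but the problem is the same at any scale.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),   # expects 64x64 RGB frames
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 62 * 62, 2),        # "flag" vs "don't flag"
)

print(sum(p.numel() for p in model.parameters()), "learned floats")
for name, p in model.named_parameters():
    print(name, tuple(p.shape))  # you get shapes, not explanations
```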

So the entire premise here is that a completely unaccountable system - whose decisions are basically impossible to understand or scrutinize - gets to decide what data is or isn't relevant.

When an AI says "no crime spotted here", who even gets to know that it made that call? If a human reviews all of the footage anyway, then why have the AI? You're doing the same amount of human work regardless. So as soon as you introduce this system, you remove a huge amount of human oversight and hand decisions that dramatically affect human lives - potentially life-or-death decisions, if it's the difference between a bad cop being taken off the street or not - to a completely unaccountable system.
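
Here's that failure mode in miniature (a hypothetical sketch - the clip names, scores, and threshold are all invented):

```python
# Hypothetical: the model systematically under-scores one kind of
# incident because it was rare or mislabelled in its training data.
scores = {"traffic_stop_a": 0.91, "use_of_force_b": 0.35}

def filter_for_review(clips, score, threshold=0.8):
    # Anything below the threshold is never shown to a human.
    return [c for c in clips if score(c) >= threshold]

print(filter_for_review(scores, scores.get))
# -> ['traffic_stop_a']; the 0.35 clip silently disappears,
#    and no officer is ever asked to look at it.
```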

Who's to say whether the training data fed into this system results in it, say, becoming effectively blind to police violence against black people?

And if that doesn't scare you, it absolutely should.

[–] Misconduct@lemmy.world 0 points 9 months ago* (last edited 9 months ago)

It's not impossible to understand or scrutinize. They give it specific things to look for, and it does what it's told. You can make the argument that ANY tool used by the police will be misused in their favor; AI isn't special in that regard. It's not like we bother to hold anyone accountable for anything else now anyway. Maybe the AI will be less biased.

It's definitely not doing the same work as a human if humans are spared sifting through hours upon hours of less useful footage. I'm sure they're testing it, etc. Nobody goes all-in on this stuff. Really, you guys can be so very dramatic lol

[–] Killing_Spark@feddit.de 10 points 9 months ago

It's also potentially skipping some of the parts that should be looked at. It depends on the training set.

[–] fluxion@lemmy.world 4 points 9 months ago

It's not that AI is scary, it's that AI is dumb as fuck.