Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more, making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to ‘Settings’ > ‘Apps’, then delete the application.”

[–] lepinkainen@lemmy.world -4 points 18 hours ago* (last edited 3 hours ago) (5 children)

This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people’s privacy, and it still got shouted down.

I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

EDIT: judging from the downvotes, it really seems that Google can do no wrong 😆 and Apple is always the bad guy on Lemmy.

[–] lka1988@lemmy.dbzer0.com 13 points 13 hours ago* (last edited 13 hours ago)

I have 5 kids. I'm almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge. Plus, I have quite a few "intimate" photos of my wife in there as well.

I refuse to consent to a corporation searching through my device on the basis of "well just in case", as the ramifications of false positives can absolutely destroy someone's life. The unfortunate truth is that "for your security" is a farce, and people who are actually stupid enough to intentionally create that kind of material are gonna find ways to do it regardless of what the law says.

Scanning everyone's devices is a gross overreach and, given the way I've seen Google and other large corporations handle reports of actually-offensive material (i.e. they do fuck-all), I have serious doubts over the effectiveness of this program.

[–] Ulrich@feddit.org 19 points 16 hours ago (1 children)

Google did end up doing exactly that, and the predictable result was that people were falsely accused of child abuse and CSAM.

[–] Ledericas@lemm.ee 1 points 6 hours ago

I'm not surprised if they're also using AI, which is very error-prone.

[–] noxypaws@pawb.social 18 points 16 hours ago (1 children)

> it had a ridiculous amount of safeties to protect people’s privacy

The hell it did, that shit was gonna snitch on its users to law enforcement.

[–] lepinkainen@lemmy.world 0 points 3 hours ago (1 children)

Nope.

A human checker would only get a reduced-quality copy after multiple CSAM matches, and no police were to be called unless the human checker verified a positive match.

Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
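
For context, here's a minimal Python sketch of that flow as it was publicly described: matches accumulate behind a threshold, and only then does a human see reduced-quality copies before any report goes out. The threshold value and function names are my own placeholders, not Apple's actual implementation:

```python
from dataclasses import dataclass, field

MATCH_THRESHOLD = 30  # Apple publicly cited roughly 30 matches; illustrative here


@dataclass
class Account:
    # Reduced-quality "visual derivatives" of flagged photos
    pending: list = field(default_factory=list)


def human_reviewer_confirms(derivatives: list) -> bool:
    """Stub for the manual review step; a real system queues these for a person."""
    return False


def file_report(account: Account) -> None:
    """Stub: only ever reached after a human confirms the batch."""
    print("report filed")


def on_hash_match(account: Account, visual_derivative: bytes) -> None:
    """Nothing is escalated until enough matches accumulate, and then a human reviews first."""
    account.pending.append(visual_derivative)
    if len(account.pending) >= MATCH_THRESHOLD:
        if human_reviewer_confirms(account.pending):
            file_report(account)
        # A batch the reviewer rejects (e.g. adversarial cat pics) stops here.
```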

[–] noxypaws@pawb.social 1 points 1 hour ago

That's a fucking wiretap, yo

[–] Natanael@infosec.pub 15 points 16 hours ago* (last edited 16 hours ago) (2 children)

Apple had it report suspected matches, rather than warning locally

It got canceled because the fuzzy hashing algorithms turned out to be so insecure that the problem was unfixable (it's easy to plant false positives).
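
For anyone wondering what "fuzzy hashing" looks like in practice, here's a minimal Python sketch of a generic perceptual hash (dHash) with Hamming-distance matching. This is not Apple's NeuralHash, and the threshold is an illustrative assumption; the point is that a "match" means "close enough," which is exactly the property attackers exploit to craft false positives:

```python
from PIL import Image
import numpy as np


def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: compare adjacent pixels of a tiny grayscale copy."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = np.asarray(img, dtype=np.int16)
    diff = px[:, 1:] > px[:, :-1]  # hash_size x hash_size booleans
    return int("".join("1" if b else "0" for b in diff.flatten()), 2)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


THRESHOLD = 5  # illustrative; real systems tune this


def is_match(candidate_hash: int, known_hash: int) -> bool:
    # A "match" is anything within a small Hamming distance of a known hash,
    # which is what an attacker can engineer by nudging pixels of a harmless
    # image until its hash falls inside the threshold.
    return hamming(candidate_hash, known_hash) <= THRESHOLD
```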

[–] lepinkainen@lemmy.world 0 points 3 hours ago (1 children)

They were not “suspected”; they had to be matches to actual CSAM.

And after that, a reduced-quality copy was shown to an actual human, not an AI like in Google's case.

So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.

[–] Natanael@infosec.pub 1 points 51 minutes ago* (last edited 49 minutes ago)

Yeah, so here's the next problem: downscaling attacks exist against those algorithms too.

https://scaling-attacks.net/

Also, even if those attacks were prevented, they're still going to look through basically your whole album if you trigger the alert.
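
To illustrate the idea behind a scaling attack (simplified, not the actual attack from that site): if the downscaler only samples a sparse grid of source pixels, an attacker can overwrite just those pixels, so the full-size photo looks harmless while its thumbnail becomes a completely different image. Toy Python sketch with a hand-rolled nearest-neighbor scaler:

```python
import numpy as np


def nn_downscale(px: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor downscale: keeps only a sparse grid of source pixels."""
    h, w = px.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return px[np.ix_(ys, xs)]


def embed_payload(cover: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite only the sampled pixels so the payload appears after scaling."""
    out_h, out_w = payload.shape[:2]
    h, w = cover.shape[:2]
    attacked = cover.copy()
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            attacked[y, x] = payload[j, i]  # barely visible at full resolution
    return attacked


# Sanity check: the attacked image downscales exactly to the planted payload.
cover = np.zeros((512, 512, 3), dtype=np.uint8)              # "innocent" image
payload = np.random.randint(0, 256, (32, 32, 3), np.uint8)   # planted image
attacked = embed_payload(cover, payload)
assert np.array_equal(nn_downscale(attacked, 32, 32), payload)
```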

[–] Clent@lemmy.dbzer0.com 0 points 10 hours ago

The official reason they dropped it was security concerns. The more likely reason was the massive outcry that occurs whenever Apple does these questionable things. Crickets when it's Google.

The feature was re-added as a child safety feature called "Communication Safety" that is optional on child accounts and automatically blocks nudity sent to children.

[–] Modern_medicine_isnt@lemmy.world 0 points 17 hours ago

Overall, I think this needs to be done by a neutral 3rd party. I just have no idea how such a 3rd party could stay neutral. Same with social media content moderation.