They are certainly very creepy, but I doubt companies like Google or Meta even need this kind of data from a third party. If they truly wanted mic access, they could have had it long ago, and it would have been known by now. The reality is that this kind of spying is too expensive and risky to run, and I don't think the benefit is worth the risk to them. To me this screams "SCAMMERS".
agreed
are you sure?
imo this commonly repeated view has never been substantiated.
we've yet to see a technical explanation of why it's "impossible/too expensive" that addresses the modern realities of efficient voice codecs, even rudimentary signal processing, and modern neural speech-to-text models.
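to make that concrete: a minimal sketch, assuming the open-source openai-whisper package and a hypothetical local recording (not anyone's known pipeline, just an illustration of how little code and compute offline transcription takes today):

```python
# rough sketch: offline speech-to-text on commodity hardware.
# assumes the open-source "openai-whisper" package (pip install openai-whisper)
# and a local recording; purely illustrative, not a real product's pipeline.
import whisper

model = whisper.load_model("tiny")              # ~39M parameters, runs on phone-class CPUs
result = model.transcribe("kitchen_chat.wav")   # hypothetical file name
print(result["text"])                           # full transcript, produced entirely on-device
```

nothing exotic is needed; the point is just that the expensive part (speech-to-text) now fits comfortably on the device itself.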
how so? previously, invasive features have simply been written off as "a bug". they barely even need to issue some b̶r̶i̶b̶e̶s̶ fines (the typical corporate solution to getting caught). that is the level we're currently at:
"whoops it was a bug, we'll switch it off"
"whoops another update switched it on again" (if caught, months/years later)
"whoops some other opt-in surveillance switched itself on again, just another bug ¯_(ツ)_/¯"
as long as they have deniability via "it was a bug", there are almost zero repercussions and thus virtually zero risk. that is perhaps why a company openly talking about doing it is such a no-no: discussing intent makes 'bug' deniability more difficult.
in my experience, when reading past the "they're not listening" headlines and into the actual technical reports, no one has been able to conclusively rule it out. if you know of conclusive documentation, please post it.
then there's the "they have enough data already" argument, which is entirely without foundation: nothing is ever enough for these pathologically greedy entities. 'enough' simply isn't in their vocabulary, and we all know it.
[i didn't downvote you btw]
I simply think that, until now (maybe they will start tomorrow), the PR and lawsuit risk of listening to people has been too high for the benefit they would get out of it. Much simpler metrics are enough for them to build a very good profile of the user. Real voice data isn't like the test scenarios where a person repeats "cat food" 45 times; people talk about the weather and gas prices, which is pretty useless for building an ad profile if you ask me. But the scary part is that now, with AI models and on-device AI everything, local processing of the mic data into topics that then get sent to their servers is more concerning, if not much more feasible.
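To show what I mean by "topics that then get sent to their servers", here is a toy sketch; the topic list and payload format are invented by me, not anything these companies are known to ship:

```python
# toy sketch of on-device "topics, not audio" extraction; the topic list and the
# payload format are made up for illustration, not any company's known behaviour.
import json

AD_TOPICS = {"cat food", "gas prices", "weather", "vacation", "mortgage"}  # invented examples

def topics_from_transcript(transcript: str) -> list[str]:
    """Match a locally produced transcript against a list of ad-relevant topics."""
    text = transcript.lower()
    return sorted(topic for topic in AD_TOPICS if topic in text)

transcript = "ugh, gas prices again... anyway we're out of cat food"
payload = json.dumps({"topics": topics_from_transcript(transcript)})
print(payload, f"({len(payload.encode())} bytes)")  # a few dozen bytes, not an audio stream
```

Once the summarising happens on the device, what leaves it is tiny and looks like any other telemetry.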
And as for the lawsuits, I'm not sure they could write it off as a bug anywhere other than the US and Canada, because most other countries actually have functioning privacy laws.
what risk? facebook & others conducted illegal human experiments. this is an enormous crime and was widely reported, yet all fb had to do, from a pr perspective, was apologise.
as we all know, fb even interfered with the electoral process of arguably the world's most powerful nation, and all they had to do was some rebranding to meta and it's business as usual. this is exactly how powerful these organisations are. go up against a global superpower & all you need to do is change your business name??? they don't face justice the same way anyone else would, therefore we cannot assess the risk for them as we would for another entity - and they know it.
so, while i personally disagree for the above reasons, i can accept that in your opinion they wouldn't take the legal risk.
when has 'enough' ever satisfied these entities? we merely need to observe the rate of evolution of various surveillance methods, online, in our devices, in shopping centers, to see that 'enough' is never enough. it's always increasing, and at an alarming rate.
sorry, i didn't quite understand: are you saying it's not feasible, or that it is? from the way the sentence started i thought you were going to say it could be, but then you said 'not much more feasible'?
voice conversations are near-universally prized in surveillance & intelligence. there hasn't been any convincing argument for a generalised exception to that.
it's already been written off as a bug. i didn't follow that story indefinitely, but i'm not aware of even a modest fine being paid over it. if a device can accidentally transcribe and send your conversations to your contact list without your knowledge or consent (which has literally already happened, with impunity(?)), it can 1000% "accidentally" send them to some 'debug' server somewhere.
Are they actually doing it? It ofc remains to be seen. Imo the fallout, if it were revealed, would roughly look like this
Google already has a fleet of "Hey Google"-enabled devices that do listen all the time. Some phones surely also support always-on listening for this, and my TV supports it. Users are already deliberately enabling this; there is no need for shady tactics.
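For a sense of what "listening all the time" amounts to in code, a rough sketch; it assumes the sounddevice package, and the energy-threshold detector is just a crude stand-in for the tiny wake-word models real devices run:

```python
# rough sketch of an always-on hotword loop: audio is captured in short frames and
# discarded unless a detector fires. assumes the "sounddevice" package; the detector
# here is a crude energy threshold standing in for a real wake-word model.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
FRAME_SECONDS = 0.5

def looks_like_wake_word(frame: np.ndarray) -> bool:
    """Placeholder detector: real devices run a tiny neural net here instead."""
    return float(np.abs(frame.astype(np.int32)).mean()) > 500  # arbitrary threshold

def on_audio(indata, frames, time, status):
    frame = indata[:, 0]                      # mono int16 samples for this half-second
    if looks_like_wake_word(frame):
        print("possible wake word -> only now process / transcribe the snippet")
    # otherwise the frame is simply dropped

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, dtype="int16",
                    blocksize=int(SAMPLE_RATE * FRAME_SECONDS), callback=on_audio):
    sd.sleep(10_000)                          # listen for 10 seconds, then stop
```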