this post was submitted on 05 May 2025
115 points (91.4% liked)

[–] lvxferre@mander.xyz 42 points 1 week ago* (last edited 1 week ago)

I'm not opposed to A"I"; far from it. I actually use text generators a fair bit, and sometimes image gens. It's simply a technology and I use it as such. And I still bloody hate how corporations handle it:

  • Always a double standard. If you violate their IP, you're a filthy criminal; if they violate yours, you're overreacting, a luddite, and harming progress. I want to see copyright gone, but as long as it exists, apply it consistently to all sides. (By the way, fuck "Open"A"I" and their Bob Dylan defence.)
  • Always nagging you to use it. If you're nagging me to use something, it's because it's in your best interests that I use it, not mine. No means "no", dammit.
  • Always implicitly lying about its abilities. No, I'm not going to ask it anything where a bullshit answer might ruin my day; stop misleading me into doing so.
  • Always downplaying issues. Yeah, nah, I'm not blind to the environmental concerns around training those huge models, or to the fact that corporations, which don't understand what "consent" means, basically DDoS sites to scrape training data.

But of course they won't talk about this, right? This sort of questionnaire is not made to genuinely obtain feedback; it's made to mislead you.

[–] Goretantath@lemm.ee 12 points 1 week ago

"Original Character plz do not steal" Sonic but purple with glasses

[–] Sendpicsofsandwiches@sh.itjust.works 10 points 1 week ago (2 children)

Took me a minute to understand everything that was going on with the OpenAI logo in the thumbnail...

[–] chicken@lemmy.dbzer0.com 8 points 1 week ago (3 children)

I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

I don't know if I'm understanding this argument right, but the idea that integrating locally run AI is inherently privacy-destroying in the same way as live-service AI doesn't make a lot of sense to me.

[–] lime@feddit.nu 5 points 1 week ago

Think of Apple's on-device image-scanning AI that flagged people as perverts after they had taken photos of sand dunes.

[–] knightly@pawb.social 2 points 1 week ago

Microsoft Recall

[–] Umbrias@beehaw.org 2 points 1 week ago (1 children)

Building and centralizing PII is indeed a privacy point of failure. What's not to understand?

[–] chicken@lemmy.dbzer0.com 3 points 1 week ago* (last edited 1 week ago) (1 children)

The use of local AI does not imply doing that, especially not the centralizing part. Even if some software does collect and store info locally (which is not inherent to the technology, and anything with autosave already qualifies here), that is nowhere near as bad privacy-wise as filtering everything through a remote server, especially if there is some guarantee, like being open source, that it won't just randomly start exfiltrating your data.

[–] Umbrias@beehaw.org 2 points 1 week ago (11 children)

I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

emphasis mine from the text you quoted…

[–] possiblylinux127@lemmy.zip 6 points 1 week ago (1 children)

That image is kind of off-putting.
