this post was submitted on 25 Oct 2023
3 points (100.0% liked)

TechNews

Aggregated tech news.


[ sourced from TechCrunch ]

[–] autotldr@lemmings.world 1 points 1 year ago

This is the best summary I could come up with:


Google is taking aim at potentially problematic generative AI apps with a new policy, to be enforced starting early next year, that will require developers of Android apps published on its Play Store to give users a way to report or flag offensive AI-generated content.

For instance, Remini, an app that went viral this summer for AI headshots, was found to be greatly enlarging some women’s breasts or cleavage while slimming their bodies.

Then there were the more recent issues with Microsoft’s and Meta’s AI tools, where people found ways to bypass the guardrails to generate images like a pregnant Sonic the Hedgehog or fictional characters carrying out 9/11.

And with the coming elections, there are also concerns around using AI to create fake images, aka deepfakes, to mislead or misinform the voting public.

Google, in its announcement, reminded developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit restricted content such as CSAM and apps that enable deceptive behavior.

The ability to pop up full-screen notifications has also been abused by many apps to upsell users into paid subscriptions or other offers, when the functionality should really be limited to real-world priority use cases, like receiving a phone or video call.
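For context on that last point: on Android, this kind of over-the-lock-screen alert is delivered via a full-screen intent attached to a high-priority notification. Below is a minimal sketch, assuming a hypothetical IncomingCallActivity, a pre-created "calls" notification channel, and the usual manifest/runtime permissions, of how an app uses that mechanism for a legitimate incoming-call case:

```kotlin
// Minimal sketch (not from the article): posting a full-screen notification
// for a genuine priority event, an incoming call, via the standard
// NotificationCompat API. Names marked "hypothetical" are illustrative.
import android.app.Activity
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Hypothetical activity shown over the lock screen as the incoming-call UI;
// it must also be declared in the app's manifest.
class IncomingCallActivity : Activity()

fun showIncomingCallNotification(context: Context) {
    val fullScreenIntent = Intent(context, IncomingCallActivity::class.java)
    val fullScreenPendingIntent = PendingIntent.getActivity(
        context, 0, fullScreenIntent,
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )

    // Assumes a "calls" notification channel was already created (API 26+)
    // and that the manifest declares android.permission.USE_FULL_SCREEN_INTENT.
    val notification = NotificationCompat.Builder(context, "calls")
        .setSmallIcon(android.R.drawable.sym_call_incoming)
        .setContentTitle("Incoming call")
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        .setFullScreenIntent(fullScreenPendingIntent, true)
        .build()

    // The POST_NOTIFICATIONS runtime permission check required on Android 13+
    // is elided here for brevity.
    NotificationManagerCompat.from(context).notify(1, notification)
}
```

Anything beyond that kind of call-style scenario, like the subscription upsells mentioned above, is what the policy change is meant to curb.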


The original article contains 636 words; the summary contains 210 words. Saved 67%. I'm a bot and I'm open source!