[–] megopie@beehaw.org 34 points 10 months ago* (last edited 10 months ago) (1 children)

If I had to guess, they probably did a shit job labeling the training data or used pre-labeled images. Now where in the world could they have found huge amounts of pictures of women on the internet with the specific label of “Asian”?

Almost like most of what determines the quality of the output is not “prompt engineering” but the back-end work of labeling the training data properly, and you’re not actually saving much labor over more traditional methods, just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

Almost like this shit is a massive farce, just like the “metaverse” and crypto, that will fail to be market viable and waste a shit ton of money that could have been spent on actually useful things.

[–] webghost0101@sopuli.xyz 7 points 10 months ago (1 children)

They did literally nothing and seem to have used the default Stable Diffusion model, which is supposed to be a tech demo. It would have been easy to put "(((nude, nudity, naked, sexual, violence, gore)))" as the negative prompt.
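For illustration, here is a minimal sketch of what passing a negative prompt looks like with the Hugging Face diffusers library. The model id and prompt strings are assumptions, not what the site in question actually runs, and the "(((...)))" emphasis syntax above is specific to the AUTOMATIC1111 webui, so plain comma-separated terms are used here instead:

```python
# Sketch: supplying a negative prompt to a Stable Diffusion pipeline
# via Hugging Face diffusers. Model id and prompts are illustrative
# assumptions only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed default SD checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait photo of a woman",  # example positive prompt
    negative_prompt="nude, nudity, naked, sexual, violence, gore",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("output.png")
```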

[–] megopie@beehaw.org 7 points 10 months ago

The problem is that negative prompts can help, but when the training data is so heavily poisoned in one direction, stuff gets through.