riskable

joined 2 years ago
[–] riskable@programming.dev 1 points 3 months ago (1 children)

Nope. In fact, if you generate a lot of images with AI you'll sometimes notice something resembling a watermark in the output, which demonstrates that the images used to train the model did indeed have watermarks.

Removing these phantom watermarks is trivial with image2image tools, though (it's just a quick extra step after generation).

[–] riskable@programming.dev 34 points 3 months ago (3 children)

...or trying to get away with as much as possible and seeing what sticks.

[–] riskable@programming.dev 144 points 3 months ago (4 children)

Reminder: The Bill of Rights applies to all persons living or residing in the United States. Whether or not you're a citizen is irrelevant; green card, visa, no visa, it doesn't matter.

Everyone gets freedom of speech in the US. Everyone.

[–] riskable@programming.dev 8 points 3 months ago (2 children)

To be fair, when it comes to stock photos, the creatives already got paid. You're just violating the copyright of a big corporation at that point (if you distribute the images... if you never distribute them, you've committed no crime).

[–] riskable@programming.dev 2 points 3 months ago (2 children)

Why stop at "AI-generated"? Why not have the individual post their entire workflow, showing which model they used, the prompt, and any follow-up editing or post-processing they did to the image?

In the 90s we went through this same shit with legislators trying to ban photoshopped images (hah: they still try this from time to time). Then there were attempts at legislating mandatory watermarks and similar concepts. It's all the same pattern: new technology is scary, so regulate and restrict it.

In a few years AI-generated content will be as common as photoshopped images and no one will bat an eye, because it'll "just be normal". A photographer might take a picture of a model (or a number of them) for a cover or something, then use AI to alter the image afterward. Or they'll use AI to generate an image from scratch and then have models try to copy it. Or they'll just use AI to change small details in the image, such as improving the lighting or changing eye color.

AI is very rapidly becoming just another tool in photo/video editing and soon it will be just another tool in document writing and audio recording/music creation.

[–] riskable@programming.dev 21 points 3 months ago* (last edited 3 months ago) (8 children)

Not a bad law if applied to companies and public figures. Complete wishful thinking if applied to individuals.

For companies it's actually enforceable, but for individuals it's basically impossible, and even if you do catch someone uploading AI-generated stuff: who cares? It's the intent that matters when it comes to individuals.

Were they trying to besmirch someone's reputation by uploading false images of that person in compromising situations? That's clear bad intent.

Were they trying to incite a riot or intentionally spreading disinformation? Again, clear bad intent.

Were they showing off something cool they made with AI generation? That's of no consequence and should be treated as such.

[–] riskable@programming.dev 2 points 3 months ago

If they really want to get Trump and Musk to care the picture should have white children.

[–] riskable@programming.dev 17 points 3 months ago

I'd argue that the most inhuman behavior is coming from the Texas legislature.

[–] riskable@programming.dev 20 points 3 months ago

I was going to say... It is kidnapping. It's just state-sanctioned kidnapping.

[–] riskable@programming.dev 13 points 3 months ago (2 children)

I hear the void is lovely this time of year.

[–] riskable@programming.dev 13 points 3 months ago

Why is the Trump administration doing this? When you're a huge piece of shit, anything labeled "clean" seems like fiction.

[–] riskable@programming.dev 63 points 3 months ago

Tell me you're planning an insurrection without telling me you're planning an insurrection.
