riskable

joined 2 years ago
[–] riskable@programming.dev 4 points 1 month ago (3 children)

Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, "this collection of pixels is probably what they want".

This is incorrect. Generative image models don't contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever.

As an example model, FLUX.dev is 23.8GB:

https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

It's a general-use model that can generate basically anything you want. It's not perfect and it's not the latest & greatest AI image generation model, but it's a great example because anyone can download it and run it locally on their own PC (and get vastly superior results than ChatGPT's DALL-E model).
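If you want to see for yourself, here's a rough sketch of what "run it locally" looks like using the diffusers library. The prompt and seed are just examples, the exact arguments come from the diffusers docs and may drift over time, and you'll need to have accepted the model license on Hugging Face first:

```python
# Rough sketch: generating an image locally with FLUX.1-dev via diffusers.
# Assumes the model license has been accepted on Hugging Face and there's a
# GPU with enough VRAM (or enough patience for CPU offload).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM use

image = pipe(
    "a hand holding a pencil, studio lighting",
    num_inference_steps=50,
    guidance_scale=3.5,
    generator=torch.Generator("cpu").manual_seed(42),  # the RNG at the heart of it
).images[0]
image.save("hand_pencil.png")
```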

If you examine the data inside the model, you'll see a bunch of metadata headers and then an enormous array of arrays of floating point values. Stuff like, [0.01645, 0.67235, ...]. That is what a generative image AI model uses to make images. There's no database to speak of.
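You can poke at those values yourself with the safetensors library. The filename below is just a stand-in for whichever weight shard you downloaded:

```python
# Peek inside a downloaded .safetensors weight file: metadata headers up top,
# then nothing but named tensors full of floating point values.
from safetensors import safe_open

with safe_open("flux1-dev.safetensors", framework="pt", device="cpu") as f:
    print(f.metadata())  # the metadata headers
    for name in list(f.keys())[:3]:
        tensor = f.get_tensor(name)
        print(name, tensor.shape, tensor.flatten()[:4])  # e.g. [0.0164, 0.6723, ...]
```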

When training an image model, you need to download millions upon millions of public images from the Internet and pair them with the metadata in an actual database like ImageNet. ImageNet contains lots of metadata about millions of images, such as their URLs, bounding boxes around parts of each image, and keywords associated with those bounding boxes.

Training is mostly a linear process, so the images never really get loaded into a database. They just get read, along with their metadata, into a GPU, which performs some Machine Learning stuff to generate arrays of floating point values. Those values ultimately end up in the model file.

It's actually a lot more complicated than that (there are pretraining steps and classifiers and verification/safety stuff and more) but that's the gist of it.
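To make the "stream images in, nudge the floats, keep nothing else" part concrete, here's a toy, self-contained sketch. It is not any real lab's training code (a real diffusion trainer is vastly bigger), just the shape of the loop:

```python
# Toy illustration of the training loop described above: each "image" nudges
# the weights by a tiny amount, and only the weights survive.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(1000):                 # pretend each step is one training image
    image = torch.randn(1, 64)           # stand-in for a real image
    target = torch.randn(1, 64)          # stand-in for the training objective

    before = model[0].weight.detach().clone()
    loss = nn.functional.mse_loss(model(image), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    delta = (model[0].weight.detach() - before).abs().max()
    # `delta` is tiny (on the order of the learning rate) — the "any given
    # image only nudges the floats by a hair" point made above.

# The only artifact of training is the weights; the "images" are gone.
torch.save(model.state_dict(), "toy_model.pt")
```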

I see soooo many people who think image AI generation is literally pulling pixels out of existing images but that's not how it works at all. It's not even remotely how it works.

When an image model is being trained, any given image might modify one of those floating point values by like ±0.01. That's it. That's all it does when it trains on a specific image.

I often rant about where this process goes wrong and how it can result in images that look way too much like specific images in the training data, but that's a flaw, not a feature. It's something every image model has to deal with, and it will improve over time.

At the heart of every AI image generator is a random number generator. Sometimes you'll get something similar to an original work, especially if you generate thousands and thousands of images. That doesn't mean the model itself was engineered to do that. Also: a lot of that kind of problem happens in the inference step, but that's a really complicated topic...
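A minimal illustration of the RNG point: diffusion sampling starts from a tensor of pure random noise, and that starting noise is entirely determined by the seed (the shape below is illustrative, not any specific model's latent size):

```python
# Same seed = same starting noise = same image; new seed = new image.
import torch

def starting_noise(seed: int) -> torch.Tensor:
    gen = torch.Generator("cpu").manual_seed(seed)
    return torch.randn(1, 16, 64, 64, generator=gen)  # illustrative latent shape

assert torch.equal(starting_noise(42), starting_noise(42))      # reproducible
assert not torch.equal(starting_noise(42), starting_noise(43))  # new seed, new start
```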

[–] riskable@programming.dev 5 points 1 month ago

I'm ok with rich people getting charged more. But anyone who isn't making like $1 million/year should get the normal price.

[–] riskable@programming.dev 60 points 1 month ago (7 children)

This will definitely encourage more people to have kids.

[–] riskable@programming.dev -3 points 1 month ago (1 children)

It's close enough. The key is that it's not something that was just regurgitated based on a single keyword. It's unique.

I could've generated hundreds and I bet a few would look a lot more like a banana.

[–] riskable@programming.dev -3 points 1 month ago* (last edited 1 month ago) (3 children)

Hard disagree. You just have to describe the shape and colors of the banana and maybe give it some dimensions. Here's an example:

A hyper-realistic studio photograph of a single, elongated organic object resting on a wooden surface. The object is curved into a gentle crescent arc and features a smooth, waxy, vibrant yellow skin. It has distinct longitudinal ridges running its length, giving it a soft-edged pentagonal cross-section. The bottom end tapers to a small, dark, organic nub, while the top end extends into a thick, fibrous, greenish-brown stalk that appears to have been cut from a larger cluster. The yellow surface has minute brown speckles indicating ripeness.

It's a lot of description but you've got 4096 tokens to play with so why not?
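If you want to sanity-check how much of that budget a long description actually eats, you can count tokens with a T5 tokenizer like the one FLUX's second text encoder uses (if I'm remembering the architecture right — treat the exact limit as whatever your model/UI says it is):

```python
# Count how many tokens a long, banana-free prompt actually costs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")

prompt = (
    "A hyper-realistic studio photograph of a single, elongated organic object "
    "resting on a wooden surface. The object is curved into a gentle crescent "
    "arc and features a smooth, waxy, vibrant yellow skin..."
)
print(len(tokenizer(prompt).input_ids), "tokens")
```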

Remember: AI is just a method for giving instructions to a computer. If you give it enough details, it can do the thing at least some of the time (also remember that at the heart of every gen AI model is an RNG).

A terrible image of a banana generated by AI using a prompt that did not use the word banana

Note: That was the first try and I didn't even use the word "banana".

[–] riskable@programming.dev -2 points 1 month ago

It's more like this: If you give a machine instructions to construct or do something, is the end result a creative work?

If I design a vase (using nothing but code) that's meant to be 3D printed, does that count as a creative work?

https://imgur.com/bdxnr27

That vase was made using code (literally just text) I wrote in OpenSCAD. The model file is the result of the code I wrote and the physical object is the output of the 3D printer that I built. The pretty filament was store-bought, however.

If giving a machine instructions doesn't count as a creative process then programming doesn't count either. Because that's all you're doing when you feed a prompt to an AI: Giving it instructions. It's just the latest tech for giving instructions to machines.

[–] riskable@programming.dev 1 points 1 month ago

Like I said initially, how do we legally define "cloning"? I don't think it's possible to write a law that prevents it without also creating vastly more unintended consequences (and problems).

Let's take a step back for a moment to think about a more fundamental question: Do people even have the right to NOT have their voice cloned? To me, that is impersonation, which is perfectly legal (in the US) as long as you don't claim it's the actual person. That is, if you impersonate someone, you can't claim it's actually that person, because that would be fraud.

In the US—as far as I know—it's perfectly legal to clone someone's voice and use it however TF you want. What you can't do is claim that it's actually that person because that would be akin to a false endorsement.

Realistically—from what I know about human voices—this is probably fine. Voice clones aren't that good. The most effective method is to clone a voice and use it in a voice changer, with a voice actor who can mimic the original person's accent and inflection. But even that has flaws that a trained ear will pick up.

Ethically speaking, there's really nothing wrong with cloning a voice. Because—from an ethics standpoint—it is N/A: There's no impact. It's meaningless; just a different way of speaking or singing.

It feels like it might be bad to sing a song using something like Taylor Swift's voice but in reality it'll have no impact on her or her music-related business.

[–] riskable@programming.dev -2 points 1 month ago (4 children)

I've seen original sources reproduced that show exactly what an AI copied to make images.

Show me. I'd honestly like to see it, because it would mean that something very, very strange is taking place within the model that could be a vulnerability (I work in security).

The closest thing to that I've seen is false watermarks: If the model was trained on a lot of similar images with watermarks (e.g. all images of a particular kind of fungus might have come from a handful of images that were all watermarked), the output will often have a nonsense watermark that sort of resembles the original one. This usually only happens with super specific things, like when you put the Latin name of a plant or tree in your prompt.

Another thing that can commonly happen is hallucinated signatures: On any given image that's supposed to look like a painting/drawing, image models will sometimes put a signature-looking thing in the lower right corner (because that's where most artist signatures are placed).

The reason this happens isn't that the image was directly copied from someone's work; it's that there's a statistical chance that the model (when trained) associated the keywords in your prompt with some images that had such signatures. Model training is getting better at preventing this, though, by applying better bounding box filtering to the images as a pretraining step. E.g. a public domain Audubon drawing of a pelican would only use the bird itself and not the entire image (which would include the artist's signature somewhere).
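Conceptually, that filtering step is just cropping to the labelled subject before the image ever reaches the trainer. Something like this (the filename and box coordinates are made up for illustration):

```python
# Hypothetical pretraining step: crop each training image to the labelled
# subject so stray content (like an artist's signature in a corner) never
# reaches the model.
from PIL import Image

def crop_to_subject(path: str, bbox: tuple[int, int, int, int]) -> Image.Image:
    """bbox is (left, upper, right, lower) in pixels, e.g. from dataset metadata."""
    return Image.open(path).crop(bbox)

# Keep only the pelican, not the page margins or the signature:
pelican = crop_to_subject("audubon_plate_251.jpg", (120, 80, 980, 1410))
pelican.save("pelican_cropped.jpg")
```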

The reason the signature shouldn't be included is that the resulting image wasn't drawn by that artist. That would be tantamount to fraud (bad). Instead, what image model makers do (except OpenAI with ChatGPT/DALL-E) is tell the public exactly what their images were trained on. For example, they'll usually disclose that they used ImageNet (which you yourself can download here: https://www.image-net.org/download.php ).

Note: I'm pretty sure the full ImageNet database is also on Huggingface somewhere if you don't want to create an account with them.

Also note: ImageNet doesn't actually contain images! It's just a database of image metadata that includes bounding boxes. Volunteers—for over a decade—spent a lot of time drawing bounding boxes with labels/descriptions on public images that are available for anyone to download for free (with open licenses!). This means that if you want to train a model with ImageNet, you have to walk the database and download the images from all the URLs it contains.
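The "walking" part is mundane: read each metadata record, fetch the image at its URL, keep the labels alongside it. A hypothetical sketch (the file and field names are invented for illustration; the real annotation formats differ):

```python
# Walk a metadata dump and fetch the actual pixels it points at.
import json
import requests

with open("image_metadata.jsonl") as f:      # hypothetical metadata dump
    for line in f:
        record = json.loads(line)            # e.g. {"id": ..., "url": ..., "bboxes": [...]}
        resp = requests.get(record["url"], timeout=10)
        if resp.ok:
            with open(f"images/{record['id']}.jpg", "wb") as out:
                out.write(resp.content)
        # the bounding boxes / labels travel alongside the pixels into training
```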

If anything was "stolen", it was the time of those volunteers that created the classification system/DB in order for things like OpenCV to work so that your doorbell/security camera can tell the difference between a human and a cat.

[–] riskable@programming.dev 14 points 1 month ago

especially the ones made over the injections of workers

Well there's the problem! As good as it sounds, you actually lose a lot of the nutrition when employees are processed into injectable paste. Ultra processed workers are bad for you.

Eat them raw as capitalism intended!

[–] riskable@programming.dev -1 points 1 month ago (11 children)

If someone has never seen a banana they wouldn't be able to draw it either.

Also, AIs aren't stealing anything. When you steal something you have deprived the original owner of that thing. If anything, AIs are copying things but even that isn't accurate.

When an image AI is trained, it reads through millions upon millions of images that live on the public Internet, and for any given image it will nudge some floating point values by like ±0.01. That's it. That's all they do.

For some reason people have this idea in their heads that every AI-generated image can be traced back to some specific image that it somehow copied exactly then modified slightly and combined together for a final output. That's not how the tech works at all.

You can steal a car. You can't steal an image.

[–] riskable@programming.dev 5 points 1 month ago

These are the same people that would download a car!
