this post was submitted on 10 Jun 2024
326 points (89.0% liked)

Key points:

  • Cara's Rapid Growth: The app gained 600,000 users in a week

  • Artists Leaving Instagram: The controversy around Instagram using images to train AI led many artists to seek an alternative

  • Cara's Features: The app is designed specifically for artists and offers a 'Portfolio' feature. Users can tag fields, mediums, project types, categories, and software used to create their work

  • Scale: While Cara has grown quickly, it is still tiny compared to Instagram's massive user base of two billion

  • Glaze Integration: Cara is working on integrating Glaze directly into the app to give users an easy way to protect their work from being used by AI

More: https://blog.cara.app/blog/cara-glaze-about

[–] General_Effort@lemmy.world 2 points 5 months ago

I'm sure it works fine in the lab. But it really only targets one specific AI model: the Stable Diffusion VAE. I know there are variants of that VAE around, which may or may not be enough to make it moot. The "Glaze" on an image may not survive common transformations, such as rescaling, and it certainly will not survive intentional efforts to remove it, such as appropriate smoothing.
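
To make that concrete, here is a minimal sketch (using Pillow; the scale factor and blur radius are arbitrary values I picked, not anything Glaze specifies) of the kind of cheap rescale-and-smooth pass that a pixel-level perturbation would have to survive:

```python
# Sketch: downscale/upscale plus a mild Gaussian blur -- ordinary image
# transformations that an adversarial perturbation may not survive.
# Assumes Pillow is installed; scale and radius are arbitrary choices.
from PIL import Image, ImageFilter

def wash(path_in: str, path_out: str, scale: float = 0.5, radius: float = 1.0) -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # Rescale down and back up, resampling both ways.
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    restored = small.resize((w, h), Image.LANCZOS)
    # Light smoothing on top.
    restored.filter(ImageFilter.GaussianBlur(radius)).save(path_out)

wash("glazed.png", "washed.png")  # hypothetical file names
```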

In my opinion, there is no point in bothering in the first place. There are literally billions of images on the net. One locks up gems because they are rare. This is like locking up pebbles on the beach. It doesn't matter if the lock is bad.

Saw a post on Bluesky from someone in tech saying that eventually, if it’s human-viewable, it’ll also be computer-viewable, and there’s simply no working around that. I wonder whether you agree with that or not.

Sort of. The VAE, that is, the compression, means that image generation takes less compute, i.e. cheaper hardware and less energy. You could have an image generator that works directly on the same pixels humans see. Actually, that's simpler and existed earlier.
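
For a sense of the numbers, here is a minimal sketch (assuming the torch and diffusers packages; the model name and the 512x512 / 8x-downsampling figures are the standard Stable Diffusion 1.x setup, not something from this thread) of the round trip through that latent space:

```python
# Sketch: encode an image into Stable Diffusion's latent space and back.
# Assumes torch + diffusers and the public sd-vae-ft-mse weights.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

pixels = torch.randn(1, 3, 512, 512)  # stand-in for a real image in [-1, 1]
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
    print(latents.shape)  # torch.Size([1, 4, 64, 64])
    # 3*512*512 = 786,432 pixel values in, 4*64*64 = 16,384 latent values out:
    # ~48x fewer values, which is why diffusing in latent space is so much cheaper.
    restored = vae.decode(latents).sample  # back to pixel space
```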

By Moore's law, it would be many years, even decades, before that efficiency gain is something we can do without. But I think this may become moot once special accelerator chips for neural nets are designed.

What makes it obsolete is the proliferation of open models. E.g., today Stable Diffusion 3 becomes available for download. This attack targets one specific model and may work on variants of it. But as more and more rather different models become available, the whole thing becomes increasingly pointless. Maybe you could target more than one, but it would be more and more effort for less and less effect.