Me again, re:watermarks.
Frauds, liars, and even pranksters will not watermark their content, or they will strip watermarks from it. The best you can do is get genAI services to implement one, which they already do; it's an insignificant business expense.
So you end up in a situation where most genAI content is marked, except the content you actually want to identify. The net effect of watermarks is to make fraudulent, unmarked content more credible. They make the situation worse.
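To illustrate how little a watermark mandate buys: the weakest schemes just attach provenance metadata (EXIF- or C2PA-style records), which does not survive a re-encode. Here's a minimal Python sketch, assuming Pillow is installed; the filenames are made up. Pixel-domain watermarks are harder to strip, but tools for that circulate too.

```python
# Minimal sketch: metadata-based provenance marks do not survive re-encoding.
# Assumes Pillow is installed; filenames are hypothetical.
from PIL import Image

img = Image.open("generated.png")       # image carrying provenance metadata
clean = Image.new(img.mode, img.size)   # fresh image, no metadata attached
clean.putdata(list(img.getdata()))      # copy only the pixel values
clean.save("laundered.png")             # saved file carries no watermark
```

Three lines of effort, and the only people inconvenienced are the ones who weren't lying in the first place.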
My biggest worry is that we reach a situation similar to the war on drugs, where unthinking moral panic causes society to double down on harmful "solutions". You have to think about how this could possibly be enforced and against whom.
GenAI models are about the size of a movie, which is to say they can be torrented just as easily. Stopping people from sharing non-watermarked generators would require an unprecedented level of internet surveillance. The people caught would, IMHO, be the same kind of people caught torrenting movies: mainly kids. The fraudsters can be prosecuted for fraud anyway, if you catch them. A seriously enforced watermarking law would, IMHO, mostly prosecute kids and other basically harmless people (though they may be using genAI to bully and harass their peers).
Training AI models is not as expensive as one might think. The expensive part is the custom-made training data, as well as the research, the trial and error. Even something as massive as ChatGPT could be trained for less than $5 million, and an image generator can probably be trained for less than $100k. In light of that report that someone defrauded a company of $25 million, that's a cheap investment; maybe something you could monetize on the dark net. You'd have to successfully crack down on the dark net in unprecedented ways.
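For a rough sanity check on those numbers: the standard approximation is that training takes about 6 × parameters × tokens floating-point operations. A back-of-envelope sketch in Python; the parameter and token counts are the figures reported for GPT-3, while the utilization and per-GPU-hour price are assumptions on my part.

```python
# Back-of-envelope training cost, using the standard ~6 * params * tokens
# FLOP estimate. Utilization and hourly rate are assumed, not measured.
params = 175e9       # GPT-3-scale parameter count
tokens = 300e9       # training tokens (GPT-3's reported figure)
flops = 6 * params * tokens

gpu_flops = 312e12   # A100 peak BF16 throughput, FLOP/s
utilization = 0.45   # assumed effective hardware utilization
gpu_hours = flops / (gpu_flops * utilization) / 3600

rate = 2.0           # assumed cloud price per GPU-hour, USD
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * rate:,.0f}")
```

That comes out around $1–2 million, comfortably under the $5 million figure; an image generator, being far smaller, is proportionally cheaper.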
You'd need close monitoring of anything happening in cloud computing, and you'd need to require licenses for high-end GPUs.
The problem isn't that new in principle. I remember police advice to hang up and call back on a known number if someone identifies themselves as a police officer on the phone. I also remember the 1964 movie Fail Safe, a Cold War classic. A squadron of US bombers is accidentally sent on a nuclear raid against Moscow. IDK if the depiction of military practice is in any way accurate. The bombers pass the fail-safe point, after which they can no longer be recalled. In an attempt to stop them, they are radioed by the president and even their wives. They ignore the calls as trained, because it might be a Soviet trick imitating the voices. So IDK if bomber crews were really ever trained to expect voice imitators, but even in the early '60s it must have seemed sufficiently credible to movie audiences.