this post was submitted on 28 Feb 2024
403 points (97.6% liked)
Technology
you are viewing a single comment's thread
Maybe we need to label AI-generated content to, you know, avoid confusion.
Sorry, best we can do is a race to the bottom fueled by greed and incompetence.
That will be a refreshing change.
That's what has been happening, and is likely what will continue to happen. Not much change there really...
Yes that’s the joke.
Wouldn't it have to be funny to be a joke?
I thought I was being funny. Sorry if it didn't tickle you just right.
Please respect my personal space and refrain from tickling me.
Comedy is hard.
I'm sure we can compromise on a mandatory database of registered AI-generated content that only the corporations can read from but everyone using AI-generated content is required by law to write to, with hefty fines (but only for regular people).
Oh goody. I've been wanting to use this since my Slashdot days... today is my first chance!
I traced this baby back to January 19th, 2004: https://craphound.com/spamsolutions.txt
Thanks, I was wondering how old it was when they said "Slashdot days."
Oh, do me next, do me. Open-source adversarial models trained to detect and actively label content they identify as AI-generated. It would probably end up looking like a browser extension or something. uBlock, but for AI, basically.
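(A minimal sketch of the detector half of that idea, assuming a toy bag-of-words classifier stands in for a real adversarially trained model; the training snippets and the `label_content` helper are entirely hypothetical:)

```python
# Toy stand-in for the detector an "uBlock for AI" extension might call.
# A real tool would use an adversarially trained model; this just fits a
# bag-of-words classifier on a couple of hand-labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: snippets already known to be AI- or human-written.
texts = [
    "Certainly! Here is a comprehensive overview of the topic.",
    "lol no idea, ask bob when he's back from lunch",
]
labels = ["ai", "human"]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

def label_content(text: str) -> str:
    """Return the text with the label the extension would overlay on it."""
    verdict = detector.predict([text])[0]
    return f"[{verdict.upper()}?] {text}"

print(label_content("Certainly! Here is a comprehensive summary."))
```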
Sounds great, how do we enforce it?
If the AIs want to avoid digital incest they'll enforce it for themselves.
The AIs don't want anything themselves, and those who make the decisions about them want the most profit. What costs more: verifying training data, or AI incest?
Sounds like something an advanced language learning model would say....
It's important to understand that a language modelling AI can only produce responses based on its inputs.
Ah, you're suggesting using RFC 3514. Good thinking.
Thank you for bringing that standard to my attention.
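(For anyone who missed the reference: RFC 3514 is the April Fools' RFC defining the IP header "evil bit", which malicious packets are required to set. Scapy honours the joke by naming the reserved IP flag "evil", so a compliant evil packet is a one-liner; a sketch, assuming Scapy is installed and you have the raw-socket privileges needed to actually send it:)

```python
# RFC 3514 compliance: Scapy names the reserved bit of the IP flags
# field "evil" as a nod to the RFC, so malicious traffic can dutifully
# declare itself.
from scapy.all import IP, ICMP, send

# A standards-compliant evil packet, aimed at a TEST-NET documentation address.
pkt = IP(dst="192.0.2.1", flags="evil") / ICMP()
pkt.show()   # inspect the header: the flags field reads "evil"
# send(pkt)  # uncomment to transmit (needs root); firewalls may now drop it lawfully
```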
Far too late for that now.