Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a UTF-8 notepad before?

[–] brucethemoose@lemmy.world 3 points 2 months ago

They can cycle through some biases (dozens?) and test them all. Detokenization is super cheap to run; it's not AI or anything.
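
To make "cycle and test" concrete, here's a minimal sketch. It assumes a hypothetical green-list-style scheme where a secret key plus the token ID drives a coin flip; the hash trick, the function names, and the scheme itself are my guesses, not anything OpenAI has confirmed:

```python
import hashlib

def green_fraction(token_ids, key):
    """Fraction of tokens landing on the key's 'favored' side of a
    key-dependent coin flip; unwatermarked text should sit near 0.5."""
    hits = sum(
        hashlib.sha256(f"{key}:{t}".encode()).digest()[0] % 2 == 0
        for t in token_ids
    )
    return hits / len(token_ids)

def best_candidate_key(token_ids, candidate_keys):
    """Cycle through a few dozen candidate keys and report the one
    whose bias shows up strongest in this text."""
    return max(candidate_keys, key=lambda k: green_fraction(token_ids, k))
```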

I'm trying to think of a good analogy for how this would work, and I kinda came up with one. It would be like an image encoder that biases itself toward encoding RGB values (0-255) as even numbers. Subtly, say 30% odd, 70% even.
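
As a sketch of that analogy (the 70/30 split, the +/-1 nudge, and the function name are all just illustration, not a real watermarking scheme):

```python
import numpy as np

def embed_parity_bias(pixels, even_fraction=0.7, seed=None):
    """Nudge each 0-255 channel value toward even parity with probability
    `even_fraction` (odd otherwise). A +/-1 shift is invisible to the eye."""
    rng = np.random.default_rng(seed)
    out = pixels.astype(np.int16)
    want_even = rng.random(out.shape) < even_fraction
    mismatch = want_even != (out % 2 == 0)
    # Flipping parity only ever moves a value by 1; step down at 255.
    out[mismatch] += np.where(out[mismatch] < 255, 1, -1)
    return out.astype(np.uint8)
```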

That's totally imperceptible to humans. And even a "small" sample of the image would carry this bias if pasted into a larger image verbatim, since the sample size is still large (just as the sample size for a bunch of tokens in text is pretty big).
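
And here's the detection side of the same toy scheme: a plain z-score against the 50/50 parity an unbiased image would show. Even a 100x100 RGB crop (~30k values) of a 70%-even image lands dozens of standard deviations from chance, which is the "sample size" point above, and a couple of lines like this is all the detector needs:

```python
from math import sqrt

def parity_bias_zscore(pixels):
    """Standard deviations by which the observed even-value rate
    exceeds the 50% an unwatermarked image would produce."""
    flat = pixels.ravel()
    observed = (flat % 2 == 0).mean()
    stderr = sqrt(0.25 / flat.size)  # binomial std error under p = 0.5
    return (observed - 0.5) / stderr
```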

And I'm not saying it's foolproof... but if that's indeed what they're doing, I think it's a decent way to detect "lazy" OpenAI abusers who aren't working hard to scramble and defeat it.