this post was submitted on 17 Feb 2024
298 points (97.8% liked)
Technology
I'd be very surprised if people weren't already scraping Reddit for this.
It's all but guaranteed. Reminds me of this Computerphile video: https://youtu.be/WO2X3oZEJOA?t=874 TL;DW: there were "glitch tokens" in GPT's vocabulary (and therefore ChatGPT's) which undeniably came from Reddit usernames.
Note: there's no proof that these Reddit usernames were in the training data (there are even reasons to assume they weren't; watch the video for context), but there's no doubt that OpenAI had scraped Reddit data at some point prior to training, probably mixed in with the rest of their text data. I see no reason to assume they completely removed all Reddit text before training. The video suggests reasons and evidence that they removed certain subreddits, not all of Reddit.
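The mechanism behind those glitch tokens is a property of byte-pair-encoding (BPE) tokenizer training: a string that appears very often in the tokenizer's training corpus (such as a prolific Reddit username) gets merged into a single vocabulary token, even if the matching text is later filtered out of the language model's training data, leaving that token undertrained. Below is a minimal, self-contained BPE sketch of that first step; the toy corpus and the username "zorp" are hypothetical, and this is a simplification of what GPT's actual tokenizer does.

```python
# Toy BPE training sketch: a frequent string ("zorp", standing in for a
# prolific Reddit username) collapses into a single vocabulary token,
# while rarer words remain split into pieces. Hypothetical example only.
from collections import Counter

def train_bpe(corpus: str, num_merges: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules from a corpus; returns merges in order."""
    # Represent each whitespace-separated word as a tuple of symbols.
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge to every word.
        new_words = Counter()
        for word, freq in words.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_words[tuple(merged)] += freq
        words = new_words
    return merges

def encode(word: str, merges: list[tuple[str, str]]) -> list[str]:
    """Tokenize a word by replaying the learned merges in order."""
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

# Toy corpus where the hypothetical username "zorp" dominates.
corpus = ("zorp " * 50) + "cats dogs zip zap"
merges = train_bpe(corpus, num_merges=4)
print(encode("zorp", merges))  # -> ['zorp']: one token for the frequent name
print(encode("cats", merges))  # rarer word stays split into multiple pieces
```

If text containing "zorp" were then scrubbed from the language-model training set, the model would still carry a dedicated token for it that it almost never saw during training, which is exactly the kind of undertrained token the video identifies as "glitch tokens".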
Here is an alternative Piped link:
https://piped.video/WO2X3oZEJOA?t=874
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I'm open-source; check me out at GitHub.