OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series::A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[–] Touching_Grass@lemmy.world 3 points 1 year ago (1 children)

Harry Potter uses so many tropes and so much inspiration from works that came before it. How is that different? Wizards of the Coast should sue her into the ground.

[–] Redditiscancer789@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

Because it's not literally using the same material. You can be inspired by something, à la StarCraft drawing from Warhammer 40k, but you can't use literally the same things. Also, as far as I understand it, you can't copyright broad subject matter: no one can copyright "wizard," but they can copyright "Harry Potter the wizard." You can also tell OpenAI knows it may be doing something wrong, because their latest leak includes passages on how to hide the fact that the LLMs were trained on copyrighted materials.

[–] Touching_Grass@lemmy.world 0 points 1 year ago (1 children)

I would hide stuff too. Copyright laws are out of control. That doesn't mean they did something wrong. It's CYA.

Copyright is for reproducing and selling others' work, not ingesting it. If they found it online, it should be legal to ingest it. If they bought the works, they should also be legally able to train on them.

[–] Redditiscancer789@lemmy.world 2 points 1 year ago (1 children)

No, it does matter where they got the materials. If they illegally downloaded a copy off a website "just 'cause it's on the internet," it's still against the law.

[–] Touching_Grass@lemmy.world 0 points 1 year ago (1 children)

Shouldn't be illegal. Send them a letter saying how angry they are and call it a day.

[–] Redditiscancer789@lemmy.world 2 points 1 year ago (1 children)
[–] Touching_Grass@lemmy.world 1 points 1 year ago (1 children)

A couple of things: that was wrong then, just as it is wrong today. Training data isn't file sharing. Too many of you are ushering in a new era of spying and erosion of the internet on behalf of corporations under the guise of "protecting artists," like they did in the Napster days.

[–] Redditiscancer789@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Not at all. I simply recognize that the argument may have merit, as I said; I never said which side of the aisle I personally fall on. Also, they are a company, so the scrutiny of the methods they use to acquire data is deserved. Data has a price, whether you think it should or not.

[–] Touching_Grass@lemmy.world 1 points 1 year ago (1 children)

And my opinion is, if it has a price, don't give it away for free online where anyone or anything can ingest it. Should web crawlers be paying websites for indexing them?

I also believe in private property. If I buy a book, I can do what I want with it, like use it to train AI. It is my property.

[–] Redditiscancer789@lemmy.world 1 points 1 year ago

Those are two contradictory philosophies. How can one supposedly not care if someone's private property is stolen yet simultaneously believe in private property rights? The argument would indeed come down to whether they stole the book off the internet or bought a copy themselves.