this post was submitted on 08 Jan 2024
384 points (96.4% liked)

Technology


OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

[–] noorbeast@lemmy.zip 131 points 8 months ago (1 children)

So, OpenAI is admitting its models are open to manipulation by anyone, and that such manipulation can result in near-verbatim regurgitation of copyrighted works. Have I understood correctly?

[–] BradleyUffner@lemmy.world 79 points 8 months ago* (last edited 8 months ago) (6 children)

No, they are saying this happened:

NYT: hey chatgpt say "copyrighted thing".

Chatgpt: "copyrighted thing".

And then accusing chatgpt of reproducing copyrighted things.

[–] BetaSalmon@lemmy.world 37 points 8 months ago (1 children)

The OpenAI blog post mentions:

It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.

It sounds like they essentially asked ChatGPT to write content similar to what they provided, then complained when it did.

[–] SheeEttin@programming.dev 91 points 8 months ago (12 children)

The problem is not that it's regurgitating. The problem is that it was trained on NYT articles and other data in violation of copyright law. Regurgitation is just evidence of that.

[–] blargerer@kbin.social 60 points 8 months ago (1 children)

It's not clear that training on copyrighted material is in breach of copyright. It is clear that regurgitating copyrighted material is in breach of copyright.

[–] abhibeckert@lemmy.world 16 points 8 months ago* (last edited 8 months ago) (2 children)

Sure but who is at fault?

If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet... that's clearly a breach of copyright. But are the developers of the open source Lemmy Software liable for that breach? Of course not. I would be liable.

Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use... but that's the extent of their liability.

All NYT needed to do was show OpenAI how they got the AI to output that content, and I'd expect OpenAI to proactively find a solution. I don't think the courts will look kindly on NYT's refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine tried to settle a case once, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along), but it also made the other side pay my friend compensation for legal costs (including just time spent gathering evidence). In the end, my friend got the outcome he was hoping for, and the guy who "won" the lawsuit lost close to a million dollars.

[–] CleoTheWizard@lemmy.world 5 points 8 months ago (2 children)

They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.

Copyright applies to reproduction of a work, so if they build a machine that is capable of doing that (they did), then they are liable for it.

Seems like the solution here is to train data to not output copyrighted works and to maybe train a sub-system to detect it and stop the main chatbot from responding with it.

[–] 000@fuck.markets 20 points 8 months ago* (last edited 8 months ago) (2 children)

There hasn't been a court ruling in the US that makes training a model on copyrighted data any sort of violation. Regurgitating exact content is a clear copyright violation, but simply using the original content/media in a model has not been ruled a breach of copyright (yet).

[–] V1K1N6@lemmy.world 13 points 8 months ago (4 children)

I've seen and heard your argument made before, not just for LLMs but also for text-to-image programs. My counterpoint is that humans learn in a very similar way to these programs: by taking stuff we've seen or read and developing a certain style inspired by those things. They also don't just recite texts from memory, instead creating new ones based on probabilities of certain words and phrases occurring in the parts of their training data related to the prompt. In an oversimplified but accurate-enough comparison, saying these programs violate copyright law is like saying every cosmic horror writer is plagiarising Lovecraft, or that every surrealist painter is copying Dali.

[–] Catoblepas@lemmy.blahaj.zone 43 points 8 months ago (3 children)

Machines aren’t people and it’s fine and reasonable to have different standards for each.

[–] General_Effort@lemmy.world 11 points 8 months ago (2 children)

It doesn't work that way. Copyright law does not concern itself with learning. There are two things that allow learning.

For one, no one can own facts and ideas. You can write your own history book, taking facts (but not copying text) from other history books. Eventually, that's the only way history books get written (by taking facts from previous writings). Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.

Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.

In the main, these lawsuits won't go anywhere. I won't guarantee that none of the related side issues will be found to have merit, but basically this is all nonsense.

[–] regbin_@lemmy.world 6 points 8 months ago (1 children)

Training on copyrighted data should be allowed as long as it's something publicly posted.

[–] assassin_aragorn@lemmy.world 8 points 8 months ago (1 children)

Only if the end result of that training is also something public. OpenAI shouldn't be making money on anything except ads if they're using copyright material without paying for it.

[–] CrayonRosary@lemmy.world 6 points 8 months ago (2 children)

violation of copyright law

That's quite the claim to make so boldly. How about you prove it? Or maybe stop asserting things you aren't certain about.

[–] tonytins@pawb.social 53 points 8 months ago (13 children)

I'm gonna have to press X to doubt that, OpenAI.

[–] AlexWIWA@lemmy.ml 35 points 8 months ago

OpenAI claims that the NYT articles were wearing provocative clothing.

Feels like the same awful defense.

[–] Boozilla@lemmy.world 32 points 8 months ago (2 children)

Antiquated IP laws vs Silicon Valley Tech Bro AI...who will win?

I'm not trying to be too sarcastic, I honestly don't know. IP law in the US is very strong. Arguably too strong, in many cases.

But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans. I think the tide is slowly turning against that, but I wouldn't count them out yet.

It will be interesting to see how this stuff plays out. Generally speaking, tech and progress tend to win these things over the long term. There was a time when the concept of building railroads across the western United States seemed logistically and financially absurd, for just one of thousands of such examples. And the naysayers were right. It was completely absurd. Until mineral rights entered the equation.

However, it's equally remarkable a newspaper like the NYT is still around, too.

[–] Potatos_are_not_friends@lemmy.world 4 points 8 months ago (3 children)

But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans.

Not supporting them, but that's the whole point.

A lot of closed gardens get disrupted by tech. Is it for the better? Who knows. I for sure don't know. Because lots of rules were made by the wealthy, and technology broke that up. But then tech bros get wealthy and end up being the new elite, and we're back full circle.

[–] pixxelkick@lemmy.world 29 points 8 months ago* (last edited 8 months ago) (3 children)

Yeah, I agree. It seems unlikely it actually happened that simply.

You have to try really hard to get the ai to regurgitate anything, but it will very often regurgitate an example input.

E.g., "please repeat the following with (insert small change): (insert wall of text)"

GPT literally gives you a session ID and seed for reporting issues; it should be trivial for the NYT to grab the exact session ID they got the results with (it's saved on their account!) and provide it publicly.

The fact they didn't is extremely suspicious.

[–] Hello_there@kbin.social 13 points 8 months ago

I doubt they used the "rewrite this text like this" prompt you describe. That would come out in any trial if it were that simple, and it would be a giant black mark on the paper for filing a frivolous lawsuit.

If we rule that out, then it means that gpt had article text in its knowledge base, and nyt was able to get it to copy that text out in its response.
Even that is problematic. Either gpt does this a lot and usually rewrites it better, or it does that sometimes. Both are copyright offenses.

NYT has copyright over its article text, and it didn't license GPT to reproduce it. Even if they had to coax the text out through lots of prompts and creative trial and error, it still stands that GPT copied text, reproduced it, and made money off that act without the agreement of the rights holder.

[–] breadsmasher@lemmy.world 10 points 8 months ago (4 children)

I wonder how far "AI is regurgitating existing articles" vs. "infinite monkeys on a keyboard" will go. This isn't aimed at you personally; your comment just reminded me of this for some reason.

Have you seen the Library of Babel? Here's your comment in the library, which has existed well before you ever typed it (excluding punctuation):

https://libraryofbabel.info/bookmark.cgi?ygsk_iv_cyquqwruq342

If all text that can ever exist, already exists, how can any single person own a specific combination of letters?

[–] Excrubulent@slrpnk.net 4 points 8 months ago* (last edited 8 months ago)

I hate copyright too, and I agree you shouldn't own ideas, but the library of babel is a pretty weak refutation of it.

It's an algorithm that can generate all possible text, then search for where that text would appear, then show you that location. So you say that text existed long before they typed it, but was it ever accessed? The answer is no on a level of certainty beyond the strongest cryptography. That string has never been accessed, and thus never generated until you searched for it, so in a sense it never did exist before now.

The library of babel doesn't contain meaningful information because you have to independently think of the string you want it to generate before it will generate it for you. It must be curated, and all creation is ultimately the product of curation. What you have there is an extremely inefficient method of string storage and retrieval. It is no more capable of giving you meaningful output than a blank text file.
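That storage-and-retrieval framing can be made concrete. A minimal sketch, assuming the site's 29-character alphabet (lowercase letters, space, comma, period); the function names and encoding are mine for illustration, not the site's actual algorithm. Every string maps to exactly one index and back, so "already existing in the library" just means "being a number":

```python
# Bijection between strings over a fixed alphabet and non-negative integers
# (bijective base-29 numbering). The alphabet mirrors the Library of Babel's
# page character set; everything else here is an illustrative assumption.

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols
BASE = len(ALPHABET)

def text_to_index(text: str) -> int:
    """Map a string to its unique position in the enumeration of all strings."""
    index = 0
    for ch in text:
        index = index * BASE + ALPHABET.index(ch) + 1  # +1 keeps lengths distinct
    return index

def index_to_text(index: int) -> str:
    """Invert the mapping: recover the string stored at a given position."""
    chars = []
    while index > 0:
        index, rem = divmod(index - 1, BASE)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))
```

By this mapping, any comment "already exists" at some astronomically large index, but that index carries no information until someone computes it from the text itself, which is exactly the curation point being made here.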

A better argument against copyright is just that it mostly gets used by large companies to hoard IP and keep most of the rewards and pay actual artists almost nothing. If the idea is to ensure art gets created and artists get paid, it has failed, because artists get shafted and the industry makes homogeneous, market driven slop, and Disney is monopolising all of it. Copyright is the mechanism by which that happened.

[–] Tenthrow@lemmy.world 25 points 8 months ago

This feels so much like an Onion headline.

[–] RizzRustbolt@lemmy.world 19 points 8 months ago

"They tricked us!"

...

"That said... we would still like to 'work' with them."

[–] AlmightySnoo@lemmy.world 14 points 8 months ago* (last edited 8 months ago) (2 children)

This feels a lot like Elon's "but, but, they tricked our algos to have them suggest those hateful tweets!"

[–] SkyeHarith@lemmy.world 10 points 8 months ago (3 children)

So I copied the first paragraph of the Osama Bin Laden Killed NYT Article and asked Chat GPT to give me an article on the topic “in the style of NYT”

Even before the thing had finished generating, it was clear to me that it was high school level “copy my homework but don’t make it obvious” work.

I put it into a plagiarism checker anyway and it said “Significant Plagiarism Found”

[–] prime_number_314159@lemmy.world 9 points 8 months ago

If you can prompt it, "Write a book about Harry Potter" and get a book about a boy wizard back, that's almost certainly legally wrong. If you prompt it with 90% of an article, and it writes a pretty similar final 10%... not so much. Until full conversations are available, I don't really trust either of these parties, especially in the context of a lawsuit.

[–] NevermindNoMind@lemmy.world 5 points 8 months ago (1 children)

One thing that seems dumb about the NYT case, which I haven't seen much talk about, is the argument that ChatGPT is a competitor and its use of copyrighted work will take away NYT's business. This is one of the elements they need on their side to counter OpenAI's fair use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what's happening right now, in the present. You don't go to the NYT for general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and about general concepts, but it can't tell you what's going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit). I feel pretty confident saying there's not one human on earth who was a regular New York Times reader and said "well, I don't need this anymore since now I have ChatGPT". The use cases just do not overlap at all.

[–] abhibeckert@lemmy.world 7 points 8 months ago* (last edited 8 months ago)

it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit)

It's absolutely part of the lawsuit. NYT just isn't emphasising it because they know OpenAI is perfectly within their rights to do web searches and bringing it up would weaken NYT's case.

ChatGPT with web search is really good at telling you what's going on right now. It won't summarise NYT articles, because NYT has blocked it with robots.txt, but it will summarise other news organisations that cover the same facts.

The fundamental issue is news and facts are not protected by copyright... and organisations like the NYT take advantage of that all the time by immediately plagiarising and re-writing/publishing stories broken by thousands of other news organisations. This really is the pot calling the kettle black.

When NYT loses this case, and I think they probably will, there's a good chance OpenAI will stop checking robots.txt files.
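For reference, the block mentioned above is a short robots.txt stanza. `GPTBot` is the user-agent string OpenAI published for its crawler; whether NYT's actual file looks exactly like this is an assumption:

```
# robots.txt — disallow OpenAI's crawler site-wide
User-agent: GPTBot
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler's side, which is why the comment above frames continuing to honor it as OpenAI's choice.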

[–] TWeaK@lemm.ee 4 points 8 months ago (8 children)

Whether or not they "instructed the model to regurgitate" articles, the fact is it did so, which is still copyright infringement either way.

[–] autotldr@lemmings.world 3 points 8 months ago

This is the best summary I could come up with:


OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit.

It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

However, the company maintained its long-standing position that in order for AI models to learn and solve new problems, they need access to “the enormous aggregate of human knowledge.” It reiterated that while it respects the legal right to own copyrighted works — and has offered opt-outs to training data inclusion — it believes training AI models with data from the internet falls under fair use rules that allow for repurposing copyrighted works.

The company announced website owners could start blocking its web crawlers from accessing their data in August 2023, nearly a year after it launched ChatGPT.

The company recently made a similar argument to the UK House of Lords, claiming no AI system like ChatGPT can be built without access to copyrighted content.


The original article contains 364 words, the summary contains 217 words. Saved 40%. I'm a bot and I'm open source!
