this post was submitted on 29 Sep 2023
398 points (93.8% liked)

Technology


Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

top 50 comments
[–] Soundhole@lemm.ee 87 points 1 year ago* (last edited 1 year ago) (11 children)

Any AI model that uses publicly available information for training should be open source by law.

This business where corporations (and that includes authors, who are published by huge corporations) fight over who "owns" ideas is asinine. When it comes down to it, this is a fight about money being wrapped in an argument about "ideas."

AI models were developed with the collective knowledge and wisdom of society. They're like libraries and should be public like libraries. OpenAI, Google, all those fucks should be forced to open source their models, end of story.

[–] dangblingus@lemmy.world 26 points 1 year ago (2 children)

The trick is getting the octogenarians in the Senate to understand any of what you just wrote.

[–] Soundhole@lemm.ee 5 points 1 year ago

Yup! My ideas about what should happen are so far removed from what will actually happen they could be Planet X.

But that doesn't make me wrong, dammit!

[–] FontMasterFlex@lemmy.world 3 points 1 year ago

One less to educate now. Hopefully replaced by someone that doesn't need diapers.

[–] kibiz0r@midwest.social 7 points 1 year ago (1 children)

I'd say they should have to follow the most-restrictive license of all of their training data, and that existing CC/FOSS licenses don't count because they were designed for use in a pre-LLM world.

It seems like a pretty reasonable request. But people like free stuff, and when they think about who will get screwed by this they like to imagine that they're sticking it to the biggest publishers of mass media.

But IRL, those publishers are giddy with the idea that instead of scouting artists and bullying them into signing over their IP, they can just summon IP on demand.

The people who will suffer are the independents who refused to sign over their IP. They never got their payday, and now they never will either.

[–] Smoogs@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

The people I’m seeing outraged are artists and authors who did not sign their ideas over for public access or for disingenuous use, not a faceless publisher with cloth bags and dollar signs painted on them. Also, I don’t think you understand what public and private ownership means. A person is allowed to privately own their own creation. They don’t owe that to the world. The world isn’t entitled to it.

[–] mojo@lemm.ee 46 points 1 year ago* (last edited 1 year ago)

Here's an idea: legally force companies like OpenAI to rely on opt-in data, rather than build their entire company on stealing massive amounts of data. That includes requiring them to retrain from scratch. Sam Altman was crying for regulations for scary AI, right?

[–] 0ddysseus@lemmy.world 43 points 1 year ago (1 children)

This is no different than every other capitalist enterprise. The whole system works on taking a public resource, claiming private ownership of it, and then selling it back to the public for profit.

First it was farmland, then coal and minerals, oil, seafood, and now ideas. It's how the system works and is the whole reason people have been trying to stop it for the past 150 years.

The people making the laws are there because they and/or their parents and/or grandparents did the exact same thing. As despicable and corrupt as it is, you won't change it by complaining, and no one is going to make a law to stop it.

[–] Franzia@lemmy.blahaj.zone 12 points 1 year ago

God damned right. Every "new" thing tends to be stolen. In more recent history, it's stolen from other capital, or from innovation with a free license, rather than from artwork. Publishers might actually be able to make a problem out of this.

[–] Gibdos@feddit.de 24 points 1 year ago (4 children)

I certainly hope that none of these authors have ever read a book before or have been inspired by something written by another author.

[–] adriaan@sh.itjust.works 33 points 1 year ago (2 children)

That would be a much better comparison if it was artificial intelligence, but these are just reinforcement learning models. They do not get inspired.

[–] Hackerman_uwu@lemmy.world 9 points 1 year ago (1 children)

More to the point: they replicate patterns of words.
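A minimal sketch of what "replicating patterns of words" means, using a toy bigram model in Python. Real LLMs are transformers with billions of parameters, but the training objective is the same flavor: predict the next token from the ones before it. The training sentence here is made up purely for illustration.

```python
# Toy illustration of "replicating patterns of words": a bigram model that
# only knows which word tends to follow which in its training text.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"

# Count which word follows which in the training data.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```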

[–] Shurimal@kbin.social 8 points 1 year ago (2 children)

just reinforcement learning models

...like the naturally occurring neural networks are.

[–] Khalic@kbin.social 26 points 1 year ago (2 children)

The brain does not work the way you think… (I work in the field, bio-informatics). What you call “neural networks” comes from an early misunderstanding of how the brain stores information. It’s a LOT more complicated and, frankly, barely understood.

[–] canihasaccount@lemmy.world 11 points 1 year ago (1 children)

Yeah, accurately simulating a single pyramidal neuron requires an eight-layer deep neural network:

https://www.cell.com/neuron/pdf/S0896-6273(21)00501-8.pdf
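For a sense of scale, here's a rough sketch of an eight-hidden-layer network in PyTorch. This is a generic fully connected stack purely to illustrate the depth involved; the paper itself fits a temporally convolutional network to the neuron's input/output behavior, and the input size below is an arbitrary placeholder.

```python
# Generic illustration of an eight-hidden-layer network (PyTorch).
# Not the paper's architecture; it only conveys the depth involved.
import torch
import torch.nn as nn

class EightLayerNet(nn.Module):
    def __init__(self, n_inputs=1000, n_hidden=128, n_outputs=1):
        super().__init__()
        layers = []
        in_size = n_inputs
        for _ in range(8):  # eight hidden layers
            layers += [nn.Linear(in_size, n_hidden), nn.ReLU()]
            in_size = n_hidden
        layers.append(nn.Linear(in_size, n_outputs))  # e.g. a spike probability
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = EightLayerNet()
dummy_input = torch.randn(1, 1000)   # placeholder "synaptic input" vector
print(model(dummy_input).shape)      # torch.Size([1, 1])
```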

[–] lemmyvore@feddit.nl 10 points 1 year ago (1 children)

Tell you what, you get a landmark legal decision classifying LLM as people and then we'll talk.

Until then it's software being fed content in a way not permitted by its license, i.e. the makers of that software committing copyright infringement.

[–] Touching_Grass@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (1 children)

What exactly was not permitted by the license? Reading?

[–] sab@lemmy.world 14 points 1 year ago (6 children)

Using it to (create a tool to) create derivatives of the work on a massive scale.

[–] SirGolan@lemmy.sdf.org 8 points 1 year ago

Wikipedia: In copyright law, a derivative work is an expressive creation that includes major copyrightable elements of a first, previously created original work.

I think you may be off a bit on what a derivative work is. I don't see LLMs spouting out major copyrightable elements of books. They can give a summary sure, but Cliff Notes would like to have a word if you think that's copyright infringement.

[–] lloram239@feddit.de 5 points 1 year ago (1 children)

Better tell that to Google and their search index, book scanning project, and knowledge graph.

[–] newthrowaway20@lemmy.world 29 points 1 year ago* (last edited 1 year ago) (2 children)

That's an interesting take, I didn't know software could be inspired by other people's works. And here I thought software just did exactly as it's instructed to do. These are language models. They were given data to train those models. Did they pay for the data they used to train them, or did they scrape the internet and steal all these books along with everything everyone else has said?

[–] elbarto777@lemmy.world 18 points 1 year ago (14 children)

These are machines, though, not human beings.

I guess I'd have to be an author to find out how I'd feel about it, to be fair.

[–] Touching_Grass@lemmy.world 8 points 1 year ago

Machines that aren't reproducing or distributing works

[–] Wander@kbin.social 15 points 1 year ago (4 children)

Are you saying the writers of these programs have read all these books, and were inspired by them so much they wrote millions of books? And all this software is doing is outputting the result of someone being inspired by other books?

[–] pavnilschanda@lemmy.world 24 points 1 year ago (4 children)

I hope they can at least get compensated.

[–] Fredselfish@lemmy.world 6 points 1 year ago (1 children)

So where can I check to see if my book was used? I published a book.

[–] Smoogs@lemmy.world 11 points 1 year ago (1 children)

Ok so it’s been stealing art, and now it’s coming for authors. At what point do we hold the coalition who started this shit culpable for numerous counts of plagiarism?

[–] pazukaza@lemmy.ml 4 points 1 year ago

TIL "culpable" is an English word too. Culpable means guilty in Spanish and I thought you were a Spanish speaker doing spanglish. Now I know you're just a man of culture.

There's an idea by Barath Raghavan about an AI dividend, where companies pay each netizen a share for the data they use to train these models.

I'm into this idea, especially if companies can't even manage a simple opt-in mechanism.

[–] Pyr_Pressure@lemmy.ca 7 points 1 year ago (1 children)

Curious if the AI company actually bought those books or if they just acquired them through piracy.

[–] threadloose@midwest.social 5 points 1 year ago

Oh, they're 100% pirated. Sorry this article isn't open access, but the preview should give you enough information. The database is available elsewhere, IIRC. https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/
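For anyone who gets hold of the dataset rather than using the web tool: a rough sketch of the check, assuming you have a plain-text listing of the Books3 entries saved locally, one entry per line. The filename books3_file_list.txt is hypothetical.

```python
# Rough sketch: search a local listing of Books3 entries for an author or title.
import sys

def find_matches(listing_path: str, query: str) -> list[str]:
    """Return every line in the listing that contains the query (case-insensitive)."""
    query = query.lower()
    matches = []
    with open(listing_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if query in line.lower():
                matches.append(line.strip())
    return matches

if __name__ == "__main__":
    for hit in find_matches("books3_file_list.txt", sys.argv[1]):
        print(hit)
```

Run it as, e.g., python check_books3.py "Your Name" and it prints any matching entries.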

[–] Gutless2615@ttrpg.network 4 points 1 year ago (2 children)

Everyone’s a fan of fair use until it’s their work that is transformed.
