this post was submitted on 26 Jul 2024
230 points (96.7% liked)

science


A community to post scientific articles, news, and civil discussion.

rule #1: be kind

founded 2 years ago
[–] Hamartiogonic@sopuli.xyz 24 points 1 year ago* (last edited 1 year ago) (6 children)

A few years ago, people assumed that these AIs would continue to get better every year. It seems we are already hitting some limits, and improving the models keeps getting harder and harder. It's like the feature-size (linewidth) limits we have in CPU design.

[–] ArcticDagger@feddit.dk 11 points 1 year ago (3 children)

I think that hypothesis still holds, as it has always assumed training data of sufficient quality. This study is more saying that the places where we've traditionally harvested training data are beginning to be polluted with low-quality, AI-generated data.

[–] HowManyNimons@lemmy.world 20 points 1 year ago (2 children)

It's almost like we need some kind of flag on AI-generated content to prevent it from ruining things.

[–] Hamartiogonic@sopuli.xyz 1 point 1 year ago (1 children)

If that were implemented, it would help both AI devs and ordinary people browsing online.

[–] HowManyNimons@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

File it under "too good to happen". Most writing jobs these days are just proofreading AI-generated shit. We'll need to wait until there's real money in writing scripts that de-pollute content.
