
TikTok's parent company, ByteDance, has been secretly using OpenAI's technology to develop its own competing large language model (LLM). "This practice is generally considered a faux pas in the AI world," writes The Verge's Alex Heath. "It's also in direct violation of OpenAI's terms of service, which state that its model output can't be used 'to develop any artificial intelligence models that compete with our products and services.'"

[–] cmnybo@discuss.tchncs.de 60 points 11 months ago (8 children)

Training one AI with the output of another AI will just make an even crappier AI.

[–] ripe_banana@lemmy.world 19 points 11 months ago

There's actually a whole subfield of AI focused on training one model on the output of another, called knowledge distillation.
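
In case it helps, here's a minimal sketch of the classic soft-label formulation (Hinton et al.), where the student matches the teacher's temperature-softened distribution. The toy linear models and the random batch are hypothetical stand-ins, not any real system:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy models: in practice the teacher is a big pretrained
# network and the student is the smaller model you want to train.
teacher = nn.Linear(784, 10)
student = nn.Linear(784, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

def distillation_step(x):
    with torch.no_grad():
        teacher_logits = teacher(x)  # the "output of another AI"
    student_logits = student(x)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.randn(32, 784)  # fake batch, purely for illustration
print(distillation_step(x))
```

The student never sees ground-truth labels here; it learns entirely from the teacher's outputs, which is exactly the "train one model on another's output" setup.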

[–] ech@lemm.ee 9 points 11 months ago

Depends on how it's done. GAN (Generative Adversarial Network) training works with exactly that: the networks train against each other, each improving the other over time.
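
Roughly, the adversarial loop looks like this (a minimal sketch with toy 1-D data and tiny MLPs, purely for illustration):

```python
import torch
import torch.nn as nn

# Generator and discriminator train against each other:
# D learns to spot fakes, G learns to fool the improving D.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 3   # toy "real" data ~ N(3, 2)
    fake = G(torch.randn(64, 8))        # generator's attempt

    # Discriminator step: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to get the updated D to call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```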

[–] altima_neo@lemmy.zip 9 points 11 months ago

Works kinda neat with Stable Diffusion tho

[–] CaptainSpaceman@lemmy.world 7 points 11 months ago

Like photocopying a picture of a turd

[–] SeaJ@lemm.ee 7 points 11 months ago* (last edited 11 months ago) (1 children)

I've watched Multiplicity enough times to know you get a slightly less functional copy.

[–] cybersandwich@lemmy.world 4 points 11 months ago

She touched my peppy, Steve.

[–] betterdeadthanreddit@lemmy.world 3 points 11 months ago

Sounds like what you'd get if you ordered a ChatGPT off of Wish dot com. Cheap knock-offs that blatantly steal ideas/designs and somewhat work are kinda their thing.

[–] ZickZack@fedia.io 2 points 11 months ago

Not necessarily: recent work indicates that filtering the output of fine-tuned LLMs greatly improves data efficiency (e.g., phi-1). Further, if you put human selection on top of LLM-generated content, you can get great results: the LLM generation serves as a soft curriculum, with the human selection biasing the data towards higher quality.
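
As a rough sketch of that kind of pipeline (generate, quality_score, and fine_tune are hypothetical placeholders for whatever stack you use, not any particular library's API; phi-1 itself used LLM-generated "textbook" data filtered for quality):

```python
# Hypothetical pipeline: generate candidates with one LLM, keep only
# the high-quality ones, then fine-tune a new model on the survivors.
# The filter step is what biases the data towards higher quality.

def build_training_set(prompts, generate, quality_score, threshold=0.8):
    """Keep only generations that pass the quality filter (a classifier
    score, or human selection acting as the filter)."""
    kept = []
    for prompt in prompts:
        candidate = generate(prompt)  # output of the "teacher" LLM
        if quality_score(candidate) >= threshold:
            kept.append({"prompt": prompt, "completion": candidate})
    return kept

# dataset = build_training_set(prompts, generate, quality_score)
# fine_tune(student_model, dataset)  # train the new model on filtered data
```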