this post was submitted on 29 Jan 2025
962 points (98.6% liked)


The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?

Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.

It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those it has been taking from, is now complaining about the very practices by which it has built its company.

OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law's fair use protections.

“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”

If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn't have much of a leg to stand on when competitors use strategies common in the world of machine learning to make their own models.

[–] Takumidesh@lemmy.world 15 points 1 day ago* (last edited 1 day ago) (1 children)

I'll just say, it's ok to not know, but saying 'obviously' when you in fact have no clue is a bad look. I think it's a good moment to reflect on how overconfident we can be on the internet, especially about incredibly complex topics that cross multiple disciplines and touch multiple fields.

To answer your question: the model is in fact run entirely locally, but the model doesn't contain all of the data. The model is the output of processing the training data, kind of like how the math expression 1 + 2 contains more data than its output '3'. The resulting model is orders of magnitude smaller than the data it was trained on.

The model consists of a bunch of variables, like knobs on a panel, and the training process is turning the knobs. The knobs themselves are not that big, but it takes a lot of information to know where to turn them to.

Not having access to the dataset is ok from a privacy standpoint, even if you don't know how the data was used or where it was obtained from. The important aspect here is that your prompts are not being transmitted anywhere, because the model is being used locally.

In short, using the model and training the model are very different tasks.
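To make the knob analogy concrete, here's a minimal sketch in plain Python (toy numbers, nothing to do with how a real LLM is built): training needs the whole dataset to find good values for the knobs, but once training is done, running the model only needs the knobs themselves.

```python
# Toy "model": two knobs (a weight and a bias) for y = w * x + b.
# Purely an illustration of training vs. inference, not a real LLM.

# Training data: 1000 (x, y) pairs; the finished model never needs them again.
data = [(i / 1000, 2.0 * (i / 1000) + 1.0) for i in range(1000)]

w, b = 0.0, 0.0   # the "knobs", starting in arbitrary positions
lr = 0.01         # how far each mistake turns the knobs

# Training: walk over the data many times, nudging the knobs to shrink the error.
for _ in range(200):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# Inference: only the two learned numbers are needed, not the 1000 data points.
print(w, b)            # roughly 2.0 and 1.0
print(w * 0.5 + b)     # prediction for x = 0.5, roughly 2.0
```

An LLM is the same idea scaled up to billions of knobs: the file you download is just those numbers, and the text it was trained on isn't inside it.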

Edit: additionally, it's actually very, very easy to know whether a piece of software running on hardware you own is contacting specific servers. The packet has to leave your computer and your router has to tell it where to go, so you can just watch it. I advise you check out a piece of software called Wireshark.
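If you want something scriptable alongside Wireshark, here's a minimal sketch using the third-party psutil package (an assumption on my part, you'd need to `pip install psutil`): it lists every remote address your machine currently has a connection open to, so a "local" model runner that phoned home would show up here as well as in a packet capture.

```python
# Scriptable complement to Wireshark: list every remote endpoint this machine
# currently has a socket open to. Requires psutil (pip install psutil);
# may need admin/root privileges on some operating systems.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr:  # skip listening sockets that have no remote end
        print(f"pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port} ({conn.status})")
```

A packet capture on the same interface is the ground truth: if nothing leaves the machine while you prompt a local model, nothing is being transmitted.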

[–] ddplf@szmer.info 10 points 1 day ago (1 children)

You made me look ridiculously stupid, and rightfully so. Actually, I take that back: I made myself look stupid, and you made it as obvious as it gets! Thanks for the wake-up call.

If I understand correctly, the model is in a way a dictionary of questions with responses, where the journey of figuring out the response is skipped. As in, the answer for the question "What's the point of existence" is "42", but it doesn't contain the thinking process that led to this result.

If that's so, then wouldn't it be especially prone to hallucinations? I don't imagine it would respond adequately to the third "why?" in a row.

[–] Takumidesh@lemmy.world 6 points 1 day ago

You kind of get it. It's not really a dictionary; it's more like a set of steps to transform noise that is tinted with your data into more coherent data: pass the input through a series of valves that are all open a different amount.

If we set the valves just perfectly, the output will kind of look like what we want it to.
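As a rough sketch of the "valves" idea (toy numbers and tiny layers, not any real architecture): the input is pushed through layers of weights, and each weight controls how much of each signal flows through to the next stage.

```python
import math

# Toy "valves": each weight controls how much of a signal passes through.
# Two tiny layers turn a 3-number input into a 2-number output.
# Illustrative only; real models have billions of these weights.

def layer(inputs, weights):
    # Each output is a weighted blend of all inputs, squashed to stay bounded.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

w1 = [[0.5, -0.2, 0.1],   # "valve" settings that training would have found
      [0.3,  0.8, -0.5],
      [-0.1, 0.4,  0.9]]
w2 = [[0.7, -0.3, 0.2],
      [0.1,  0.6, -0.4]]

x = [0.2, -1.0, 0.5]          # the input ("noise tinted with your data")
hidden = layer(x, w1)         # first set of valves
output = layer(hidden, w2)    # second set of valves
print(output)                 # output shaped entirely by the valve settings
```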

Yes, LLMs are prone to hallucinations, which isn't actually always a bad thing; it's only bad if you are trying to do things that need 100% accuracy, like specific math.

I recommend 3blue1brown's videos on LLMs for a nice introduction to how they actually work.