It is effing hilarious. First, OpenAI & friends steal creative works to "train" their LLMs. Then they are insanely hyped for what amounts to glorified statistics, and get "valued" at insane amounts while burning money faster than a Californian forest fire. Then a competitor appears with the same evil energy but slightly better statistics... bam. A trillion dollars of "value" just evaporates as if it never existed.
And then suddenly people are complaining that DeepSuck is “not privacy friendly” and stealing from OpenAI. Hahaha. Fuck this timeline.
It never did exist. This is the problem with the stock market.
That's why "value" is in quotes. It's not that it didn't exist, it's just that it's purely speculative.
Hell Nvidia's stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.
Stock investing is just gambling on whatever is public opinion, which is notoriously difficult because people are largely dumb and irrational.
It's the same hardware; the problem for them is that DeepSeek found a way to train their AI much more cheaply, using far fewer than the hundreds of thousands of Nvidia GPUs that OpenAI, Meta, xAI, Anthropic, etc. use.
The way they found to train their AI cheaper isn't novel; they just stole it from OpenAI (not that I care). They still need GPUs to process the prompts and generate the responses.
Common wisdom said that these models need CUDA to run properly, and DeepSeek doesn't.
CUDA being taken down a peg is the best part for me. Fuck proprietary APIs.
They replaced it with a lower-level Nvidia-exclusive proprietary API, though.
People are really misunderstanding what has happened.
That's a damn shame.
Sure but Nvidia still makes the GPUs needed to run them. And AMD is not really competitive in the commercial GPU market.
AMD apparently has the 7900 XTX outperforming the 4090 in Deepseek.
Those aren't commercial GPUs though. These are:
https://developer.nvidia.com/blog/introducing-hgx-a100-most-powerful-accelerated-server-platform-for-ai-hpc/
Someone should just make an AiPU. I'm tired of all GPUs being priced exorbitantly.
Okay, but then why would anyone make non-AiPUs if the tech is the same and they could sell the same amount at a higher cost?
Because you could charge more for "AiPUs" than you already do for GPUs, since capitalists have brain rot. Maybe we just need to invest in that open source GPU project, if it's still around.
That's what I said.
If a GPU and a hypothetical AiPU are the same tech, but nVidia could charge more for the AiPU, then why would they make and sell GPUs?
It's the same reason why they don't clamp down on their pricing now: they don't care if you are able to buy a GPU, they care that Twitter or Tesla or OpenAI are buying them 10k at a time.
Yeah and then in this "free market" system someone can come make cheaper GPUs marketed at gamers and there ya go. We live again.
Except "free market" ideals break down when there are high barriers to entry, like... chip fabrication.
Also, that's already what's happening? If you don't want to pay for nVidia, you can get AMD or Intel ARC for cheaper. So again, there's literally no reason for nVidia to change what they're doing.
I know you're right. But I'm just making pro-consumer suggestions, as if anybody but us scrubs at the bottom gives a fuck about those. Moving the marketing to a different component would lower the perceived and real value of GPUs, so us lowly consumers could once again partake. But it's not like it matters, because we're at some strange moment in time where the VRAM on cards isn't matching what the games say they need.
They need less powerful hardware, and less hardware in general, though; they acted like they needed more.
Chinese GPUs are not far behind in GFLOPS. Nvidia's advantage is CUDA, drivers, and cluster interconnects.
AFAIU, DeepSeek did use CUDA.
In general, computing advances have rarely resulted in using half as many computers, though I could be wrong at the datacenter/hosting level once things mature.
Not CUDA, but a lower-level Nvidia proprietary API; your point still stands, though.
"valuation" I suppose. The "value" that we project onto something whether that something has truly earned it.
You know what else isn’t privacy friendly? Like all of social media.
I hear tulip bulbs are a good investment...
How much for two thousand?
Tree fiddy 🦕
Nah bitcoin is the future
Edit: /s I was trying to say bitcoin = tulips
Capitalism basics, competition of exploitation
You can also just run DeepSeek locally if you are really concerned about privacy. I did it on my 4070 Ti with the 14b distillation last night. There's a Reddit thread floating around that describes how to do it with Ollama and a chatbot program.
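Not the exact steps from that thread, but a minimal sketch of what "running it locally" looks like, assuming Ollama is installed and you've already pulled one of the DeepSeek R1 distillations (the model tag below is my guess; yours may differ):

```python
# Minimal sketch: ask a locally running Ollama server a question.
# Assumes something like `ollama pull deepseek-r1:14b` has been run
# (the exact model tag is an assumption on my part).
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "deepseek-r1:14b") -> str:
    # Ollama serves a local HTTP API on localhost:11434 by default.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Summarize what an LLM distillation is in one sentence."))
```

Everything talks to localhost, so the prompt never leaves your machine unless the chatbot frontend itself decides to phone home.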
That is true, and running locally is better in that respect. My point was more that privacy was hardly ever an issue until suddenly now.
Wasn't zuck the cuck saying "privacy is dead" a few years ago 🙄
Absolutely! I was just expanding on what you said for others who come across the thread :)
I'm an AI/comp-sci novice, so forgive me if this is a dumb question, but does running the program locally allow you to better control the information that it trains on? I'm a college chemistry instructor who has to write lots of curriculum, assignments and lab protocols; if I ran DeepSeek locally and fed it all my chemistry textbooks and previous syllabi and assignments, would I get better results when asking it to write a lab procedure? And could I then train it to cite specific sources when it does so?
in a sense: if you don't let it connect to the internet, it won't be able to take your data to the creators
I'm not all that knowledgeable either, lol. It is my understanding, though, that what you download, the "model," is the result of their training. You would need some other way to train it, and I'm not sure how you would go about doing that. The model is essentially the "product" that is created from the training.
And how does that help with the privacy?
If you're running it on your own system it isn't connected to their server or sharing any data. You download the model and run it on your own hardware.
From the thread I was reading, people tracked outgoing packets, and it seemed to just be the chatbot program sending analytics, not anything going to DeepSeek.
How do you know it isn't communicating with their servers? Obviously it needs internet connection to work, so what's stopping it from sending your data?
Why do you think it needs an Internet connection? Why are you saying "obviously"?
How else does it figure out what to say if it doesn't have access to the internet? Genuine question, I don't imagine you're downloading the entire dataset with the model.
I'll just say, it's ok to not know, but saying "obviously" when you in fact have no clue is a bad look. I think it's a good moment to reflect on how overconfident we can be on the internet, especially about incredibly complex topics that cross multiple disciplines and touch multiple fields.
To answer your question: the model is in fact run entirely locally. But the model doesn't have all of the data. The model is the output of the processed training data, kind of like how the math expression 1 + 2 contains more data than its output "3"; the resulting model is orders of magnitude smaller.
The model consists of a bunch of variables, like knobs on a panel, and the training process is turning the knobs. The knobs themselves are not that big, but it takes a lot of information to know where they should be turned to.
Not having access to the dataset is ok from a privacy standpoint, even if you don't know how the data was used or where it was obtained from. The important aspect here is that your prompts are not being transmitted anywhere, because the model is being used locally.
In short using the model and training the model are very different tasks.
Edit: additionally, it's actually very, very easy to know whether a piece of software running on hardware you own is contacting specific servers. The packets have to leave your computer and your router has to send them somewhere, so you can just watch them. I advise you to check out a piece of software called Wireshark.
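If Wireshark feels like overkill, here's a rough sketch of the same sanity check in Python, assuming the third-party psutil package is installed; it only lists connections that are currently open, while Wireshark sees every packet:

```python
# Rough sketch: list which processes currently have outbound connections
# and where they go. Assumes `pip install psutil`; may need admin/root
# privileges on some systems to see other processes' connections.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if not conn.raddr:
        continue  # skip listening sockets with no remote endpoint
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "?"
    except psutil.NoSuchProcess:
        name = "?"
    print(f"{name:<20} -> {conn.raddr.ip}:{conn.raddr.port} ({conn.status})")
```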
You made me look ridiculously stupid and rightfully so. Actually, I take that back, I made myself look stupid and you made it obvious as it gets! Thanks for the wake up call
If I understand correctly, the model is in a way a dictionary of questions with responses, where the journey of figuring out the response is skipped. As in, the answer for the question "What's the point of existence" is "42", but it doesn't contain the thinking process that led to this result.
If that's so, then wouldn't it be especially prone to hallucinations? I don't imagine it would respond adequately to the third "why?" in a row.
You kind of get it. It's not really a dictionary; it's more like a set of steps that transform noise, tinted with your data, into more coherent data: pass the input through a series of valves that are each open a different amount.
If we set the valves just perfectly, the output will kind of look like what we want it to.
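A toy illustration of that (nothing like a real language model's architecture, just the shape of the idea): once the "valves" are set, using the model is just pushing numbers through them, and nothing about them changes while you use it.

```python
# Toy "stack of valves" illustration, not a real LLM: inference only
# reads the fixed weights, it never modifies them.
import numpy as np

rng = np.random.default_rng(0)

# The "valves/knobs": weight matrices that training would have set.
weights = [rng.standard_normal((8, 8)) * 0.5 for _ in range(3)]

def run_model(x: np.ndarray) -> np.ndarray:
    for w in weights:
        # Each layer lets parts of the signal through by different amounts.
        x = np.maximum(0.0, w @ x)
    return x

print(run_model(rng.standard_normal(8)))
```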
Yes, LLMs are prone to hallucinations, which isn't always actually a bad thing; it's only bad if you are trying to do things you need 100% accuracy for, like specific math.
I recommend 3blue1brown's videos on LLMs for a nice introduction into how they actually work.
To add a tiny bit to what was already explained by Takumidesh: you do actually download quite a bit of data to run it locally. The "smaller" 14b model I used was a 9GB download. The 32b one is 20GB, and being all "text", that's a lot of information.
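Those sizes roughly line up with back-of-the-envelope arithmetic, if I assume (and it is an assumption) that the downloads are weights quantized to about 5 bits per parameter:

```python
# Back-of-the-envelope: model file size is roughly
# parameter count x bits per parameter / 8.
def size_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(size_gb(14, 16))  # ~28 GB if stored as full 16-bit floats
print(size_gb(14, 5))   # ~8.8 GB, close to the 9 GB download
print(size_gb(32, 5))   # ~20 GB, matching the 32b download
```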