this post was submitted on 13 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Why does no one use it?
I've used petals a ton.
Because Llama-2-70B is similar or better on most metrics, and it's small enough not to need distributed inference.
check out chat.petals.dev
Bad marketing. I only saw it recently.
Plus you get one model and no LoRAs (unless something has changed recently).
It runs a few models, and if other people decide to run models it serves those too. Just try the chat web app or the dashboard to see what's currently running. The issue is not enough people donating compute.
It's terribly inefficient in many ways. Data centers with the best GPUs are the most efficient, hardware- and energy-wise. They are often built in places with access to cheap/green energy and subsidies. Also, for research/development, cash is cheap, so there's little incentive to play with some decentralized setup that adds a layer of technical abstraction plus the need for a community. The opportunity cost way outweighs any savings over running this in a data center for the vast majority of use cases.
Aren't there a lot of people who don't run their GPUs 24/7? That would put the marginal cost of equipment at zero, and electricity costs what, something around $1/W-yr?
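To sanity-check that "$1 per watt-year" figure, here's a quick conversion to the more familiar $/kWh (the $1/W·yr input is the commenter's rough number, not a measured rate):

```python
# Convert a "$1 per watt-year" electricity price into $/kWh.
HOURS_PER_YEAR = 365 * 24  # 8760 hours

usd_per_watt_year = 1.0  # the commenter's rough figure

# 1 W running continuously for a year consumes 8.76 kWh
kwh_per_watt_year = HOURS_PER_YEAR / 1000

usd_per_kwh = usd_per_watt_year / kwh_per_watt_year
print(f"{usd_per_kwh:.3f} $/kWh")  # ~0.114 $/kWh
```

About $0.11/kWh, which is in the ballpark of typical retail electricity rates, so the $1/W·yr shorthand checks out.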
Transferring the state over the internet so the next card can take over is sloooow. You'd want cards that can take a lot of layers to minimize that.
In other words, you want a few big GPUs in the network, not a bunch of small ones.
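A back-of-envelope sketch of why those hops hurt: at each pipeline boundary, the hidden state for the current token has to cross the internet. All numbers below are assumptions (a Llama-2-70B-sized model in fp16, a home uplink, a guessed per-hop latency), not measurements of Petals itself:

```python
# Rough per-token network overhead for pipeline-parallel inference
# split across consumer machines. Every number here is an assumption.
hidden_size = 8192                # Llama-2-70B hidden dimension
bytes_per_value = 2               # fp16
state_bytes = hidden_size * bytes_per_value  # 16 KiB per token per hop

uplink_bytes_per_s = 10e6 / 8     # assumed 10 Mbit/s residential uplink
rtt_s = 0.05                      # assumed 50 ms round-trip per hop

per_hop_s = state_bytes / uplink_bytes_per_s + rtt_s
n_hops = 8                        # e.g. model split across 8 machines

print(f"{1000 * n_hops * per_hop_s:.0f} ms of network overhead per token")
```

With those assumptions you get roughly half a second of pure network overhead per generated token, dominated by latency rather than bandwidth, which is why fewer, bigger GPUs (fewer hops) win.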
Yes, for actually dividing models across machines, which was the original idea. I'd shifted to a different (and less technically interesting) question of sharing GPUs without dividing the model.
For dividing training, though, see this paper:
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
Distributed inference IS indeed slower, BUT it's definitely not too slow for production use. I've used it, and it's still faster than GPT-4 with a proper cluster.