ColorlessCrowfeet

joined 10 months ago
[–] ColorlessCrowfeet@alien.top 1 points 10 months ago

Yes, for actually dividing models across machines, which was the original idea. I'd shifted to a different (and less technically interesting) question of sharing GPUs without dividing the model.

For dividing training, though, see this paper:

SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
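To make "dividing the model" concrete: a minimal sketch (plain Python, toy names, not the SWARM implementation) of the pipeline-style split that paper builds on. Each worker holds a contiguous block of layers, so only activations cross machine boundaries, which is why the communication cost can stay low.

```python
# Minimal sketch of pipeline-style model partitioning (hypothetical
# names; not SWARM's actual code). Each "stage" is a contiguous slice
# of layers held by one worker; only activations move between stages.

def split_layers(layers, n_workers):
    """Assign a contiguous block of layers to each worker."""
    per_worker = -(-len(layers) // n_workers)  # ceiling division
    return [layers[i:i + per_worker] for i in range(0, len(layers), per_worker)]

def forward(stages, x):
    """Run the input through each stage in order; in a real system,
    the hand-off between stages is a network transfer."""
    for stage in stages:
        for layer in stage:
            x = layer(x)
    return x

# Toy example: 8 "layers" that each double their input, split over 3 workers.
layers = [lambda v: 2 * v for _ in range(8)]
stages = split_layers(layers, 3)
print([len(s) for s in stages])  # [3, 3, 2]
print(forward(stages, 1))        # 256
```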

[–] ColorlessCrowfeet@alien.top 1 points 10 months ago (2 children)

> some niche community uses where the budget is none and people will just distribute the electricity/GPU cost

Aren't there a lot of people who don't run their GPUs 24/7? That would put the marginal cost of equipment at zero, and electricity costs what, something around $1/W-yr?
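For anyone checking that figure, the back-of-envelope arithmetic (assuming roughly $0.12/kWh, about the US residential average) works out like this:

```python
# Back-of-envelope check of the ~$1/W-yr figure.
price_per_kwh = 0.12       # USD per kilowatt-hour (assumed rate)
hours_per_year = 24 * 365  # 8760 hours

# One watt running all year consumes 8.76 kWh.
kwh_per_watt_year = hours_per_year / 1000
cost_per_watt_year = kwh_per_watt_year * price_per_kwh
print(f"{kwh_per_watt_year} kWh/W-yr -> ${cost_per_watt_year:.2f}/W-yr")
# 8.76 kWh/W-yr -> $1.05/W-yr
```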