LocalLLaMA, 30 Oct 2023

Hello,

Does anyone know of any significant efforts to distribute the compute for training LLMs across consumer GPUs spread around the internet? It seems impossible to match the computing capabilities of big tech, and it got me wondering whether there is a large-scale effort to pool compute for one (or more) mega open-source project.

1 comment
FlishFlashman@alien.top, 2 years ago

Someone asked something very similar in the past day, but I think it was from the angle of training as proof of work.

There is this: https://petals.dev
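
For context, Petals runs large models over a swarm of volunteer GPUs, with each peer serving a slice of the layers; it targets inference and fine-tuning of a shared model rather than pretraining from scratch. As a rough illustration (not from this thread), here is a minimal sketch of client-side usage, assuming the `petals` package's `AutoDistributedModelForCausalLM` wrapper behaves as described in its README; the model name is only a placeholder for whatever the public swarm is serving.

```python
# Minimal sketch, assuming the petals package's AutoDistributedModelForCausalLM
# wrapper as described in the project's README. The model name is a placeholder
# for a model currently hosted by the public swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example swarm-hosted model

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Transformer blocks are served by remote peers in the swarm,
# so the full model never has to fit in local VRAM.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference over volunteer GPUs", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```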