Melodic_Gur_5913

Hello,

Does anyone know of any significant efforts to distribute the compute needed to train LLMs across consumer GPUs spread around the internet? It seems impossible for individuals to match the computing capabilities of big tech, and it got me wondering whether there is a large-scale effort to pool compute for one (or more) mega open-source project.
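For concreteness, here is a minimal sketch of the core idea most such efforts seem to build on: each peer trains locally on its own data shard for a while, and peers only periodically synchronize by averaging parameters (roughly federated averaging; decentralized-training projects like Hivemind build fault-tolerant versions of this over the internet). The peers are simulated in a single process here, and all names and hyperparameters (`NUM_PEERS`, `LOCAL_STEPS`, `make_model`, ...) are hypothetical placeholders, not any real project's API:

```python
# Sketch of volunteer-compute training: peers train locally, then
# periodically average parameters. Peers are simulated in one process;
# a real system replaces the averaging step with a fault-tolerant
# all-reduce over the internet.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    # Stand-in for an actual LLM.
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

NUM_PEERS = 4      # consumer GPUs scattered across the internet
LOCAL_STEPS = 10   # steps each peer runs before synchronizing
ROUNDS = 5         # number of averaging rounds

# Each peer gets its own copy of the model and its own data shard.
global_model = make_model()
peers = [copy.deepcopy(global_model) for _ in range(NUM_PEERS)]
shards = [(torch.randn(64, 16), torch.randn(64, 1)) for _ in range(NUM_PEERS)]

for rnd in range(ROUNDS):
    # 1) Local training: no communication, tolerant of slow/flaky links.
    for model, (x, y) in zip(peers, shards):
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        for _ in range(LOCAL_STEPS):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()

    # 2) Synchronization: average parameters across all peers.
    with torch.no_grad():
        avg_state = {
            k: torch.stack([p.state_dict()[k] for p in peers]).mean(dim=0)
            for k in global_model.state_dict()
        }
        for model in peers:
            model.load_state_dict(avg_state)

    loss0 = nn.functional.mse_loss(peers[0](shards[0][0]), shards[0][1])
    print(f"round {rnd}: loss on peer 0 = {loss0.item():.4f}")
```

The point of the local-steps/sync split is that peers only need bandwidth once per round instead of once per gradient step, which is what could make flaky consumer connections workable at all.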