this post was submitted on 26 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I saw an idea about getting a big LLM (30–44 GB) running fast on a cloud server.

What if this server were scalable in compute and the rental cost were shared among a group of users?

Some sort of DAO to get it started? Personally I would love to link advanced LLMs up to Stable Diffusion generation, etc. And OpenAI is too restrictive for my liking. What do you think?

[–] DanIngenius@alien.top 1 points 9 months ago (1 children)

Thanks for your detailed reply. I don't think crowdsourcing GPUs is feasible or desirable, but the idea of only swapping different LoRAs is interesting. Can the LoRAs be loaded separately from the model? Could you load the model once and use two separate LoRAs?

[–] georgejrjrjr@alien.top 1 points 9 months ago

One base model with dozens, maybe hundreds, of adapters would be the goal.
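
For what it's worth, this is how it typically works with the Hugging Face PEFT library: the large base model is loaded once, and each LoRA adapter is just a small set of extra weights that can be attached and switched at runtime. A minimal sketch under that assumption (the model ID, adapter paths, and adapter names below are hypothetical placeholders, not anything from this thread):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the (large) base model once; this is the expensive step.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # hypothetical base model
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach a first LoRA adapter; only the small adapter weights are loaded.
model = PeftModel.from_pretrained(base, "path/to/lora-chat", adapter_name="chat")

# Attach a second adapter onto the same base model.
model.load_adapter("path/to/lora-code", adapter_name="code")

# Switch the active adapter per request without reloading the base weights.
model.set_adapter("chat")
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))

model.set_adapter("code")  # subsequent generations now use the "code" adapter
```

Serving stacks like vLLM and LoRAX build on the same idea, batching requests for many adapters against a single copy of the base weights, which is what would make the shared-server model economical.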