Thanks for your detailed reply. I don't think crowdsourcing GPUs is feasible or desirable, but the idea of only using different LoRAs is interesting. Can the LoRAs be loaded separately from the models, i.e. load the base model once and then use two separate LoRAs with it?
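If I'm reading the Hugging Face PEFT docs right, something like this should work (the model name and adapter paths below are just placeholders, not a real setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 13B base model once (placeholder model name)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach the first LoRA adapter (hypothetical local path)
model = PeftModel.from_pretrained(base, "loras/assistant-lora", adapter_name="assistant")

# Load a second adapter on top of the same base weights
model.load_adapter("loras/coder-lora", adapter_name="coder")

# Switch between the two without reloading the base model
model.set_adapter("assistant")
# ... generate with the first persona ...
model.set_adapter("coder")
# ... generate with the second ...
```

So the base weights sit in memory once and only the small adapter weights get swapped, if that's what you were asking.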
DanIngenius
That's a great idea and approach. How would that work?
I really like the idea. I think multiple 13B models managed by a highly configurable routing model that is completely uncensored would be awesome, and it's something I want to do. I want to crowdfund a host for this; DM me if you are interested!
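Roughly what I have in mind for the routing part, as a rough sketch only (the endpoints, ports, and keyword rules are made-up placeholders; a real setup would use a small classifier model instead of keywords):

```python
import requests

# Hypothetical OpenAI-compatible endpoints for two locally hosted 13B models
BACKENDS = {
    "code": "http://localhost:5001/v1/completions",
    "chat": "http://localhost:5002/v1/completions",
}

def route(prompt: str) -> str:
    """Pick a backend for the prompt; a real router would be a model, not keywords."""
    code_hints = ("python", "function", "bug", "compile")
    return "code" if any(k in prompt.lower() for k in code_hints) else "chat"

def generate(prompt: str) -> str:
    """Send the prompt to whichever 13B backend the router chose."""
    resp = requests.post(BACKENDS[route(prompt)],
                         json={"prompt": prompt, "max_tokens": 256})
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```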
This is something I'm interested in working on. I want to crowdfund a good LLM + SD + TTS voice host; DM me if you are interested in taking part!
Amazing work! Thanks!
I like the idea. I think it's similar to something I'm already discussing with some other people; DM me if you want and I'll introduce you.