DanIngenius

joined 10 months ago
[–] DanIngenius@alien.top 1 points 9 months ago

I like the idea; I think it's similar to something I'm already discussing with some other people. DM me if you want and I'll introduce you.

[–] DanIngenius@alien.top 1 points 9 months ago (1 children)

Thanks for your detailed reply. I don't think crowdsourcing GPUs is feasible or desirable, but the idea of only using different LoRAs is interesting. Can the LoRAs be loaded separately from the models? Could you load the model once and use two separate LoRAs?
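In principle yes: a LoRA adapter stores only small low-rank delta matrices, so a host can keep one base model resident in memory and apply a different adapter per request. A minimal NumPy sketch of the underlying idea (the shapes, adapter names, and single weight matrix here are illustrative stand-ins for a full model, not any particular serving library):

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared base weight matrix, loaded once (stand-in for a full model).
d = 8
W_base = rng.standard_normal((d, d))

# Two independent LoRA adapters: each is just a low-rank pair (A, B),
# tiny compared to the base weights, so many can be kept around cheaply.
rank = 2
adapters = {
    "chat": (rng.standard_normal((d, rank)), rng.standard_normal((rank, d))),
    "code": (rng.standard_normal((d, rank)), rng.standard_normal((rank, d))),
}

def forward(x, adapter_name=None, alpha=1.0):
    """Apply the base layer, optionally adding a LoRA delta: W + alpha * A @ B."""
    y = x @ W_base.T
    if adapter_name is not None:
        A, B = adapters[adapter_name]
        y = y + alpha * (x @ (A @ B).T)
    return y

x = rng.standard_normal(d)
y_chat = forward(x, "chat")
y_code = forward(x, "code")
# Same base weights in memory, different behavior per adapter.
assert not np.allclose(y_chat, y_code)
```

Real serving stacks expose the same pattern at a higher level, e.g. loading several named adapters against one base model and selecting one per request (Hugging Face PEFT's multi-adapter support, or multi-LoRA serving in vLLM).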

[–] DanIngenius@alien.top 1 points 9 months ago (3 children)

That's a great idea and approach. How would that work?

[–] DanIngenius@alien.top 1 points 10 months ago

I really like the idea; I think multiple 13B models would be awesome! Having them managed by a highly configurable routing model that is completely uncensored is something I want to do. I want to crowdfund a host for this; DM me if you are interested!
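The routing idea can be sketched very simply: a front-end classifier looks at each prompt and dispatches it to one of several specialist models. A toy Python sketch, where the keyword rules and model names are hypothetical stand-ins for a real router model and real 13B backends:

```python
# Toy routing layer in front of several specialist models.
# The lambdas stand in for calls to separately hosted 13B models,
# and the keyword table stands in for a learned routing classifier.

SPECIALISTS = {
    "code":    lambda p: f"[code-13b] {p}",
    "story":   lambda p: f"[story-13b] {p}",
    "general": lambda p: f"[general-13b] {p}",
}

ROUTES = {
    "code":  ("python", "function", "bug", "compile"),
    "story": ("story", "poem", "character"),
}

def route(prompt: str) -> str:
    """Pick a specialist by inspecting the prompt; fall back to 'general'."""
    lowered = prompt.lower()
    for name, keywords in ROUTES.items():
        if any(k in lowered for k in keywords):
            return name
    return "general"

def answer(prompt: str) -> str:
    return SPECIALISTS[route(prompt)](prompt)

print(answer("Write a python function to sort a list"))  # dispatched to the code specialist
```

In a real deployment the keyword table would be replaced by a small classification model, and each specialist would be a separate inference endpoint.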

[–] DanIngenius@alien.top 1 points 10 months ago (1 children)

This is something I'm interested in working on. I want to crowdfund a good LLM + SD + TTS voice host. DM me if you are interested in taking part!


I saw an idea about getting a big LLM (30-44 GB) running fast on a cloud server.

What if this server could scale in compute capacity, with the rental cost shared among a group of pooled users?

Some sort of DAO to get it started? Personally, I would love to link advanced LLMs up to SD generation, etc. And OpenAI is too sensitive for my liking. What do you think?
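To make the shared-rental idea concrete, a quick back-of-the-envelope split. All numbers below are hypothetical placeholders, not real cloud quotes:

```python
# Back-of-the-envelope cost split for a shared GPU server.
# All figures are illustrative assumptions, not actual provider pricing.
hourly_rate = 2.0          # $/hour for a single large-VRAM GPU instance (assumed)
hours_per_month = 24 * 30  # running the host continuously
members = 30               # users sharing the rental (assumed group size)

monthly_cost = hourly_rate * hours_per_month
per_member = monthly_cost / members

print(f"total: ${monthly_cost:.2f}/mo, per member: ${per_member:.2f}/mo")
```

Under these assumptions, a $1440/month always-on instance works out to $48 per member per month; the same arithmetic scales directly if the group rents more capacity or attracts more members.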

[–] DanIngenius@alien.top 1 points 10 months ago

Amazing work! Thanks!