this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 1 year ago

Has anyone tried combining a server with a moderately powerful GPU and a server with a lot of RAM to run inference? Especially with llama.cpp, where you can offload just some of the layers to the GPU?
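For context on the "offload just some of the layers" part: llama.cpp takes a layer count (the `-ngl`/`--n-gpu-layers` flag) and keeps the rest in system RAM. A minimal back-of-the-envelope sketch for picking that number, with purely illustrative sizes (the per-layer and overhead figures below are assumptions, not measurements):

```python
# Rough estimate of how many transformer layers fit in GPU VRAM when
# splitting a llama.cpp model between GPU and system RAM.
# All byte counts here are illustrative assumptions, not measured values.

def layers_on_gpu(vram_bytes: int, layer_bytes: int,
                  overhead_bytes: int, total_layers: int) -> int:
    """Return how many layers to offload (the value for llama.cpp's -ngl flag)."""
    usable = vram_bytes - overhead_bytes  # leave headroom for KV cache, buffers, etc.
    if usable <= 0:
        return 0
    return min(total_layers, usable // layer_bytes)

# Example: assume a ~4-bit quantized 7B model is roughly 4 GB over 32 layers,
# i.e. ~125 MiB per layer, on a 4 GiB GPU with ~1.5 GiB reserved as overhead.
GiB = 1024 ** 3
n = layers_on_gpu(vram_bytes=4 * GiB,
                  layer_bytes=125 * 1024 ** 2,
                  overhead_bytes=int(1.5 * GiB),
                  total_layers=32)
print(n)  # pass this as: ./main -m model.gguf -ngl <n>
```

The remaining layers run on the CPU from system RAM, which is exactly the split the question describes.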


https://github.com/Juice-Labs/Juice-Labs/wiki

1 comment
[–] Brave-Decision-1944@alien.top 1 point 11 months ago

I've seen something like that in the LoLLMs UI; it's called Petals, and it basically spreads the processing across the computers connected to that network. There were also other remote "bindings" from the same maker as the UI, but I haven't tried those.