this post was submitted on 18 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Was wondering if there's any way to use a bunch of old equipment like this to build an at-home crunch center for running your own LLM, and whether it would be worth it.

[–] croholdr@alien.top 1 points 11 months ago (1 children)

I tried it. Got something like 1.2 tokens/s inference on Llama 70B with a mix of cards (four of them 1080s). The process would crash occasionally. Ideally every card would have the same VRAM.

Going to try it with 1660 Tis next. I think they may be the 'sweet spot' for price, power draw, and performance.
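
A minimal sketch of this kind of mixed-GPU setup with llama-cpp-python (the model filename, split weights, and context size are assumptions, not the commenter's actual settings). tensor_split takes per-device weights, so a card with less VRAM gets proportionally less of the model:

```python
# Sketch: multi-GPU GGUF inference with llama-cpp-python.
# Assumes a local Q3_K_M quant of a 70B model and four 8 GB cards.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q3_K_M.gguf",  # hypothetical local quant file
    n_gpu_layers=-1,             # offload every layer to the GPUs
    tensor_split=[8, 8, 8, 8],   # per-card weights; scale to each card's VRAM
    n_ctx=2048,                  # small context keeps the KV cache from OOMing
)

out = llm("Explain tensor parallelism in one sentence.", max_tokens=48)
print(out["choices"][0]["text"])
```

With four identical 8 GB cards an even split works; with mismatched cards, weight the list by free VRAM, e.g. [11, 8, 8, 6].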

[–] FullOf_Bad_Ideas@alien.top 1 points 11 months ago

Did you use some Q3 GGUF quant with this?
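
For context, quick back-of-the-envelope math on why a ~3-bit quant is the natural guess here (ballpark figures, not measurements):

```python
# Rough VRAM math for a 70B model at ~3.5 bits/weight (Q3_K_M is ~3.4-3.5 bpw).
params = 70e9
bits_per_weight = 3.5
weights_gb = params * bits_per_weight / 8 / 1e9   # ~30.6 GB of weights
total_vram_gb = 4 * 8                             # e.g. four 8 GB GTX 1080s
print(f"weights ~{weights_gb:.1f} GB of {total_vram_gb} GB total VRAM")
# Only ~1.4 GB left over for KV cache and buffers, which would explain
# the tight fit and occasional crashes mentioned above.
```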