this post was submitted on 24 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
you are viewing a single comment's thread
If I had the money, I'd go with the CPU.
Also, I'm not sure a 4090 could run 33B models at full precision. Wouldn't that require something like 70 GB of VRAM?
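Rough back-of-envelope sketch for the weights alone (assuming 2 bytes per parameter for fp16/bf16 and 4 bytes for fp32, ignoring activations and KV cache):

```python
# Estimate VRAM needed just to hold a 33B model's weights
params = 33e9             # 33 billion parameters
bytes_fp16 = 2            # half precision (fp16/bf16)
bytes_fp32 = 4            # full precision (fp32)

print(f"fp16 weights: {params * bytes_fp16 / 1e9:.0f} GB")  # ~66 GB
print(f"fp32 weights: {params * bytes_fp32 / 1e9:.0f} GB")  # ~132 GB
```

Either way it's well past the 24 GB on a single 4090, so you'd need quantization or offloading.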