What are you using to run them?
In any case, larger-context models require *a lot* more RAM/VRAM: the attention KV cache grows linearly with context length, so doubling the context roughly doubles the memory it eats on top of the weights.
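For a rough sense of scale, here's a back-of-the-envelope sketch (a hypothetical helper, not from any loader's API; the defaults assume a Llama-2-7B-like layout with 32 layers, 32 KV heads, head dim 128, and an fp16 cache):

```python
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=32,
                   head_dim=128, bytes_per_elem=2, batch=1):
    """Rough KV cache size: keys + values stored for every layer
    and every token position. Defaults assume a Llama-2-7B-like
    model with an fp16 cache (assumption, not a measured figure)."""
    return (2 * n_layers * n_kv_heads * head_dim
            * context_len * bytes_per_elem * batch)

for ctx in (2048, 4096, 8192, 16384):
    print(f"{ctx:>6} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB of KV cache")
```

Under those assumptions, 4k context costs about 2 GiB of cache and 16k about 8 GiB, before you count the model weights at all.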
I'm using ooba; I haven't bothered much with KoboldCPP because I'm not really running GGUF models.