this post was submitted on 18 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
It isn't practical for most people to train their own models; that requires industrial-scale hardware. The "b" stands for billions of parameters, which indicates the size and potential intelligence of a model. Right now, the Yi-34b models are the best at that size.
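To get a feel for why parameter count matters in practice, here is a rough back-of-the-envelope sketch of the memory needed just to hold a model's weights (the helper name and byte-per-parameter figures are illustrative assumptions; real usage also includes context/KV-cache overhead):

```python
# Rough memory needed to hold a model's weights in memory,
# ignoring context/KV-cache overhead. The "b" in 7b/34b means
# billions of parameters.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Estimate weight memory in GiB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# fp16 (2 bytes/param) vs. a typical 4-bit quantization (~0.5 bytes/param)
for size in (7, 34):
    print(f"{size}b: fp16 ~{weight_memory_gb(size, 2):.1f} GiB, "
          f"4-bit ~{weight_memory_gb(size, 0.5):.1f} GiB")
```

This is why a quantized 7b model fits comfortably on consumer hardware while larger models quickly outgrow a single GPU.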
I recommend a Mistral 7b as your introduction to LLMs. They are small but fairly smart for their size. Get your model from Hugging Face; something like Mistral Dolphin should do fine.
I recommend KoboldCPP for running a model, as it is very simple to use. It runs GGUF-format models, which lets you split the work across your GPU, RAM, and CPU. Other formats are GPU-only, offering greater speed but less flexibility.
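A minimal sketch of the workflow above, assuming you have `huggingface-cli` and a KoboldCPP checkout (the repo name, filename, and layer count here are example placeholders; check `python koboldcpp.py --help` for the flags your version supports):

```shell
# Download one GGUF quantization of a Mistral 7b fine-tune from Hugging Face
# (example repo/file names -- substitute the model you actually want):
huggingface-cli download TheBloke/dolphin-2.2.1-mistral-7B-GGUF \
    dolphin-2.2.1-mistral-7b.Q4_K_M.gguf --local-dir .

# Launch KoboldCPP, offloading some layers to the GPU and
# keeping the rest in RAM on the CPU:
python koboldcpp.py dolphin-2.2.1-mistral-7b.Q4_K_M.gguf --gpulayers 20
```

Raising `--gpulayers` shifts more of the model onto the GPU for speed; lowering it keeps more in system RAM, which is the flexibility GGUF offers over GPU-only formats.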