this post was submitted on 09 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Many people say that local models are really close to, or even exceed, ChatGPT. I personally don't see it, although I've tried many different models. But you can still run something "comparable" to ChatGPT; it will just be much, much weaker.
It works the other way around: you run whatever model your hardware is able to run. For example, if you have 16 GB of RAM, you can run a 13B model. You don't even need a GPU to run it; it just runs slower on a CPU.
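The 16 GB / 13B pairing above comes from simple arithmetic: a quantized model needs roughly (parameters × bits per weight ÷ 8) bytes for its weights, plus some headroom for the context. Here is a back-of-envelope sketch of that rule of thumb (the ~4.5 bits/weight and 2 GB overhead figures are my own assumptions for a mid-range quantization, not numbers from any tool):

```python
def estimate_ram_gb(params_billions: float,
                    bits_per_weight: float = 4.5,
                    overhead_gb: float = 2.0) -> float:
    """Very rough RAM estimate for running a quantized model on CPU.

    Assumed rule of thumb: weights take params * bits/8 bytes,
    so 1 billion params at 8 bits is about 1 GB; add a fixed
    allowance for the context / KV cache.
    """
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 13B model at ~4.5 bits/weight fits comfortably in 16 GB:
print(round(estimate_ram_gb(13), 1))  # ≈ 9.3 GB
# A 7B model is fine even on an 8 GB machine:
print(round(estimate_ram_gb(7), 1))   # ≈ 5.9 GB
```

By the same arithmetic, a heavier quantization (more bits per weight) or a bigger model pushes you past 16 GB quickly, which is why people match model size to RAM first.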
So to run a model locally, you first need to install software that can run it, like this one:
https://github.com/oobabooga/text-generation-webui
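In shell terms, getting it set up is roughly the following (a sketch from memory; the start scripts are the repo's one-click installers, so check its README for the current instructions for your OS):

```shell
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
# The repo ships one-click start scripts that set up their own
# Python environment on first run; pick the one for your OS:
./start_linux.sh   # or start_windows.bat / start_macos.sh
```

After the first run finishes, the web UI is served locally in your browser.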
And then you need a model; you can start with this one:
https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/tree/main — download one of the variants and put it in the models folder of the text-generation-webui you installed in the previous step.
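As a concrete example, that download step could look like this (the filename below is one of the variants listed on that page; which one you pick depends on your RAM, and Q4_K_M is a common middle-ground quantization):

```shell
# Run from inside the text-generation-webui directory.
cd models
# Pick whichever .gguf variant fits your hardware from the repo's
# file list; smaller quantizations use less RAM but lose quality.
wget https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf
```

Once the file is in the models folder, you can select and load it from the web UI's model tab.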