this post was submitted on 29 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
For now, the biggest models are the best, so there is no single best model for CPU; there is only the best model whose answers you are willing to wait for.
Goliath-120B, for example, is great. I run it on an i5-12400 at 0.4 tokens/second, and I don't want anything less now.
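To put that speed in perspective, here is a quick back-of-the-envelope calculation of the wait time. The 0.4 tokens/second figure comes from the comment above; the 500-token answer length is an assumed example, not anything measured.

```python
def wait_minutes(answer_tokens: int, tokens_per_second: float) -> float:
    """Minutes needed to generate an answer of the given length."""
    return answer_tokens / tokens_per_second / 60

# Assumed example: a 500-token reply at the 0.4 tok/s quoted above.
print(f"{wait_minutes(500, 0.4):.1f} minutes")  # -> 20.8 minutes
```

So a typical reply takes on the order of twenty minutes on that CPU, which is the trade-off being described: a much bigger model in exchange for a long wait per answer.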
All right - the wait puts a premium on the quality of the question, then. Will have a go - thanks!