this post was submitted on 14 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
It's just a low-parameter problem. If you've got the RAM for it, I highly suggest dolphin-2_2-yi-34b. Especially now that koboldcpp has context shifting, you don't have to wait for all that prompt reprocessing on every turn. Also make sure you're using an instruct mode such as Roleplay (which uses the Alpaca format), or whatever official prompt format the model expects — there's a rough sketch of the Alpaca layout below.
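For anyone unsure what "Alpaca format" means here, this is a minimal hand-rolled sketch of that prompt layout. The preamble and `### Instruction:` / `### Response:` headers follow the common Alpaca convention; the actual Roleplay preset wraps them in extra roleplay-specific system text, and the example instruction is made up, so treat this as illustrative rather than the exact template.

```python
# Minimal sketch of an Alpaca-style instruct prompt, assembled by hand.
# Frontends like SillyTavern's Roleplay preset build something similar
# for you; this just shows the underlying layout.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Hypothetical instruction, purely for illustration.
prompt = ALPACA_TEMPLATE.format(
    instruction="Continue the scene: the party steps into the tavern."
)

# Send `prompt` to your backend (e.g. koboldcpp's API); the model
# generates the text that follows the "### Response:" header.
print(prompt)
```

The point of matching the model's expected format is that instruct-tuned models were trained on these exact headers; if you send raw chat text instead, output quality usually drops.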