I use a 12-core Ryzen and can run the 70B at 8-bit with llama.cpp fine. Do not bother with hyper-threads, though.
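A minimal sketch of what "skip the hyper-threads" looks like in practice: count physical cores (not SMT siblings) and pass that to llama.cpp's `-t` flag. The model path and quant name below are illustrative, not from the thread.

```shell
# Count physical cores, ignoring SMT/hyper-thread siblings.
# CORE IDs can repeat across sockets, so pair them with SOCKET before deduplicating.
PHYS_CORES=$(lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l)

# Print the llama.cpp invocation pinned to physical cores only
# (model file is a placeholder; swap in your own GGUF).
echo "./llama-cli -m ./models/llama-70b.Q8_0.gguf -t ${PHYS_CORES} -p 'Hello'"
```

On a 12-core / 24-thread Ryzen this passes `-t 12`, which usually beats `-t 24` for CPU inference since the extra SMT threads mostly contend for the same cache and memory bandwidth.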
jeffwadsworth
Funny. Airoboros 70b runs perfectly fine for me with llama.cpp. Curious how you initialized it.
The 13Bs don’t surpass the 70B Airoboros model. Not even close.
Hard to believe but can’t wait to try.
Well, it depends on how well it retains context. Did you see that comparison sheet on Claude and GPT-4? Astounding.