jeffwadsworth

joined 10 months ago
[–] jeffwadsworth@alien.top 1 points 9 months ago (1 children)

Well, it depends on how well it maintains resolution across the full context. Did you see that comparison sheet on Claude and GPT-4? Astounding.

[–] jeffwadsworth@alien.top 1 points 9 months ago

I use a 12-core Ryzen and can run the 70B at 8-bit with llama.cpp just fine. Don't bother with hyper-threads, though; set the thread count to the number of physical cores.
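For anyone curious what that looks like in practice, here is a minimal sketch using the llama-cpp-python bindings. The model path, context size, and prompt are placeholders for illustration, not from the thread; the relevant part is pinning n_threads to the physical core count.

```python
# Minimal sketch with the llama-cpp-python bindings.
# Model filename, context size, and prompt are assumptions, not from the thread.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/airoboros-70b.Q8_0.gguf",  # hypothetical 8-bit GGUF path
    n_ctx=4096,      # context window; adjust to the model's limit
    n_threads=12,    # physical cores only -- skip hyper-threads, per the comment above
)

out = llm("Explain what context length means for an LLM.", max_tokens=128)
print(out["choices"][0]["text"])
```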

[–] jeffwadsworth@alien.top 1 points 9 months ago (1 children)

Funny. Airoboros 70b runs perfectly fine for me with llama.cpp. Curious how you initialized it.

[–] jeffwadsworth@alien.top 1 points 9 months ago

The 13Bs don't surpass the 70B Airoboros model. Not even close.

[–] jeffwadsworth@alien.top 1 points 9 months ago

Hard to believe, but I can't wait to try it.