this post was submitted on 13 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

top 2 comments
[–] hwpoison@alien.top 1 points 10 months ago

In my experience, Mistral 7B is very unrestricted.

[–] Pashax22@alien.top 1 points 10 months ago

Best? Goliath-120b. It's good, better than the 70b models I've used. It's currently available on KoboldHorde if you want to try it, or there are GGUF quantizations if you have the compute to run it locally. If that's just a little too rich for your tastes, then Xwin-70b is probably the go-to at high parameter counts.

Best with any sort of reasonable hardware requirements? Mlewd-20b and Xwin-Mlewd-13b are both good, and some of the Mistral-7b merges are punching way above their weight. Check out Dolphin-2.2.1-Mistral-7b and be amazed at how far 7b models have come in the last 3 months.