this post was submitted on 30 Oct 2023
LocalLLaMA

Community to discuss about Llama, the family of large language models created by Meta AI.

Hey!

So, as the title suggests, I downloaded TheBloke's Wizard-Vicuna 7B, and I was hoping to use it for some sort of RP or something, yet the model is just too bad!

Like, seriously, it barely stays in character for more than one line, and even that one line is full of random weird stuff. Like, what the actual hell!

To be fair, I'm a complete beginner at using LLMs, so I might be doing something wrong. So please, if you've got any advice, tip, suggestion, or really any idea, please do share it; it would be much appreciated!

Thanks in advance!

[–] RiotNrrd2001@alien.top 1 points 1 year ago

If you're just getting started, then go get KoboldCpp. It will run quantized models without any installation.

If you want the best fast models, you want Mistral 7B models. There are a bunch, but my favorite is Dolphin 2.1 Mistral 7B. It screams on a potato, and its output is second to none.

Start up KoboldCpp, point it at the Dolphin file, and you should be good to go. I mean, there's a tiny bit more to it than that (picking your GPU and context settings and so on), but it's pretty easy.
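For reference, a launch from the command line looks something like this. The model filename below is just an example of a typical quantized (GGUF) download; substitute whatever file you actually grabbed, and adjust the numbers for your hardware:

```shell
# Sketch of a KoboldCpp launch, assuming you have the koboldcpp.py script
# (or you can run the standalone executable the same way).
# The model filename is an example; use the GGUF quant you downloaded.
python koboldcpp.py --model dolphin-2.1-mistral-7b.Q4_K_M.gguf \
  --contextsize 4096 \
  --gpulayers 32   # layers offloaded to the GPU; lower this if you run out of VRAM
```

If you'd rather not touch the command line, launching KoboldCpp with no arguments brings up a settings window where you can pick the model file and the same GPU/context options before starting.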