this post was submitted on 24 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

I'm struggling to get 7B models to do anything useful. I'm obviously doing something wrong, since many people seem to get good results from 7B models.

But I can't get them to follow instructions: they keep repeating themselves, and occasionally they start conversing with themselves.

Does anyone have any pointers on what I'm doing wrong?

[–] Naiw80@alien.top 1 points 11 months ago

Update on this topic...

I realised I've made some mistakes. The reason I asked about 7B models in the first place is that the computer I'm using is resource constrained (and normally I use a frontend for the actual interaction).

But because the computer only has 8 GB of RAM, I decided to run llama.cpp directly, and this is where things went wrong.

First of all, I messed up the prompt: it did not follow the expected format for the model I was using (not that I notice any significant difference now that I've fixed it).
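
For anyone hitting the same problem: many instruction-tuned 7B models expect their input wrapped in a specific prompt template, and the right one depends on the model (check its model card). As a hypothetical illustration, an Alpaca-style template — one common convention, not necessarily the one your model uses — can be built like this:

```python
# Sketch: wrapping a user instruction in an Alpaca-style template.
# NOTE: this exact template is an assumption for illustration only;
# the real format varies per model, so always check the model card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw user instruction in the Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction.strip())

print(build_prompt("Summarise the plot of Hamlet in two sentences."))
```

If the prompt doesn't match the format the model was fine-tuned on, repetition and the model "talking to itself" are common symptoms.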


But the key issue turned out to be that I had been using the -i (interactive) argument, thinking it would behave like a chat session. It does for a few queries, but as I stated in the original post, the model then suddenly starts conversing with itself (filling in my side of the conversation).
It turns out I should have been using --instruct all along; once I realised that, things started working a lot better (although still not perfectly).
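
For reference, the difference in invocation looks roughly like this (a sketch — the model filename here is hypothetical, and flag names have changed between llama.cpp versions, so check your build's --help):

```shell
# What I was doing: raw interactive mode with no instruction template,
# so the model eventually completes "my" side of the dialogue too:
./main -m models/7b-model.gguf -i

# What works better: instruct mode wraps each input in an
# instruction template before handing it to the model:
./main -m models/7b-model.gguf --instruct
```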

Finally, I decided to give neural-chat a try, and dang, it handles most things I ask of it with great success.

Thanks all for your feedback and comments.