this post was submitted on 31 Oct 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
founded 1 year ago
I think without fine-tuning it on returning function calls you won't get any good results. Maybe validating the input and result again in a loop could do the job, but otherwise we need to wait for fine-tuned models.
Yes, good call! I'm gonna try that now.
I'm gonna add a system message to the chat history part of the prompt saying that the command is invalid and see if it corrects itself in the next iteration of the loop.
This could put a bandage 🩹 on the issue for now, allowing it to seamlessly loop until a task is complete, at least until I can find a better prompt or model.
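The retry idea above could be sketched roughly like this. Everything here is hypothetical (the `generate` callback, the command set, and the message format are stand-ins, not any real API): validate the model's reply, and if it's a bad function call, append a system message describing the error and loop again.

```python
import json

# Hypothetical set of commands the agent knows about.
VALID_COMMANDS = {"search", "open_file"}

def validate_call(raw):
    """Return (ok, error_message). A call is valid JSON with a known command."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return False, "the reply was not valid JSON"
    if not isinstance(call, dict) or call.get("command") not in VALID_COMMANDS:
        return False, "the command name was not recognized"
    return True, ""

def run_with_retries(generate, messages, max_turns=3):
    """Ask the model, validate its reply; on failure, inject a system
    message explaining the problem and let it try again next iteration."""
    for _ in range(max_turns):
        reply = generate(messages)
        ok, err = validate_call(reply)
        if ok:
            return json.loads(reply)
        messages.append({
            "role": "system",
            "content": f"The previous command was invalid: {err}. Please retry.",
        })
    return None  # gave up after max_turns
```

For example, with a fake `generate` that fails once and then returns a valid call, the loop recovers on the second iteration after the invalid-command system message is added to the history.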
So far, dolphin-mistral 2.1 7B is what I'm using at the moment.