this post was submitted on 31 Oct 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


So I'm looking for references on how to do function calling using Dolphin or Mistral models.

With my current prompt, I'm able to get it to choose an appropriate command for the task sometimes, but often it'll add multiple commands in one response. The other half of the time it produces correct commands & parameters in JSON format as requested. Sometimes it makes up commands it wants to use that don't exist in the command list.

I'm just looking for hints at a more concrete prompt that will make these models effective in function calling.

Should I try whatever format OpenAI uses, seeing as how these smaller models are usually trained on synthetic data produced by OpenAI models?
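For what it's worth, here's a minimal sketch of one way to mimic the OpenAI-style convention in a plain system prompt: list the function schemas as JSON and spell out hard constraints (exactly one JSON object, only listed functions). The command names (`search_web`, `read_file`) and their parameters are made up for illustration; swap in your own command list.

```python
import json

# Hypothetical command list -- names and parameters are illustrative only.
COMMANDS = [
    {
        "name": "search_web",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "name": "read_file",
        "description": "Read a local file.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
]

def build_system_prompt(commands):
    """Build a system prompt in the OpenAI function-calling style:
    schemas as a JSON block, plus explicit constraints on the reply."""
    schema_block = json.dumps(commands, indent=2)
    return (
        "You have access to the following functions:\n"
        f"{schema_block}\n\n"
        "Respond with exactly ONE JSON object of the form\n"
        '{"name": "<function_name>", "arguments": { ... }}\n'
        "Use only the functions listed above. Never invent a function.\n"
        "Do not add any text before or after the JSON object."
    )
```

No idea if this is the best phrasing for dolphin-mistral specifically, but being explicit about "exactly one" helped me with the multiple-commands-per-response problem on other small models.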

Any guidance is appreciated 👍

[–] 1EvilSexyGenius@alien.top 1 points 1 year ago

Yes, good call 👍 I'm gonna try that now.

I'm gonna add a system message to the chat history part of the prompt saying that the command is invalid and see if it corrects itself in the next iteration of the loop.

This could put a bandage 🩹 on the issue for now, allowing it to seamlessly loop until a task is complete, at least until I can find a better prompt or model.

So far, dolphin-mistral 2.1 7b is what I'm using ATM.