this post was submitted on 21 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

My understanding of LLM function calling is roughly as follows:

  1. You “list” all the functions the model can call in the prompt
  2. ???
  3. The model knows when to return the “function names” (in JSON or otherwise) during the conversation

Does anyone have any advice or examples on what prompt I should use?
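For what it's worth, the three steps above can be sketched without any framework: list the tools in the prompt, then check whether the reply is a JSON function call. Everything here (tool names, the JSON shape) is illustrative, not a standard:

```python
import json

# Hypothetical tools; names and schemas are made up for illustration.
TOOLS = [
    {"name": "get_weather", "description": "Get current weather for a city.",
     "parameters": {"city": "string"}},
    {"name": "search_web", "description": "Search the web for a query.",
     "parameters": {"query": "string"}},
]

def build_prompt(user_message: str) -> str:
    """Step 1: list the callable functions in the prompt and tell the
    model how to signal a call (here: reply with bare JSON)."""
    tool_lines = "\n".join(
        f"- {t['name']}({', '.join(t['parameters'])}): {t['description']}"
        for t in TOOLS
    )
    return (
        "You can call these functions:\n"
        f"{tool_lines}\n\n"
        "If a function is needed, reply ONLY with JSON like:\n"
        '{"function": "<name>", "arguments": {...}}\n'
        "Otherwise answer normally.\n\n"
        f"User: {user_message}"
    )

def parse_reply(reply: str):
    """Step 3: detect whether the model chose to call a function."""
    try:
        data = json.loads(reply)
        if isinstance(data, dict) and "function" in data:
            return data  # a function call to dispatch
    except json.JSONDecodeError:
        pass
    return None  # plain-text answer
```

Step 2 ("???") is just the model's instruction following: with a clear list and output format, a reasonably capable model will emit the JSON when a tool fits and prose otherwise.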

Hobofan94@alien.top 10 months ago

I've been having quite a good time with Airoboros, combining the example prompts they provide in the README for both function calling and ReWOO-style execution planning. I mostly use the provided prompts as-is, but have also been toying with having it output the plan as JSON and adding slightly richer information about each execution-plan step.

My rough approach is to first do a call with a ReWOO prompt where I list all the functions as tools (plus their descriptions; no params at the moment). Based on that I parse the plan, then do additional calls where I only provide the model with the shape of the selected function and the raw input it receives from prior plan steps (this is basically just an adapter that creates a function call in the correct shape).
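A minimal sketch of the plan-parsing half of that approach, assuming ReWOO's usual `#E1 = tool[input]` step syntax and a hypothetical registry of local callables in place of the per-step LLM adapter calls:

```python
import re

# Hypothetical tool registry; in the real flow each entry would be an
# LLM call that shapes the raw input into a proper function call.
REGISTRY = {
    "list_cities": lambda arg: ["Oslo", "Bergen"],
    "summarize": lambda arg: f"summary of: {arg}",
}

# Matches ReWOO-style steps like:  #E1 = list_cities[Norway]
STEP_RE = re.compile(r"#E(\d+)\s*=\s*(\w+)\[(.*)\]")

def run_plan(plan_text: str) -> dict:
    """Execute plan steps in order, substituting earlier results
    (#E1, #E2, ...) into the inputs of later steps."""
    results = {}
    for match in STEP_RE.finditer(plan_text):
        idx, tool, raw_input = match.groups()
        # Resolve references to prior steps before dispatching.
        resolved = re.sub(
            r"#E(\d+)", lambda m: str(results[m.group(1)]), raw_input
        )
        results[idx] = REGISTRY[tool](resolved)
    return results
```

The substitution step is where the "adapter" would sit: instead of passing `resolved` straight to a Python function, you would hand it to the model along with only the selected function's shape.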

One problem I haven't been able to solve well so far is complex execution plans that involve executing single steps multiple times with different inputs (e.g. list N cities -> get information X for each city -> summarize all X). Would love to hear some input if anybody knows more about this. Apart from that, I'm currently toying around with iterative plan prompting (asking for a new execution plan after each step of tool execution, to allow for early exits or longer chains based on dynamically discovered information).
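Not from the comment, but one way to attack the fan-out problem is to let the plan mark a step as a "map" step that applies its tool to each element of a previous step's list result. A sketch with hypothetical tools and a dict-based plan format:

```python
# Hypothetical tools for the cities example from the comment.
REGISTRY = {
    "list_cities": lambda arg: ["Oslo", "Bergen", "Tromsø"],
    "get_population": lambda city: {
        "Oslo": 700_000, "Bergen": 290_000, "Tromsø": 77_000,
    }[city],
    "summarize": lambda xs: f"{len(xs)} cities, total {sum(xs)}",
}

def run_plan(steps):
    """steps: list of dicts with 'id', 'tool', 'input', optional 'mode'.
    'map' steps fan out over a prior step's list result; plain steps
    pass the prior result (or a literal input) through whole."""
    results = {}
    for step in steps:
        # Inputs naming an earlier step id are resolved; else used literally.
        src = results.get(step["input"], step["input"])
        if step.get("mode") == "map":
            results[step["id"]] = [REGISTRY[step["tool"]](x) for x in src]
        else:
            results[step["id"]] = REGISTRY[step["tool"]](src)
    return results
```

The planner then only has to decide *which* steps are maps; the executor handles the N-way expansion, so the plan itself stays a fixed-length chain.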