this post was submitted on 24 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
OpenHermes 2.5 is amazing from what I've seen. It can call functions, summarize text, and is extremely competitive, the whole works.
Have you noticed slower inference from OpenHermes 2.5 compared to other 7B models?
How does it do function calling? Some internal API?
It outputs the call.
https://twitter.com/abacaj/status/1727747892922769751
It returns a JSON object with the function name and its arguments, which you can then parse in your program and use to call the actual function with the arguments the model supplied.
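For anyone wondering what that looks like in practice, here's a minimal Python sketch of the parse-and-dispatch step. The keys ("name", "arguments") and the get_weather tool are assumptions for illustration, not necessarily the exact schema OpenHermes 2.5 emits:

```python
import json

# Hypothetical tool the model is allowed to call; purely for illustration.
def get_weather(city: str) -> str:
    return f"It is sunny in {city}"

# Registry mapping function names to callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse the JSON function call emitted by the model and invoke the matching function."""
    call = json.loads(model_output)   # e.g. {"name": "get_weather", "arguments": {"city": "Berlin"}}
    fn = TOOLS[call["name"]]          # look up the requested function
    return fn(**call["arguments"])    # call it with the model-supplied arguments

print(dispatch('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
```

In practice you'd wrap the json.loads call in a try/except, since smaller models occasionally produce malformed JSON.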
I'm seconding that. I'm actually amazed by how it performs, frequently getting similar or better answers than bigger models. I'm starting to think that we really do lose a lot with quantization of the bigger models...
Can you provide the prompt you use for function calling?