Llama.cpp has supported batched inference for about 4 weeks now: https://github.com/ggerganov/llama.cpp/issues/2813
-cb, --cont-batching enable continuous batching (a.k.a dynamic batching) (default: disabled)
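Continuous batching only pays off when there are multiple requests in flight. A minimal sketch of exercising it, assuming the server was started with -cb, is listening on the default localhost:8080, and exposes the /completion endpoint (check ./server --help for the exact flags on your build):

```python
# Fire several prompts at a llama.cpp server concurrently so that
# continuous batching (-cb) can interleave their decoding steps.
# Assumed server invocation (flags may differ by version):
#   ./server -m model.gguf -cb --parallel 4
import concurrent.futures
import requests

URL = "http://localhost:8080/completion"  # default llama.cpp server endpoint (assumed)

def complete(prompt: str) -> str:
    resp = requests.post(URL, json={"prompt": prompt, "n_predict": 64})
    resp.raise_for_status()
    return resp.json()["content"]

prompts = [
    "Explain continuous batching in one sentence.",
    "What is a KV cache?",
    "Why does batching improve GPU utilization?",
]

with concurrent.futures.ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    for prompt, answer in zip(prompts, pool.map(complete, prompts)):
        print(f"{prompt!r} -> {answer[:80]!r}")
```

With a single sequential client you won't see any difference; the gain shows up as higher aggregate tokens/s when the requests overlap.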
FYI, this was discussed here 11 days ago: https://www.reddit.com/r/LocalLLaMA/comments/17m2lql/best_framework_for_llm_based_applications_in/
Three thoughts:
TGI is no longer free software (in the sense that their new license is not OSI approved, nor would it be remotely eligible).
LightLLM is another option that is permissively licensed, and reportedly fast. I haven’t tried it yet.
Speculative inference can yield a significant performance bump, but the devil’s in the details. Some implementations seem to work a lot better than others.
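To make the "devil's in the details" point concrete, here's a toy sketch of the speculative decoding loop (a cheap draft model proposes k tokens, the expensive target model verifies them). The acceptance rule below is simplified greedy matching, not any particular library's implementation:

```python
# Toy sketch of speculative decoding with greedy acceptance.
# draft_next / target_next stand in for real models; in practice the target
# model scores all k draft tokens in a single forward pass, which is where
# the speedup (and most of the implementation subtlety) comes from.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],     # cheap model: next-token guess
    target_next: Callable[[List[int]], int],    # expensive model: ground truth
    k: int = 4,
    max_new_tokens: int = 32,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. Draft model proposes k tokens autoregressively.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model verifies the proposals; keep the longest accepted prefix.
        accepted = 0
        for i, t in enumerate(draft):
            if target_next(tokens + draft[:i]) == t:
                accepted += 1
            else:
                break
        tokens.extend(draft[:accepted])
        # 3. Whether or not the draft was accepted, the target model still
        #    contributes one token, so progress is guaranteed every round.
        tokens.append(target_next(tokens))
    return tokens
```

Implementations differ mainly in the acceptance rule (greedy matching vs. rejection sampling against the target distribution) and in how well the draft model tracks the target, which is a big part of why measured speedups vary so much between them.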
vLLM is performing well so far, better than expected. We're running it across distributed GPUs and working on scaling the GPU count up and down based on load; the open question is which metric should trigger the scaling.
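For the scaling trigger, request queue depth tends to be a more direct signal than raw GPU utilization. A rough sketch of a polling loop, assuming a Prometheus-style metrics endpoint; the URL and metric name below are placeholders, not vLLM's actual names, so substitute whatever your serving stack exposes:

```python
# Rough sketch of a scale-up/scale-down decision based on request queue depth.
# METRICS_URL and "requests_waiting" are placeholders for whatever your
# serving stack actually exposes (e.g. a Prometheus scrape target).
import time
import requests

METRICS_URL = "http://localhost:8000/metrics"   # placeholder endpoint
SCALE_UP_QUEUE = 8      # waiting requests per replica before adding a GPU
SCALE_DOWN_QUEUE = 1    # waiting requests per replica before removing one

def read_metric(text: str, name: str) -> float:
    # Pull a single value out of Prometheus-style "name value" lines.
    for line in text.splitlines():
        if line.startswith(name):
            return float(line.split()[-1])
    return 0.0

def decide(replicas: int) -> str:
    body = requests.get(METRICS_URL, timeout=5).text
    waiting = read_metric(body, "requests_waiting")   # placeholder metric name
    per_replica = waiting / max(replicas, 1)
    if per_replica > SCALE_UP_QUEUE:
        return "scale_up"
    if per_replica < SCALE_DOWN_QUEUE and replicas > 1:
        return "scale_down"
    return "hold"

if __name__ == "__main__":
    replicas = 2
    while True:
        print(decide(replicas))
        time.sleep(30)   # re-evaluate every 30 s; add hysteresis in practice
```

Whatever metric you pick, add hysteresis or a cooldown window so bursty traffic doesn't cause replicas to flap up and down.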