I'm currently exploring the deployment of Llama models in a production environment and I'm keen to hear from anyone who has ventured into this territory. My primary concern is managing multiple concurrent users while optimizing resources effectively.

While there are numerous methods to tweak Llama for testing with a single user, scaling up poses its own set of challenges, and I'm particularly interested in how others have approached this. I'm also curious about projects like vLLM and Hugging Face TGI for faster inference, as discussed below. Has anyone had experience with these, and how have they contributed to your scaling efforts?
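For context on why vLLM keeps coming up in this space: its continuous batching schedules many requests onto the GPU at once, which is the same machinery that serves concurrent users in server mode. A minimal sketch of its offline Python API (the model name and sampling settings are placeholders I picked for illustration):

```python
# Minimal vLLM sketch: batched generation over many prompts.
# Model name and sampling settings are illustrative, not recommendations.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize the benefits of paged attention.",
    "Explain continuous batching in one sentence.",
]
sampling = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM batches these requests internally and schedules them across the GPU.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```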

My goal is to implement an API utilizing Llama models for a small organization's private use. I'm eager to learn from your experiences and any advice or insights you can share on this topic.
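Concretely, the kind of setup I have in mind is vLLM's OpenAI-compatible server with a thin client in front of it. A sketch, where the host, port, and model name are all assumptions for illustration:

```python
# Client-side sketch against a vLLM OpenAI-compatible server, started e.g. with:
#   python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf
# The host/port and model name below are assumptions, not a fixed setup.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-2-7b-chat-hf",
        "prompt": "Write a one-line status update.",
        "max_tokens": 64,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```

Because the endpoint speaks the OpenAI wire format, existing OpenAI client libraries can usually be pointed at it by changing the base URL, which keeps the application side simple.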

georgejrjrjr@alien.top · 10 months ago

Three thoughts:

TGI is no longer free software (in the sense that its new license is not OSI approved, nor would it be remotely eligible).

LightLLM is another option that is permissively licensed, and reportedly fast. I haven’t tried it yet.

Speculative inference can yield a significant performance bump, but the devil’s in the details. Some implementations seem to work a lot better than others.
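For a concrete illustration of the idea: Hugging Face transformers ships assisted generation, where a small draft model proposes tokens and the large target model verifies them in a single forward pass. A rough sketch, assuming the two models share a tokenizer; the model choices here are just examples:

```python
# Sketch of speculative/assisted decoding via transformers' assistant_model.
# Model names are illustrative; draft and target must share a tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-2-13b-hf"  # large target model (example)
draft_name = "meta-llama/Llama-2-7b-hf"    # smaller draft model (example)

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(
    target_name, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "The key idea behind speculative decoding is", return_tensors="pt"
).to(target.device)

# The draft model proposes several tokens per step; the target model verifies
# them and keeps the longest accepted prefix.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The speedup depends heavily on how often the target accepts the draft's proposals, which is exactly where implementations diverge.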

vicks9880@alien.top · 10 months ago

vLLM is performing well so far, better than expected. We're running it on distributed GPUs and working on scaling GPU capacity based on load; the open question is which metric should trigger scaling up or down.
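One candidate signal: vLLM's OpenAI-compatible server exposes Prometheus gauges on its /metrics endpoint, and queue depth is a natural scale-up trigger. A sketch that polls it; the metric name and threshold are assumptions, so check the /metrics output of your vLLM version:

```python
# Sketch: poll a vLLM server's Prometheus /metrics endpoint and emit a
# scale-up/scale-down signal based on queue depth. The metric name and
# threshold are assumptions; verify against your vLLM version's /metrics.
import re
import requests

METRICS_URL = "http://localhost:8000/metrics"  # vLLM server (assumption)
SCALE_UP_THRESHOLD = 8  # pending requests before adding a replica (arbitrary)

def waiting_requests() -> float:
    body = requests.get(METRICS_URL, timeout=5).text
    # Prometheus lines look like: vllm:num_requests_waiting{...} 3.0
    match = re.search(
        r"^vllm:num_requests_waiting(?:\{[^}]*\})?\s+([0-9.eE+-]+)",
        body,
        re.MULTILINE,
    )
    return float(match.group(1)) if match else 0.0

if __name__ == "__main__":
    pending = waiting_requests()
    if pending > SCALE_UP_THRESHOLD:
        print(f"{pending} requests waiting -> scale up")
    elif pending == 0:
        print("queue empty -> candidate for scale down")
```

In practice you'd feed a signal like this into whatever autoscaler you use, with smoothing over a time window so a brief burst doesn't thrash replicas up and down.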