this post was submitted on 15 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I already tried to set up fastchat-t5 on a DigitalOcean virtual server with 32 GB RAM and 4 vCPUs for $160/month, using CPU inference. The performance was horrible: about 5 seconds until the first token, then roughly one word per second.

Any ideas how to host a small LLM like fastchat-t5 economically?

[–] vasileer@alien.top 1 points 11 months ago (1 children)

3 ideas

  1. quantization

fastchat-t5 is a 3B model in bfloat16, which means it needs at least 3B x 16 bits ~ 6 GB of RAM for the model weights alone, plus memory for the 2K-token context limit (prompt and answer combined),

a quick way to speed up is to use a quantized version:

an 8-bit quant, with almost no quality loss, like https://huggingface.co/limcheekin/fastchat-t5-3b-ct2,

you will get a 2x smaller file and 2x faster inference,

but better read #2 :)
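
For illustration, here is a minimal sketch of running that 8-bit CTranslate2 conversion on CPU; the local directory name and the bundled tokenizer are assumptions, so check the repo's README for the exact files:

```python
# Minimal sketch (assumptions noted): load the 8-bit CTranslate2 conversion of
# fastchat-t5 and run it on CPU. At int8 the weights take ~3B x 8 bits ~ 3 GB,
# half of the ~6 GB needed in bfloat16, which is where the 2x gain comes from.
import ctranslate2
import transformers

model_dir = "fastchat-t5-3b-ct2"  # assumed local path of the downloaded repo
translator = ctranslate2.Translator(model_dir, device="cpu", compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)

prompt = "How can I host a small LLM cheaply?"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = translator.translate_batch([input_tokens], max_decoding_length=256)
output_tokens = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens)))
```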

  2. a better model/finetune for better quality

a Mistral finetune like https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF, which is 7B quantized to 4 bits, will have ~ the same size as 8-bit fastchat-t5,

but superior performance, as Mistral was most probably trained on more tokens than Llama 2 (~2T tokens), while flan-t5 (the base model of fastchat-t5) was trained on only ~1T,

why a larger quantized model is better than a smaller unquantized one is explained here: https://github.com/ggerganov/llama.cpp/pull/1684
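
Here is a rough sketch of running such a 4-bit GGUF file on the same 4-vCPU machine with llama-cpp-python; the exact file name and the prompt template are guesses based on TheBloke's usual model-card conventions, so verify them in the repo:

```python
# Sketch: 4-bit GGUF inference on CPU with llama-cpp-python.
# Download the .gguf file from the repo first; the name below is assumed.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-chat-7b-v3-1.Q4_K_M.gguf",  # ~4 GB at 4-bit, comparable to 8-bit fastchat-t5
    n_ctx=2048,   # context window (prompt + answer)
    n_threads=4,  # match the droplet's 4 vCPUs
)

prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nHow can I host a small LLM cheaply?\n"
    "### Assistant:\n"
)
out = llm(prompt, max_tokens=256, stop=["### User:"])
print(out["choices"][0]["text"])
```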

  3. use Hugging Face for hosting: it is ~$20/month for a server comparable to the $160 one you mentioned, so it is 8x cheaper

https://preview.redd.it/54x2ff87gk0c1.png?width=839&format=png&auto=webp&s=dae1d27376c9c858935c285dd765246af79a86a4
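
Once a model is deployed there, querying it from your own code is only a few lines; a minimal sketch with huggingface_hub is below (the endpoint URL and token are placeholders, not real values):

```python
# Sketch: calling a deployed Hugging Face Inference Endpoint.
# Replace the URL and token with the values from your endpoint's dashboard.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://YOUR-ENDPOINT.endpoints.huggingface.cloud",  # placeholder endpoint URL
    token="hf_xxx",                                             # placeholder access token
)

print(client.text_generation("How can I host a small LLM cheaply?", max_new_tokens=256))
```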

[–] HeronAI_com@alien.top 1 points 11 months ago

Wow thanks, that's a really in-depth comment, I will try what you say!