this post was submitted on 15 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I already tried setting up fastchat-t5 on a DigitalOcean virtual server with 32 GB RAM and 4 vCPUs for $160/month, using CPU inference. The performance was horrible: about 5 seconds to the first token, then roughly one word per second.
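For context, here's a minimal sketch of the kind of setup I mean, using Hugging Face transformers with the lmsys/fastchat-t5-3b-v1.0 checkpoint (my assumptions; a real deployment might use FastChat's own serving stack instead):

```python
# Minimal sketch: loading fastchat-t5 for inference with Hugging Face
# transformers. Assumes the lmsys/fastchat-t5-3b-v1.0 checkpoint and that
# the transformers + accelerate packages are installed. On a CPU-only box
# this runs, but with latency like that described above.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    # fp16 halves memory on GPU; fall back to fp32 on CPU
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",  # places weights on a GPU when one is present
)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```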

Any ideas how to host a small LLM like fastchat-t5 economically?

[–] Amgadoz@alien.top 1 point 2 years ago

How about a T4 GPU, or something like a 3090 from RunPod? The 3090 costs around $0.50 per hour, which comes to roughly $360 per month if it runs 24/7, and it gives you 24 GB of VRAM, which should be plenty for fastchat-t5.
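Sanity-checking that math (the $0.50/hr figure is from above; actual RunPod pricing varies by GPU type, region, and spot vs. on-demand):

```python
# Back-of-the-envelope monthly cost for a rented GPU. utilization < 1.0
# models only running the instance part of the day, which is where
# hourly rental beats a flat monthly VPS.
HOURS_PER_MONTH = 24 * 30  # ~720

def monthly_cost(rate_per_hour: float, utilization: float = 1.0) -> float:
    """Monthly rental cost at a given hourly rate and duty cycle."""
    return rate_per_hour * HOURS_PER_MONTH * utilization

print(f"3090 @ $0.50/hr, 24/7:   ${monthly_cost(0.50):.0f}")        # 360
print(f"3090 @ $0.50/hr, 8h/day: ${monthly_cost(0.50, 8/24):.0f}")  # 120
```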