this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

I have a query that costs around 300 tokens, and since 1,000 tokens cost 0.06 USD, that translates to roughly 0.02 USD for that request.

Let's say I deployed a local Llama model on RunPod, on one of the cheaper machines; would that request be cheaper than running it on GPT-4?
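For reference, a minimal sketch of that arithmetic in Python. The $0.06/1K rate is the figure quoted above and may not match current GPT-4 pricing, which also splits input and output tokens:

```python
# Per-request cost at a flat per-token rate. The 0.06 USD / 1K figure is the
# one quoted in this post; real provider pricing may differ.
def request_cost(tokens: int, usd_per_1k: float) -> float:
    return tokens / 1000 * usd_per_1k

print(f"300-token request at $0.06/1K: ${request_cost(300, 0.06):.4f}")
# -> $0.0180, i.e. roughly the 0.02 USD estimated above
```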

top 4 comments
tenmileswide@alien.top 1 points 9 months ago

Depends entirely on which model you want. A Llama 2 13B serverless endpoint on RunPod would only cost about $0.001 for that request.

If you rent a cloud pod, it costs the same per hour no matter how much or how little you send to it, so the per-request cost depends entirely on how many requests you can keep it busy with.
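To put numbers on that, a rough break-even sketch; the hourly pod rate here is a placeholder assumption, not a quoted RunPod price:

```python
# Break-even: an hourly pod beats a per-request API once you push enough
# requests through it. The hourly rate is a hypothetical placeholder.
pod_usd_per_hour = 0.50          # assumed cheap GPU pod rate
api_usd_per_request = 0.018      # GPT-4 figure from the original post

breakeven = pod_usd_per_hour / api_usd_per_request
print(f"Pod wins above ~{breakeven:.0f} requests/hour")
# ~28 requests/hour at these numbers; below that, pay-per-token is cheaper.
```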

FairSum@alien.top 1 points 9 months ago

If you're looking at cloud / API services, the best option is probably TogetherAI or DeepInfra. TogetherAI tops out at $0.0009 / 1K tokens for 70B models, and DeepInfra tops out at $0.0007 / 1K input and $0.00095 / 1K output for 70B models. Both are well below Turbo and GPT-4 price levels. The big caveat is that this only works if the model you want to use is hosted there.

If it isn't and you want to deploy the model yourself, RunPod is probably the "cheapest" option, but it charges for as long as the pod is active, and it'll burn through money very quickly. In that case, RunPod likely won't be much cheaper, if at all, than using GPT-4.
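For concreteness, a small sketch of what those split rates work out to for the 300-token query from the post. The 200-in / 100-out split is a hypothetical assumption, and the rates are the ones quoted above, so check current pricing:

```python
# One request under split input/output pricing vs. a flat rate.
def split_cost(in_tok, out_tok, in_per_1k, out_per_1k):
    return in_tok / 1000 * in_per_1k + out_tok / 1000 * out_per_1k

# Assumed split of the 300-token query: 200 input, 100 output tokens.
deepinfra = split_cost(200, 100, 0.0007, 0.00095)   # ~$0.000235
together = 300 / 1000 * 0.0009                      # ~$0.000270
print(f"DeepInfra 70B: ${deepinfra:.6f}  TogetherAI 70B: ${together:.6f}")
```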

hudimudi@alien.top 1 points 9 months ago

Can't you use ChatGPT 3.5 for free? It would be the cheapest option and would surely beat any 70B model you can find on random websites.

DarthNebo@alien.top 1 points 9 months ago

Try HuggingFace Inference Endpoints with one of the cheap T4-based instances; these also scale to zero after 15 minutes of inactivity.
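A minimal call sketch once such an endpoint is deployed, using the `huggingface_hub` client; the endpoint URL and token below are placeholders, not real values:

```python
# Query a dedicated Hugging Face Inference Endpoint.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://your-endpoint-id.endpoints.huggingface.cloud",  # placeholder URL
    token="hf_...",  # placeholder access token
)

# If the endpoint has scaled to zero, the first call may return a 503
# while the instance wakes back up.
reply = client.text_generation("What is RunPod?", max_new_tokens=64)
print(reply)
```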