this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I have a query that uses around 300 tokens, and at $0.06 per 1,000 tokens that works out to roughly $0.018 per request.

Let's say I deployed a local Llama model on RunPod, on one of the cheaper machines. Would that request be cheaper than running it through GPT-4?
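As a sanity check on the arithmetic in the question (a quick sketch; the $0.06 per 1,000 tokens rate and the 300-token request size are the question's own figures):

```python
# Per-request cost at the question's stated GPT-4 rate.
price_per_1k_tokens = 0.06   # USD per 1,000 tokens (figure from the question)
tokens_per_request = 300     # tokens in the query (figure from the question)

cost_per_request = tokens_per_request / 1000 * price_per_1k_tokens
print(f"${cost_per_request:.3f} per request")  # $0.018, i.e. roughly 2 cents
```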

[–] tenmileswide@alien.top 1 points 11 months ago

Depends entirely on what model you want. The Llama 2 13B serverless endpoint would only cost about $0.001 for that request on RunPod.

If you rent a cloud pod, it costs the same per hour no matter how much or how little you send to it, so the economics depend entirely on how many requests you can keep it busy with.
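That trade-off can be sketched as a break-even calculation. The $0.50/hour pod rate below is a made-up illustrative figure, not an actual RunPod price; the per-request cost is the question's GPT-4 figure:

```python
# Break-even: how many requests per hour before a flat-rate rented pod
# becomes cheaper than paying per request?
pod_rate_per_hour = 0.50        # USD/hour -- hypothetical pod price
gpt4_cost_per_request = 0.018   # USD, from the question (300 tokens @ $0.06/1k)

break_even = pod_rate_per_hour / gpt4_cost_per_request
print(f"The pod wins above roughly {break_even:.0f} requests/hour")
```

Below that request rate the per-request pricing is cheaper; above it, the flat hourly pod is, since its cost is fixed regardless of load.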