this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


There has been a lot of movement around and below the 13b parameter bracket in the last few months, but it's wild to think the best 70b models are still Llama 2 based. Why is that?

We now have 13b models like the 8-bit bartowski/Orca-2-13b-exl2 approaching, or even surpassing, the best 70b models.

[–] Exotic-Estimate8355@alien.top 1 points 11 months ago (2 children)

$1/hour for an A100? Where? I can barely get one on GCE, and it's almost $4/hr.

[–] __JockY__@alien.top 1 points 11 months ago

Yes, but you don't have Meta's purchasing power to rent 10,000 GPUs for a month. Economies of scale, my friend!

[–] toothpastespiders@alien.top 1 points 11 months ago

I'd like to know too if there's one for exactly $1. Even half a buck or so of difference builds up over time.

But RunPod's close at least, at $1.69/hour.
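The "builds up over time" point is easy to quantify. A minimal sketch, using the $1.69/hr rate quoted above against a hypothetical $1/hr rate, and assuming continuous usage for a 30-day month:

```python
# Rough cost comparison for the GPU rental rates mentioned in the thread.
# The $1.69/hr figure is from the comment above; $1.00/hr and the
# continuous-usage assumption are illustrative.

def rental_cost(rate_per_hour: float, hours: float) -> float:
    """Total cost of renting a GPU at a fixed hourly rate."""
    return rate_per_hour * hours

hours_per_month = 24 * 30  # 720 hours of round-the-clock use

runpod  = rental_cost(1.69, hours_per_month)
cheaper = rental_cost(1.00, hours_per_month)

print(f"At $1.69/hr: ${runpod:.2f}/month")
print(f"At $1.00/hr: ${cheaper:.2f}/month")
print(f"Difference:  ${runpod - cheaper:.2f}/month")
```

At full utilization the $0.69/hr gap comes to roughly $500 a month, so the per-hour difference really does matter at scale.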