this post was submitted on 31 Oct 2023 to LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

 

I'm only getting 0.8 tokens/second with my 3060 12GB using Zephyr 7B beta.

I'll admit I barely know what I'm doing, but was I wrong to expect a little more? I was hoping for at least a quarter of GPT-3.5's speed...
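(For anyone landing here with the same numbers: 0.8 t/s on a 12 GB card usually means the model is running on the CPU, or only a few layers are offloaded to the GPU. A minimal sketch with llama-cpp-python, assuming a 4-bit GGUF quant of Zephyr 7B beta; the file name is illustrative, not an exact artifact name:)

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes a 4-bit GGUF quant of Zephyr 7B beta has been downloaded locally;
# the file name below is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU; 0 would be CPU-only
    n_ctx=4096,       # context window
)

out = llm("Explain KV caching in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

A 4-bit 7B model fits comfortably in 12 GB with all layers offloaded, which is typically the difference between sub-1 t/s and tens of t/s.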

[–] DarthNebo@alien.top 1 points 1 year ago (3 children)
[–] Aaaaaaaaaeeeee@alien.top 1 points 1 year ago (1 children)

What's the latest t/s on a 4-bit model with TGI? Is there a difference compared with the HF Transformers loader?
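(A rough way to check batch-1 throughput yourself: launch TGI with 4-bit quantization and time a generation against its /generate endpoint. The docker invocation in the comment uses TGI's documented flags, but treat the image tag, port, and token estimate as assumptions:)

```python
# Rough batch-1 throughput check against a local TGI server.
# Assumes TGI was started with 4-bit quantization, e.g.:
#   docker run --gpus all -p 8080:80 ghcr.io/huggingface/text-generation-inference:latest \
#     --model-id HuggingFaceH4/zephyr-7b-beta --quantize bitsandbytes-nf4
import time
import requests

N_TOKENS = 200
t0 = time.time()
r = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Write a haiku about GPUs.",
        "parameters": {"max_new_tokens": N_TOKENS},
    },
)
r.raise_for_status()
elapsed = time.time() - t0

# Upper-bound estimate: assumes all N_TOKENS were actually generated.
print(f"~{N_TOKENS / elapsed:.1f} tokens/s")
```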

[–] DarthNebo@alien.top 1 points 1 year ago

The attention layers get replaced with FlashAttention-2, and there's KV caching as well, so you get much better batch-1 and batch-N results, with continuous batching across every request.
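(To see the batch-N effect, fire several requests concurrently: with continuous batching the aggregate tokens/s should scale well past the batch-1 number. A minimal sketch, reusing the same assumed local TGI endpoint as above:)

```python
# Concurrent-client sketch: with continuous batching, aggregate throughput
# across N simultaneous requests should far exceed N sequential runs.
# Assumes the same local TGI endpoint as the earlier snippet.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/generate"
N_CLIENTS = 8
MAX_NEW = 100

def one_request(i: int) -> None:
    requests.post(
        URL,
        json={
            "inputs": f"Prompt {i}: count to ten.",
            "parameters": {"max_new_tokens": MAX_NEW},
        },
    ).raise_for_status()

t0 = time.time()
with ThreadPoolExecutor(max_workers=N_CLIENTS) as pool:
    list(pool.map(one_request, range(N_CLIENTS)))
elapsed = time.time() - t0

# Upper-bound estimate: assumes every request generated MAX_NEW tokens.
print(f"~{N_CLIENTS * MAX_NEW / elapsed:.1f} aggregate tokens/s across {N_CLIENTS} clients")
```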
