Yeah, 7B is no problem on phones, even if it's only at 4 tok/s.
Cramming Mistral at 2.7 bpw I get 2k context. Are you talking about VRAM, though?
Nope, regular RAM.
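For rough intuition on why 2.7 bpw squeezes in: the weights of a ~7.24B-parameter model come out to a bit over 2 GiB at that rate. Back-of-the-envelope only; KV cache and runtime overhead come on top of this.

```python
# Rough weight-memory estimate for a ~7B model at various bits-per-weight.
# Assumes ~7.24e9 parameters (Mistral 7B); ignores KV cache and runtime overhead.
params = 7.24e9
for bpw in (16, 4.0, 2.7, 2.0):
    gib = params * bpw / 8 / 1024**3
    print(f"{bpw:>4} bpw -> {gib:5.2f} GiB of weights")
```

The KV cache then grows linearly with context length, which is why 2k is about where the remaining headroom runs out on a few-GB device.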
Hmm, theoretically, if you switch to a super light Linux distro and grab a Q2 quantization of a 7B, llama.cpp (where mmap is on by default) should be able to run it. After all, I can run a 7B on a shitty $150 Android with only about 3 GB of RAM free using llama.cpp.
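Something like this minimal sketch is all it takes with llama-cpp-python (the Python bindings for llama.cpp); the GGUF filename below is just a placeholder, and mmap means the weights get paged in from storage instead of having to fit entirely in RAM up front.

```python
# Minimal sketch using llama-cpp-python. Assumes a Q2_K GGUF has already
# been downloaded; the path below is a placeholder, not a real file.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q2_K.gguf",  # hypothetical filename
    n_ctx=512,        # keep context small to limit KV-cache RAM
    n_threads=4,      # match the device's performance cores
    use_mmap=True,    # default: weights are paged in from storage as needed
    use_mlock=False,  # don't pin pages; lets the OS evict what it must
)

out = llm("Q: What does mmap buy you on a low-RAM device?\nA:", max_tokens=48)
print(out["choices"][0]["text"])
```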
Yes. There is an implementation that loads each layer only as it's needed, thereby reducing the VRAM requirements. Just Google it: LLaMA 70B with 4 GB.
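(The project being referred to is most likely AirLLM, which advertises 70B inference on a single 4 GB GPU.) Here's a toy sketch of the underlying idea, with made-up shapes and files, purely to show the control flow of keeping one layer's weights in memory at a time:

```python
# Toy illustration of layered loading: only one "layer" lives in RAM at a time,
# the rest stay on disk. Shapes and the fake model are invented for the example.
import os, tempfile
import numpy as np

tmp = tempfile.mkdtemp()
n_layers, dim = 8, 256

# One-time export step: each layer saved as its own file.
for i in range(n_layers):
    np.save(os.path.join(tmp, f"layer_{i}.npy"),
            np.random.randn(dim, dim).astype(np.float32))

def forward(x):
    # Stream the model through RAM one layer at a time.
    for i in range(n_layers):
        w = np.load(os.path.join(tmp, f"layer_{i}.npy"))  # load this layer only
        x = np.tanh(x @ w)                                # apply it
        del w                                             # drop it before the next one
    return x

print(forward(np.random.randn(1, dim).astype(np.float32)).shape)
```

The price is that the weights get re-read from storage on every forward pass, so it's painfully slow per token; fine as a proof of concept, not for interactive use.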
I have run 7B models with Q2_K on my Raspberry Pi with 4 GB, lol. It's kinda slow (still faster than I bargained for), but Q2_K models tend to be pretty stupid at the 7B size, no matter the speed. You can theoretically run a bigger model using swap space (basically using your storage drive as RAM), but then token generation speed comes crawling to a halt.
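A quick way to tell in advance whether a given GGUF will stay resident or end up thrashing swap on a board like that (Linux-only sketch; the model path is a placeholder):

```python
# Compare a GGUF's size against RAM before launching llama.cpp on a small board.
# If the file is bigger than available RAM, weights will be re-read from storage
# (or swapped) constantly and generation speed will crawl. Linux-only.
import os

model_path = "./llama-7b.Q2_K.gguf"  # hypothetical filename

page = os.sysconf("SC_PAGE_SIZE")
total_ram = os.sysconf("SC_PHYS_PAGES") * page
avail_ram = os.sysconf("SC_AVPHYS_PAGES") * page
model_size = os.path.getsize(model_path)

print(f"model:     {model_size / 2**30:.2f} GiB")
print(f"total RAM: {total_ram / 2**30:.2f} GiB")
print(f"available: {avail_ram / 2**30:.2f} GiB")

if model_size > avail_ram:
    print("expect heavy paging/swap -> tokens per second will tank")
else:
    print("weights should stay resident; speed limited by CPU, not I/O")
```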