this post was submitted on 10 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
I ran a 13B Q4 model on a Raspberry Pi 4 8GB with llama.cpp, no special settings, and it just automatically cached from disk... It was mega slow and got worse with more tokens, but it did it. Don't know if it was llama.cpp or Raspberry Pi OS doing the caching.
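For anyone wanting to try the same thing, a minimal sketch of the invocation (the model filename here is hypothetical; by default llama.cpp memory-maps the GGUF file, so the OS page cache is what streams weights from disk when they don't all fit in RAM):

```shell
# Run a 13B Q4 quantized model with llama.cpp's example binary.
# Model path/name is a placeholder; substitute your own GGUF file.
./main \
  -m models/llama-2-13b.Q4_K_M.gguf \
  -p "Hello" \
  -n 64 \
  --threads 4   # match the Pi 4's 4 cores
```

The "automatic caching" is most likely the mmap behavior plus the kernel page cache, not anything Pi-specific.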
You can build llama.cpp with CMake on many platforms.
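A rough sketch of the CMake build, assuming a standard toolchain is already installed (on a Pi that's roughly `sudo apt install build-essential cmake git`):

```shell
# Clone and build llama.cpp out-of-tree with CMake.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake ..
cmake --build . --config Release
```

The same steps work on Linux, macOS, and (with the usual generator differences) Windows, which is part of why it runs on ARM boards like the Pi at all.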