this post was submitted on 15 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.

Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.

It's pretty fast! I get about 28 t/s.

https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
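For reference, here's a minimal sketch of the approach, assuming llama-cpp-python built with cuBLAS and the GGUF pulled from TheBloke's Hugging Face repo; the actual notebook cells, model URL, and parameters may differ:

```python
# Build llama-cpp-python with cuBLAS so layers can be offloaded to the T4
# (assumed install flags; the notebook may pin a specific version)
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Download the quantized model (assumed URL; swap in whatever GGUF you prefer)
!wget -nc https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q6_K.gguf

from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,        # context window; adjust to taste
)

# Simple completion call; prompt formatting is up to you
output = llm("Explain GPU offloading in llama.cpp in one paragraph.", max_tokens=256)
print(output["choices"][0]["text"])
```

With all layers on the T4, generation speed should be in the same ballpark as the numbers above; if you run out of VRAM, lower n_gpu_layers or use a smaller quant.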

[–] Cute_Confection6105@alien.top 0 points 10 months ago (2 children)
[–] herozorro@alien.top 0 points 10 months ago (1 children)

How do you restart it after the Colab session dies?

[–] chibop1@alien.top 1 points 10 months ago

Just run it again.