this post was submitted on 15 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.

Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.

It's pretty fast! I get 28 t/s.

https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
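In case you don't want to open the notebook, here's a minimal sketch of the same idea (not the exact notebook code): build llama-cpp-python with CUDA support, then load the GGUF with n_gpu_layers=-1 so every layer lands on the T4. The install flag and model path are assumptions on my part, so check the llama-cpp-python docs for the current CUDA build instructions.

```python
# In a Colab cell, install llama-cpp-python with CUDA (exact flag may vary by version):
#   !CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",  # assumed path to the downloaded GGUF
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,        # context window
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```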

[–] herozorro@alien.top 0 points 1 year ago (1 children)

How do you reboot it after the Colab dies?

[–] chibop1@alien.top 1 points 1 year ago

Just run it again.
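
Since Colab storage is wiped when the runtime dies, "run it again" means re-executing the install cell and re-downloading the model before reloading it. A hypothetical re-download step using huggingface_hub (repo and file names are assumptions, taken from the usual GGUF mirror, not from the notebook):

```python
from huggingface_hub import hf_hub_download

# Assumed repo/filename; adjust to wherever you fetch the GGUF from.
model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",
    filename="zephyr-7b-beta.Q6_K.gguf",
)
print(model_path)  # pass this to Llama(model_path=...)
```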