This post was submitted on 15 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I got tired of slow CPU inference, and of Text-Generation-WebUI getting buggier and buggier.

Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU tier on Colab.

It's pretty fast! I get 28 t/s.

https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
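
If you just want the gist without opening the notebook: it boils down to installing llama-cpp-python with CUDA enabled and loading the GGUF with every layer offloaded. Here's a rough sketch; the model path, context size, and prompt are placeholders, not copied from the notebook.

```python
# Minimal sketch of the setup, assuming llama-cpp-python was installed with
# CUDA support (e.g. CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",  # placeholder path to the downloaded GGUF
    n_gpu_layers=-1,                        # offload all layers to the T4
    n_ctx=4096,                             # placeholder context size
)

out = llm("Q: Why offload layers to the GPU? A:", max_tokens=64)
print(out["choices"][0]["text"])
```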

top 6 comments
[–] herozorro@alien.top 1 points 11 months ago

How long does it stay alive/online?

[–] nullnuller@alien.top 1 points 11 months ago

Great work. It would be nice to have some caching, and a way to automatically detect when the Colab session has died and re-run it, as others have pointed out.
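
For the caching part, one option is to keep the GGUF on Google Drive so a restarted session skips the download. A rough sketch (the repo id, filename, and folder name are assumptions, not something the notebook does):

```python
# Hypothetical caching sketch: keep the GGUF on Google Drive so that
# re-running the notebook after the session dies skips the download.
import os
from google.colab import drive
from huggingface_hub import hf_hub_download

drive.mount("/content/drive")
cache_dir = "/content/drive/MyDrive/gguf-cache"  # assumed folder name
os.makedirs(cache_dir, exist_ok=True)

model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",   # repo believed to host the quantized file
    filename="zephyr-7b-beta.Q6_K.gguf",
    local_dir=cache_dir,                      # reused on subsequent runs
)
print(model_path)
```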

[–] herozorro@alien.top 0 points 11 months ago (1 children)

How do you reboot it after the Colab session dies?

[–] chibop1@alien.top 1 points 11 months ago

Just run it again.