chibop1


I got Llama.cpp to work with BakLLaVA (Mistral+LLaVA 1.5) on colab.

Here's a working example that offloads all the layers of bakllava-1.Q8_0 to T4, a free GPU on Colab.

https://colab.research.google.com/gist/chigkim/a5be99a864c4196d5e379a1e6e280a9e/bakllava.ipynb
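
If you just want the core steps outside the notebook, here's a rough sketch of what full offload looks like using llama-cpp-python's LLaVA 1.5 chat handler. The GGUF/mmproj filenames, context size, and install flag below are assumptions on my part and may not match what the notebook actually does.

    # Minimal sketch, not the notebook verbatim.
    # Install llama-cpp-python with CUDA offload first, e.g.:
    #   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # Filenames are placeholders; use the GGUF + mmproj files you actually downloaded.
    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
    llm = Llama(
        model_path="bakllava-1.Q8_0.gguf",
        chat_handler=chat_handler,
        n_ctx=2048,        # image tokens need more room than the tiny default context
        n_gpu_layers=-1,   # -1 = offload every layer to the T4
    )

    resp = llm.create_chat_completion(messages=[
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            {"type": "text", "text": "Describe this image."},
        ]},
    ])
    print(resp["choices"][0]["message"]["content"])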

FYI, Colab has no persistent storage, and you can't keep an instance running for long; I assume that's intentional for business reasons. You have to set up and download everything from scratch every time you run it. Colab is meant for demos and experimentation, not for running a server in production.

chibop1@alien.top 1 points 11 months ago

Just run it again.

 

I got tired of slow CPU inference, and of Text-Generation-WebUI getting buggier and buggier.

Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to T4, a free GPU on Colab.

It's pretty fast! I get 28 t/s.

https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
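
For reference, the gist boils down to something like the sketch below. The prompt and sampling values are placeholders, and the install flag is what worked for me at the time; newer llama-cpp-python builds use -DGGML_CUDA=on instead of the cuBLAS flag.

    # Rough sketch, assuming llama-cpp-python built with CUDA support:
    #   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="zephyr-7b-beta.Q6_K.gguf",  # downloaded into the Colab runtime
        n_ctx=4096,
        n_gpu_layers=-1,   # -1 = offload all layers to the GPU
    )

    out = llm("Explain what a KV cache is in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])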