vec1nu

joined 1 year ago
[–] vec1nu@alien.top 1 points 11 months ago

This is a really good question, and I'd also like to understand how to use the knowledge base with an LLM.

[–] vec1nu@alien.top 1 points 1 year ago

Use something like lmql, guidance, or guardrails to constrain the output so the model can say it doesn't know (see the sketch below). I've also had some success with the airoboros fine-tuned models, which have this behaviour defined in the dataset via a specific prompt.
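
For example, here's a minimal sketch of the constrained-decoding idea using guidance (assuming its 0.1-style API with a local GGUF model; the path and question are placeholders):

```python
from guidance import models, select

# Load a local GGUF model through llama.cpp (path is a placeholder).
lm = models.LlamaCpp("/path/to/model.gguf", n_ctx=2048)

lm += "Q: What is the capital of Atlantis?\nA: "

# Constrain the answer so an explicit refusal is always a valid option.
lm += select(["I don't know.", "It is"], name="answer")

print(lm["answer"])
```

The point is that the refusal is baked into the grammar, so the model can't ramble instead of admitting uncertainty.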

[–] vec1nu@alien.top 1 points 1 year ago

I think you don't have CUDA set up properly. Use pip install --verbose to see the compilation messages while it tries to build llama.cpp with CUDA support. You might need to set the CUDA_HOME environment variable manually so the build can find your toolkit.
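
As a quick sanity check after reinstalling (for reference, the CUDA build of llama-cpp-python is usually forced with something like CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --verbose --force-reinstall llama-cpp-python, though the exact flag has changed across versions), you can ask the installed package whether GPU offload was compiled in. This is a sketch assuming a reasonably recent llama-cpp-python; the getattr guard covers older releases that don't expose the binding:

```python
# Check whether the installed llama-cpp-python build supports GPU offload.
# llama_supports_gpu_offload is assumed to exist in recent releases;
# the getattr guard keeps this runnable on older ones.
import llama_cpp

supported = getattr(llama_cpp, "llama_supports_gpu_offload", None)
if supported is not None and supported():
    print("GPU offload compiled in: the CUDA build worked.")
else:
    print("CPU-only build (or too-old bindings): rebuild with the CUDA flags.")
```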

[–] vec1nu@alien.top 1 points 1 year ago (1 children)

I haven't used GPTQ in a while, but I can say that GGUF has 8-bit quantization, which you can use with llama.cpp. Furthermore, if you use the original Hugging Face models, the ones you load with the transformers loader, you have options there to load in either 8-bit or 4-bit, along the lines of the sketch below.
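
Outside a webui, the raw transformers equivalent looks something like this (the model id is a placeholder, and 8-/4-bit loading needs the bitsandbytes and accelerate packages installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM on the Hub

# 8-bit quantization; swap to load_in_4bit=True for 4-bit.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```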

[–] vec1nu@alien.top 1 points 1 year ago

Which frontend is that?