I want to use ExLlama because it lets me run the Llama 70B model on my two RTX 4090s. I managed to get it working pretty easily via text-generation-webui, and inference is really fast! So far so good...

However, I need to use the model from Python to run some large-scale analyses, and I can't seem to find any guide or tutorial explaining how to use ExLlama in the usual Python/Hugging Face setup.

Is this just not possible? If it is, can someone point me to some example code that uses ExLlama from Python?

Much appreciated!

Reply from ReturningTarzan@alien.top:

Yes, the model directory is just all the files from a HF model, in one folder. You can download them directly from the "Files" tab of a HF model page by clicking the little download arrows, or use huggingface-cli. Git can also clone model repos if you have git-lfs installed.
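
For example, the huggingface_hub Python package can fetch a whole model repo in one call. A minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub); the repo id and local path below are placeholders:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo into one local folder.
# "someuser/some-model-exl2" and the local path are hypothetical examples.
model_dir = snapshot_download(
    repo_id="someuser/some-model-exl2",
    local_dir="/models/some-model-exl2",
)
print(model_dir)  # this folder now holds all the files the loader needs
```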

It specifically needs the following files:

  • config.json
  • *.safetensors
  • tokenizer.model (preferable) or tokenizer.json
  • added_tokens.json (if the model has one)

But it may use other files in the future, such as tokenizer_config.json, so it's best to just download all the files and keep them in one folder.
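
For reference, here is a minimal loading-and-generation sketch along the lines of the examples in the exllamav2 repo, assuming the newer exllamav2 package is installed; the model path is a placeholder, and exact API details may differ between versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the folder containing the files listed above.
config = ExLlamaV2Config()
config.model_dir = "/models/some-model-exl2"  # hypothetical path
config.prepare()

model = ExLlamaV2(config)
tokenizer = ExLlamaV2Tokenizer(config)

# A lazy cache plus load_autosplit spreads the weights across
# all available GPUs, e.g. two RTX 4090s for a 70B model.
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

output = generator.generate_simple("Once upon a time,", settings, 100)
print(output)
```

From there you can call generate_simple in a loop over your own data for large-scale analyses, instead of going through the webui.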