Hello everyone,

I've fine-tuned Llama 2 on my own dataset using LoRA and now I'm looking to deploy it. The adapter weights are uploaded to HF, and the base model I'm using is h2oai/h2ogpt-4096-llama2-13b-chat.
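
For context, this is roughly how I'm loading the adapter for local testing with PEFT (the adapter repo name below is a placeholder for my actual upload):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_id = "h2oai/h2ogpt-4096-llama2-13b-chat"
adapter_id = "my-username/my-llama2-lora"  # placeholder for my adapter repo

# Load the full base model, then attach the LoRA adapter weights on top
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)
```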

I've been exploring the vLLM project and found it quite useful at first. However, I've hit a snag with my LoRA fine-tuned model: vLLM looks for a config.json in the model repo, but since I've only uploaded the LoRA adapters (adapter_config.json plus the adapter weights), there's no config.json available.
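
One workaround I'm considering, assuming PEFT's merge_and_unload applies to my adapter, is to fold the LoRA deltas into the base weights and point vLLM at the merged checkpoint, which does ship with its own config.json. A rough sketch (repo names and paths are placeholders):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM

base_id = "h2oai/h2ogpt-4096-llama2-13b-chat"
adapter_id = "my-username/my-llama2-lora"  # placeholder for my adapter repo
out_dir = "./merged-llama2-13b-chat"       # placeholder output path

# Attach the adapter, then fold the LoRA deltas into the base weights so the
# result is a plain Llama checkpoint that ships with its own config.json
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)

# vLLM can then serve the merged directory like any ordinary HF model
llm = LLM(model=out_dir)
print(llm.generate("Hello!")[0].outputs[0].text)
```

If vLLM has native adapter support that I'm missing, I'd much rather use that and skip the merge step.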

Am I overlooking something in my approach, or does vLLM not support LoRA fine-tuned models? Any insights or guidance would be greatly appreciated.