I have been trying to learn about fine-tuning and LoRA training for the past couple of weeks, but I'm having trouble finding approachable resources to learn from. Could you give me some pointers to what I can read to get started with fine-tuning Llama 2 or Mistral?
I have tried training quantized models locally with oobabooga and llama.cpp, and I also have access to RunPod. Really appreciate any info!