this post was submitted on 17 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I want to train LLaMA using QLoRA on multiple tasks in a sequential manner, i.e. task A → B → C. Would it be possible to combine, say, the adapter weights trained on A with those trained on B, and subsequently A with C? How can I go about doing this?

Has anyone tried doing so and achieved reasonable results? I am aiming for task A to be continual pretraining for domain adaptation, while B and C are the downstream tasks.

top 1 comments
[–] WitchSayo@alien.top 1 points 1 year ago

You can merge LoRA A into the base model, and then finetune B and C on the merged model.
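
With the Hugging Face PEFT library, that flow might look roughly like this (an untested sketch; the base model name and adapter paths are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# 1. Load the full-precision base model and attach the adapter trained on task A.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "adapters/task_A")  # placeholder adapter path

# 2. Fold the LoRA weights into the base weights and save the merged checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("llama-merged-A")
AutoTokenizer.from_pretrained(base_id).save_pretrained("llama-merged-A")

# 3. Use "llama-merged-A" as the base checkpoint for two separate QLoRA runs
#    (task B and task C); each resulting adapter then sits on top of A's
#    domain-adapted weights.
```

Note that merging is usually done against the full-precision base; for the subsequent QLoRA runs you would re-quantize the merged checkpoint when loading it.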