manjimin

I'm planning to fine-tune a Mistral model with my own dataset (a full fine-tune, not LoRA).

The dataset is not that large, around 120 MB in JSONL format.

My questions are:

  1. Will I be able to fine-tune the model with four 40 GB A100 cards? (Rough sketch of my planned setup below.)
  2. If not, is using RunPod the easiest approach?
  3. I'm trying to instill knowledge in a certain language, for a field where the model doesn't have sufficient knowledge in that language. Is fine-tuning my only option? RAG is not viable in my case.

Thanks in advance!
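For reference, here's roughly what I was planning to run. This is only a sketch: it assumes Mistral-7B, a JSONL file with a "text" field, and a DeepSpeed ZeRO-3 config file, and all paths and hyperparameters are placeholders. My rough math is that a 7B model with Adam needs about 16 bytes per parameter (~112 GB) of weight/gradient/optimizer state, so it only looks feasible if that state is sharded across the four cards (ZeRO-3 or FSDP) with gradient checkpointing enabled.

```python
# Rough full fine-tune sketch (untested). Assumes Mistral-7B, a JSONL file with
# a "text" field, and a DeepSpeed ZeRO-3 config to shard params/grads/optimizer
# state across the 4x A100 40G cards. Launch with:
#   deepspeed --num_gpus=4 finetune.py
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# My dataset: one JSON object per line with a "text" field (placeholder path).
dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="mistral-ft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,
    gradient_checkpointing=True,
    deepspeed="ds_zero3.json",   # assumed ZeRO-3 config file (shards optimizer/grad/param state)
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```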


Newbie question, but is there a way to have four A100 40 GB cards run as one, with 160 GB of VRAM in total?

I am not able to load a 70B model even with 4-bit quantization because my lab only has 40 GB cards.

Edit: If this is possible, can I also run eight 3090 24 GB cards as one?
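Is something like the following what I should be doing? Just a sketch of my understanding: the model name is only an example 70B checkpoint, and device_map="auto" splits the layers across whatever GPUs are visible (so it's sharding, not one pooled VRAM space). A 70B model at 4-bit should be roughly 35-40 GB of weights plus KV cache, so it ought to fit once it's spread over the four cards.

```python
# Rough sketch: load a 70B model in 4-bit (bitsandbytes NF4) and let
# Accelerate shard the layers across all visible GPUs via device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "meta-llama/Llama-2-70b-hf"  # example 70B checkpoint only

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",   # spreads layers over the 4x 40G (or 8x 24G) cards
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```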