Choice_Diver_2585

joined 10 months ago
 

My dataset's longest context length is 4k tokens. I want to fine-tune the Llama2-7b model on this dataset. How much GPU RAM will I need if I load the model with 4-bit quantization using bitsandbytes?

I ran into an OutOfMemoryError with 24 GB of GPU RAM.

Thank you!
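For reference, this is roughly how I'm loading the model (a minimal sketch assuming transformers + bitsandbytes; the model id and NF4 settings are my assumptions, not the only option):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization config (NF4 with bf16 compute is a common choice)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model onto the GPU
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed model id
    quantization_config=bnb_config,
    device_map="auto",
)
```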

 

I have a dataset with a context length of 6k tokens. I want to fine-tune a Mistral model with 4-bit quantization, but I hit an out-of-memory error on a 24 GB GPU.

Is there any baseline for how much RAM is needed at this context length?
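I tried a rough back-of-envelope estimate myself (the formula and helper below are my own simplification, not an official number, and it ignores the attention score matrices, which can grow quadratically with sequence length without something like FlashAttention):

```python
def estimate_vram_gb(n_params_b, seq_len, n_layers, hidden, batch=1):
    """Very rough VRAM estimate for 4-bit weights + fp16 activations."""
    weights = n_params_b * 1e9 * 0.5  # 4-bit quantized = 0.5 bytes/param
    # Activations kept for backprop: roughly a few fp16 (2-byte) copies
    # of the hidden states per layer (the factor 4 is a guess)
    acts = batch * seq_len * hidden * n_layers * 2 * 4
    return (weights + acts) / 1e9

# Llama2-7B-ish shape: 32 layers, hidden size 4096, seq len 4096
print(round(estimate_vram_gb(7, 4096, 32, 4096), 1))  # ≈ 7.8
```

That suggests weights plus basic activations alone don't fill 24 GB, so I suspect the quadratic attention memory and optimizer state are what push it over.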

Thanks!