this post was submitted on 08 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I have a dataset with a context length of 6k tokens. I want to fine-tune a Mistral model with 4-bit quantization, but I hit an out-of-memory error on a GPU with 24 GB of VRAM.

Is there a rule of thumb for how much VRAM is needed for a context length this large?

Thanks!
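
For reference, here is a minimal QLoRA-style sketch of the kind of setup described above, using the Hugging Face transformers/peft/bitsandbytes stack. The model ID, LoRA hyperparameters, and target modules are illustrative assumptions, not values from the original post:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # assumed model; the post just says "Mistral"

# 4-bit NF4 quantization with bf16 compute (standard QLoRA defaults)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Gradient checkpointing trades compute for activation memory; at 6k tokens,
# activations (not weights) are usually what blows past 24 GB.
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA settings, not tuned values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```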

1 comment
edsgoode@alien.top:

If you don't want to lower the context length to fit in 24 GB, you can find A100 80GB (or 40GB) instances on Shadeform's cloud marketplace.
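
If staying on the 24 GB card is the goal instead, the usual levers are a per-device batch size of 1 with gradient accumulation, gradient checkpointing, and a paged 8-bit optimizer. A sketch of Trainer settings under those assumptions (all values illustrative, same Hugging Face stack as above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-qlora-6k",   # hypothetical output path
    per_device_train_batch_size=1,   # activations dominate at 6k tokens
    gradient_accumulation_steps=16,  # recover an effective batch of 16
    gradient_checkpointing=True,
    bf16=True,
    optim="paged_adamw_8bit",        # pages optimizer state to avoid OOM spikes
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)
```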