this post was submitted on 01 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Hi all, today Nvidia released a new driver version that appears to let the GPU fall back to system memory instead of crashing when it runs out of VRAM, as described here: https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion

I was wondering whether this also works for LLMs, and how I could enable it (or if it just works by default).

dampflokfreund@alien.top 1 point 10 months ago

Yes, it's system wide. You can set your preferred behavior in the NVIDIA Control Panel -> Manage 3D Settings -> Global Settings -> CUDA - Sysmem Fallback Policy.

The driver default is "Prefer Sysmem Fallback", which means it will offload to system RAM instead of crashing when VRAM is full.

"Prefer No Sysmem Fallback" is basically the old memory management: it crashes once your VRAM is full.
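
If you want to verify which mode is active, here's a minimal sketch (assuming PyTorch with a CUDA device; the 1 GiB chunk size and 64 GiB cap are arbitrary choices, not anything from the driver docs) that allocates GPU tensors until allocation fails. With "Prefer Sysmem Fallback" the allocations keep succeeding past your VRAM size (just much slower, since they spill into system RAM); with "Prefer No Sysmem Fallback" you get an out-of-memory error as soon as VRAM runs out.

```python
# Minimal sketch, assuming PyTorch with a CUDA device available.
# Watch nvidia-smi or Task Manager while it runs to see whether
# GPU memory usage spills over into "Shared GPU memory" (system RAM).
import torch

chunks = []
try:
    for i in range(64):  # cap at ~64 GiB so the loop always terminates
        # 256M float32 elements * 4 bytes = ~1 GiB per chunk
        chunks.append(torch.empty(256 * 1024**2, device="cuda"))
        print(f"allocated ~{i + 1} GiB of CUDA tensors")
except torch.cuda.OutOfMemoryError:
    print(f"OOM after ~{len(chunks)} GiB: fallback is off, or RAM is also exhausted")
```

Note that for LLM use the fallback mostly prevents hard crashes; once weights spill into system RAM, token generation typically slows down a lot, so fitting the model in VRAM (or offloading layers explicitly) is still preferable.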