I believe that GPU offloading in llama.cpp can be used to merge your VRAM and RAM. I would suggest trying an airoboros llama 2 70b q3_k_m quant, and Tess-m-1.3 q5_k_m once TheBloke makes quants. There will be some leftover space in your RAM after loading Tess, but it's a model with 200k context, so you'll need that space for context. Max out your VRAM, and maybe use a batch size of -1 to trade prompt processing speed for more VRAM headroom. Try offloading with both cuBLAS and CLBlast; last time I checked, CLBlast seemed to let you offload more layers to the GPU within the same memory footprint.
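If it helps, here's a minimal sketch of what that split looks like through the llama-cpp-python bindings (not the only way to do it; the model filename and layer count below are placeholders, not a recommendation):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python,
# built with cuBLAS or CLBlast support). The model path and n_gpu_layers
# value are hypothetical; raise n_gpu_layers until VRAM is nearly full and
# let the remaining layers stay in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-l2-70b.Q3_K_M.gguf",  # placeholder filename
    n_gpu_layers=45,   # number of layers offloaded to VRAM
    n_ctx=4096,        # context window; larger contexts need more memory
    n_batch=512,       # smaller batch uses less VRAM for prompt processing, but is slower
)

out = llm("Explain GPU offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```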
With llama.cpp and some layers offloaded to VRAM, you may be able to run 70B, depending on the quantization.
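As a rough back-of-the-envelope sketch of how many layers might fit on the GPU (the file size, layer count, and free-VRAM figure here are assumptions, not measurements):

```python
# Rough estimate only: how many layers of a 70B GGUF could fit in VRAM.
# Llama 2 70B has 80 transformer layers; a Q3_K_M file is very roughly ~31 GB.
model_size_gb = 31.0   # assumed Q3_K_M 70B file size
n_layers = 80          # Llama 2 70B layer count
free_vram_gb = 20.0    # leave headroom for the KV cache and GPU buffers

gb_per_layer = model_size_gb / n_layers
offloadable = int(free_vram_gb / gb_per_layer)
print(f"~{gb_per_layer:.2f} GB per layer, so roughly {offloadable} layers fit in VRAM")
```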
!remindme 7 days
From my understanding, if you want to run models without quality loss, quantized models are not exactly what you're looking for, at least not below a certain threshold. With your setup you should be able to run 7B models in 8-bit.
For everything beyond that you'll need higher quantized models (e.g., 4-bit), which also introduce higher quality loss.
There was a post a while back that outlined the hardware requirements for 8-bit and 4-bit, for both GPU and CPU setups. Of course you can push quantization even further and run even larger models, but it'll introduce more loss as well.
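For a rough sense of why 8-bit tops out around 7B on that kind of hardware, the usual rule of thumb is weights only, ignoring KV cache and runtime overhead (real quant formats like Q4_K_M land a bit above their nominal bit width):

```python
# Rule of thumb: weight memory ~ parameters * bits_per_weight / 8.
# These are approximations; actual GGUF files include overhead.
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(7, 8), (7, 4), (13, 8), (13, 4), (70, 4)]:
    print(f"{params}B @ {bits}-bit ~ {weight_memory_gb(params, bits):.1f} GB for weights")
```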