Currently running P40s, P100s, and MI25s, each in pairs.
On one box I had 2x P4 and 2x P40, but the mismatched VRAM was messing with loading LLMs, so I yanked the P4s. May put them in an R620.
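If you do want to keep mismatched cards in one box, one hedged workaround sketch: llama.cpp's --tensor-split flag takes per-GPU proportions, which you can derive from each card's VRAM. The VRAM numbers below are illustrative for a 2x P40 + 2x P4 mix.

```python
# Sketch: derive llama.cpp --tensor-split proportions from per-GPU VRAM.
# VRAM sizes are illustrative (2x P40 @ 24GB, 2x P4 @ 8GB).
def tensor_split(vram_gb):
    total = sum(vram_gb)
    return [round(v / total, 3) for v in vram_gb]

if __name__ == "__main__":
    split = tensor_split([24, 24, 8, 8])
    # Pass as e.g.: -ts 0.375,0.375,0.125,0.125
    print(",".join(str(s) for s in split))
```

This just keeps the layer split proportional to capacity; the big cards still end up doing most of the work, which is why pulling the small cards is the simpler fix.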
Also had an M40 but yanked it due to slow speeds versus power consumed. At 24 cents a kWh, a pair of GPUs running 24/7 adds about 300-350W on top of the 100-140W base, or roughly $53 a month. The price difference between an M40 and a P100 is paid back in about 2 months.
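The back-of-envelope power math, as a quick sketch (rate and wattages from above; real bills vary with actual load and duty cycle):

```python
# Sketch: monthly electricity cost of extra GPU draw running 24/7.
def monthly_cost(extra_watts, rate_per_kwh, hours=24 * 30):
    """Cost in dollars for a 30-day month."""
    return extra_watts / 1000 * hours * rate_per_kwh

if __name__ == "__main__":
    # 300-350W at $0.24/kWh lands in the low-to-mid $50s per month.
    for w in (300, 350):
        print(f"{w}W -> ${monthly_cost(w, 0.24):.2f}/month")
```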
Other GPUs owned: the K40 and Grid K1 don't have CUDA support in current PyTorch releases (their compute capability is too old), and compiling from source is a pain.
Obtained some P102-100s on the ultra cheap. Supposedly 1080 Ti equivalent, but with only 5GB of VRAM. Haven't tested them yet because the card dimensions and power-connector location don't seem to fit a Dell R7x0 chassis.
You can pick up an old Xeon-based server preconfigured with 512GB-1TB of RAM for $350-1000. The RAM will be slower, in the 1066-2400 speed range. AVX should be there by default; AVX2/AVX-512 is even better (AVX2 starts with the E5-2600 v3 series). The setup won't rival an eight-way SXM4 A100 box, but you can load some big models and get slow responses.
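As a rough sketch of what "load some big models" means for RAM sizing (the bytes-per-weight figures are approximations for common quant formats; real files carry extra overhead for KV cache and context):

```python
# Sketch: rough RAM needed just to hold a model's weights.
# Bytes-per-parameter values are approximate for common formats.
BYTES_PER_PARAM = {"fp16": 2.0, "q8_0": 1.0, "q4_0": 0.5}

def weights_gb(n_params_billion, fmt):
    """Approximate weight footprint in GB."""
    return n_params_billion * BYTES_PER_PARAM[fmt]

if __name__ == "__main__":
    # A 70B model fits comfortably in a 512GB box even at fp16,
    # and a 4-bit quant leaves plenty of headroom.
    for fmt in ("fp16", "q8_0", "q4_0"):
        print(f"70B {fmt}: ~{weights_gb(70, fmt):.0f} GB")
```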