this post was submitted on 24 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


My current specs are a 7950X CPU, 96 GB of 6000 MHz RAM, and an RTX 4090. Would buying a P40 make bigger models run noticeably faster? If it would, is there anything I should know about buying P40s? For example, do they take normal power connectors? I'm pretty sure I'll have to buy a cooler for it, but I don't know what else.

you are viewing a single comment's thread
[–] simcop2387@alien.top 1 points 11 months ago (1 children)

The big issue is that once you combine a P40 with the 4090, you'll have to disable 16-bit floats and do all the compute in 32-bit (the weights can still be stored in 16-bit, but the calculations themselves run in FP32). You can still get alright performance out of the P40s that way (I'm running 4 of them), but it cripples the 4090. I don't know of any library that handles the conversion and runs different kernels on different cards to avoid this, since that would be a completely separate code path.
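
To make that concrete, here's a minimal sketch of what picking the compute dtype per card could look like in a PyTorch/transformers setup. This is not from the original comment: the `pick_dtype` helper and the model id are illustrative, and it assumes a machine where all GPUs are CUDA devices.

```python
# Minimal sketch, assuming a PyTorch + transformers stack (plus
# `accelerate` for device_map). Pascal cards like the P40 report
# compute capability 6.1 and run FP16 math at a small fraction of
# their FP32 rate, so the safe fallback on a mixed box is FP32.
import torch
from transformers import AutoModelForCausalLM

def pick_dtype(device_index: int) -> torch.dtype:
    """FP16 only on Volta (7.0) or newer; FP32 on Pascal like the P40.
    (The P100, capability 6.0, is the odd Pascal out with fast FP16;
    it's ignored here for simplicity.)"""
    major, minor = torch.cuda.get_device_capability(device_index)
    return torch.float16 if (major, minor) >= (7, 0) else torch.float32

# If any GPU in the system is pre-Volta, the shared compute dtype has
# to be FP32 -- this is exactly what drags the 4090 down.
per_gpu = {pick_dtype(i) for i in range(torch.cuda.device_count())}
dtype = torch.float32 if torch.float32 in per_gpu else torch.float16

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative model id
    torch_dtype=dtype,           # dtype for the loaded weights and compute
    device_map="auto",           # spread layers across the 4090 and P40s
)
```

Note this is cruder than what llama.cpp-style runtimes can do: there the weights can stay in 16-bit while only the calculations are upcast to 32-bit, whereas the `torch_dtype=torch.float32` call above loads the weights in FP32 as well.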

You'd really do much better adding a used 3090 from eBay (assuming it works).

[–] CertainlyBright@alien.top 1 points 11 months ago

Could you elaborate on "disabling 16 bit floats" a little bit more?