this post was submitted on 09 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
my setup
EPYC Milan-X 7473X 24-Core 2.8GHz 768MB L3
512GB of HMAA8GR7AJR4N-XN HYNIX 64GB (1X64GB) 2RX4 PC4-3200AA DDR4-3200MHz ECC RDIMMs
MZ32-AR0 Rev 3.0 motherboard
6x 20TB WD Red Pros on ZFS with zstd compression
SABRENT Gaming SSD Rocket 4 Plus-G with Heatsink 2TB PCIe Gen 4 NVMe M.2 2280
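For reference, enabling zstd compression on a ZFS dataset (as in the storage setup above) is a one-liner on OpenZFS 2.0+. The pool/dataset name `tank/models` below is a hypothetical placeholder, not from the thread:

```shell
# Enable zstd compression on a dataset (OpenZFS 2.0+; dataset name is illustrative)
zfs set compression=zstd tank/models

# Or pick an explicit level (zstd-1 .. zstd-19); higher = smaller but slower writes
zfs set compression=zstd-3 tank/models

# Check the ratio actually being achieved
zfs get compressratio tank/models
```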
you can probably get away with a non-X without any real performance difference. It might matter for very tiny models, but that's not the point of getting such a beastly machine.
I got the Milan-X because I also use it for CAD, circuit board development, gaming, and video editing, so it's an all-in-one for me.
also my electric bill went from $40 a month to $228 a month, but some of that is because I haven't set up the suspend states yet and the machine isn't sleeping the way I want it to. I just haven't gotten around to it. I imagine that would cut the bill in half, and then choosing the right fan manager and governors might save me another $30 a month.
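As a sanity check on that bill jump, here's a rough sketch of the arithmetic. The $0.15/kWh rate is an assumption for illustration, not a figure from the thread:

```python
# Rough monthly electricity cost for a machine drawing a constant average load.
# The $/kWh rate below is an illustrative assumption, not from the thread.

HOURS_PER_MONTH = 730  # ~24 * 365 / 12

def monthly_cost(avg_watts: float, usd_per_kwh: float) -> float:
    """Monthly cost in USD for a constant average draw."""
    kwh = avg_watts / 1000 * HOURS_PER_MONTH
    return kwh * usd_per_kwh

# A ~$188/month jump at an assumed $0.15/kWh implies roughly this draw:
implied_watts = 188 / 0.15 / HOURS_PER_MONTH * 1000
print(f"implied average draw: ~{implied_watts:.0f} W")

# Sleeping the machine half the time roughly halves that delta:
print(f"halved: ~${monthly_cost(implied_watts / 2, 0.15):.0f}/month")
```

At that assumed rate the delta works out to a continuous draw somewhere north of 1.5 kW, which is why suspend states make such a large difference.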
I can run Falcon 180B unquantized and still have tons of RAM left over.
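A quick back-of-envelope check on why 512GB is enough for that. The parameter count is approximate, and activations plus KV cache add overhead on top of the weights:

```python
# Back-of-envelope RAM footprint for an unquantized (fp16/bf16) model.
# Parameter count is approximate; runtime overhead (activations, KV cache)
# comes on top of the weights.

def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Size of the model weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

falcon_180b = 180e9  # ~180 billion parameters
print(f"fp16 weights: ~{weights_gib(falcon_180b):.0f} GiB")
print(f"headroom in 512 GiB: ~{512 - weights_gib(falcon_180b):.0f} GiB")
```

That's roughly 335 GiB of weights, leaving well over 150 GiB of headroom on a 512GB box.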
I take it you live in a low-cost electricity area if your bill was $40 before that. Where I live, people can pay 10 times that even if they just live in an apartment. So in high-cost areas like mine, the lower power draw, and thus the electricity cost savings, of something like a Mac would end up paying for it.
No way, you're that one guy I uploaded the f16 airoboros for! I was hoping you'd get the model and I think you did :)
sounds like me ;) Thanks!