this post was submitted on 21 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
When did they thaw you out of the ice?!
Jokes aside, you probably mean 512 GB of RAM. That platform is old and slow; at best it runs dual-channel DDR3-1333, which is much worse than even bottom-of-the-barrel dual-channel DDR4.
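For a rough sense of the gap, peak memory bandwidth is transfer rate × 8 bytes per transfer × channel count. A minimal sketch of that arithmetic (the speeds are nominal rates; sustained throughput in practice is lower):

```python
# Rough peak memory bandwidth: transfer_rate (MT/s) * 8 bytes/transfer * channels.
def peak_bandwidth_gbs(mts: int, channels: int = 2) -> float:
    return mts * 8 * channels / 1000  # GB/s

print(peak_bandwidth_gbs(1333))  # DDR3-1333 dual channel: ~21.3 GB/s
print(peak_bandwidth_gbs(2133))  # DDR4-2133 dual channel: ~34.1 GB/s
print(peak_bandwidth_gbs(3200))  # DDR4-3200 dual channel: ~51.2 GB/s
```

So even the slowest common DDR4 kit has roughly 1.6x the peak bandwidth of that DDR3 setup, and bandwidth is what bounds CPU token generation speed.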
A 3090 will not care as long as you are doing pure GPU inferencing and nothing spills onto the CPU; otherwise, DDR3 and PCIe 2.0 will kill the performance.
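As a minimal sketch of what pure GPU inferencing looks like, assuming the llama-cpp-python bindings (the model path here is a placeholder): with every layer offloaded, the weights live in VRAM, so the slow DDR3 and PCIe 2.0 link only matter during the one-time model load.

```python
from llama_cpp import Llama

# n_gpu_layers=-1 asks llama.cpp to offload every layer to the GPU,
# so after the initial load over PCIe the slow DDR3 is out of the loop.
llm = Llama(model_path="./model.gguf", n_gpu_layers=-1)  # placeholder path

output = llm("Q: Why does RAM speed matter for CPU offloading? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

The moment you offload some layers to the CPU instead, every token has to stream those weights through system RAM, and the DDR3 bandwidth becomes the bottleneck.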