this post was submitted on 08 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
You've got to look at the memory bus width and bandwidth on these GPUs, and what you'll see is that only the XX80 and XX90 cards have enough bandwidth to actually stream all that VRAM. During token generation the model's weights get read out of VRAM roughly once per token, so bandwidth, not capacity, is what sets your speed. The 4060 sits on a 128-bit bus at roughly 270-290 GB/s, while a 4090's 384-bit bus pushes about 1 TB/s, so the 4060 can hold a big model but can't actually move that much memory around.
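
A rough back-of-the-envelope sketch of why that matters, assuming generation is memory-bound and the full set of weights is read once per token (the bandwidth and model-size figures below are illustrative spec-sheet numbers, not benchmarks):

```python
# Theoretical ceiling on tokens/sec for a bandwidth-bound LLM:
# each generated token requires streaming all model weights from VRAM once.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound: memory bandwidth divided by bytes read per token."""
    return bandwidth_gb_s / model_size_gb

# Illustrative assumption: a ~7B model quantized to 4-bit is roughly 4 GB of weights.
model_gb = 4.0

for name, bw in [
    ("RTX 4060 (128-bit)", 272),
    ("RTX 4080 (256-bit)", 717),
    ("RTX 4090 (384-bit)", 1008),
]:
    print(f"{name}: ~{max_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")
```

Real throughput lands well below these ceilings (KV-cache reads, kernel overhead, etc.), but the ranking holds: capacity decides whether the model fits, bandwidth decides how fast it runs.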