I'd just be worried they'll drop support for them in ROCm 6.0. They already dropped the MI50s. Technically you can still run them, like the older MI25, but ROCm builds are tied to specific kernel versions, so before long you might have to maintain a system on an old kernel just to keep them working. I have a pair of MI100s, and while they work fine, they're slower than NVIDIA 3090s with llama.cpp, ExLlama, and KoboldCpp for some reason. It looks like with the new FlashAttention-2 release the MI210 is the oldest card they support, which I find very frustrating. I also have a couple of W6800s, and they're actually as fast as or faster than the MI100s with the same software, cost about the same, and have built-in cooling.
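If anyone wants to reproduce the comparison, here's a minimal timing sketch using the llama-cpp-python bindings (assuming they were built against ROCm/hipBLAS for the AMD cards and CUDA for the 3090); the model path is a placeholder, swap in whatever GGUF you're testing:

```python
# Rough tokens/sec comparison sketch. Assumes llama-cpp-python was compiled
# with GPU support (hipBLAS on ROCm, cuBLAS on NVIDIA).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.Q4_K_M.gguf",  # placeholder model file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=2048,
)

prompt = "Explain the difference between GDDR6 and HBM2 memory."
start = time.time()
out = llm(prompt, max_tokens=128)
elapsed = time.time() - start

# The completion dict follows the OpenAI-style schema, so usage counts are there.
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```

Run the same script on each box and compare the tok/s numbers; it's crude (single prompt, no warmup), but it was enough to show the 3090s pulling ahead of the MI100s for me.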