this post was submitted on 21 Nov 2023
LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

I am looking to get an MI60 for both LLMs and other high-compute tasks, since some are going for $350 on eBay. It looks like a really good deal for my applications, especially with the 32 GB of VRAM, but I was wondering what others have experienced using it for LLMs. How is compatibility with OpenCL or ROCm? I mainly use Windows, so I'm wondering whether I can still get most of its speed there, and what kinds of speeds people are getting with various models.

Thank you!
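As a rough sanity check on what the MI60's 32 GB can hold, here is a back-of-the-envelope VRAM estimate. This is a sketch, not a precise sizing tool: the 1.2 overhead factor for KV cache and runtime buffers is an assumption, and 4.5 bits per weight is only an approximation of a Q4_K_M-style quantization.

```python
def approx_model_vram_gb(n_params_billion: float, bits_per_weight: float,
                         overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weight size times a fudge factor for
    KV cache and runtime buffers (the 1.2 overhead is an assumption)."""
    bytes_per_weight = bits_per_weight / 8
    return n_params_billion * bytes_per_weight * overhead

# Will it fit in a single MI60's 32 GB?
print(approx_model_vram_gb(70, 4.5))  # 70B at ~4.5 bpw: ~47 GB -> no, needs 2 cards
print(approx_model_vram_gb(34, 4.5))  # 34B at ~4.5 bpw: ~23 GB -> yes
```

By this estimate a quantized 34B model fits comfortably on one card, while a 70B quant needs a second GPU or CPU offload.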

tu9jn@alien.top · 10 months ago

I run 3× MI25s; a 70B Q4_K_M model starts at 7 t/s and slows to ~3 t/s at full context. A 7B FP16 model runs at about 18 t/s. As far as I know, the MI series only has Linux drivers.
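To put those decode rates in perspective, here is a trivial sketch of how long a reply would take at the speeds quoted above. The 500-token response length is an assumed example; the 7 t/s and 3 t/s figures come from the comment.

```python
def generation_time_s(num_tokens: float, tokens_per_sec: float) -> float:
    """Seconds to generate num_tokens at a given decode rate."""
    return num_tokens / tokens_per_sec

# A 500-token reply from a 70B Q4_K_M model on 3x MI25:
print(generation_time_s(500, 7))  # ~71 s with an empty context
print(generation_time_s(500, 3))  # ~167 s near full context
```

So the context-length slowdown more than doubles the wall-clock time for the same reply, which matters a lot for long chat sessions.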