this post was submitted on 04 Dec 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
Yes, that M1 Max should run LLMs really well, including 70B models with decent context. An M2 won't be much better, and an M3, other than the 400GB/s variant, won't be as good: everything below the 400GB/s tier had its memory bandwidth cut relative to the equivalent M1/M2 models.
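Memory bandwidth matters because token generation is largely bandwidth-bound: each generated token requires streaming essentially all the model weights through memory once, so bandwidth divided by model size gives a rough upper bound on decode speed. A minimal sketch of that arithmetic (the ~4.5 bits/weight figure is an assumption for a typical 4-bit quant, not anything from this thread):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bits_per_weight: float) -> float:
    """Rough upper bound on decode speed for a bandwidth-bound LLM.

    Assumes every generated token streams all weights once; ignores
    KV-cache traffic and compute, so real speeds come in lower.
    """
    model_gb = params_b * bits_per_weight / 8  # model size in GB
    return bandwidth_gb_s / model_gb

# Hypothetical numbers: 70B model at ~4.5 bits/weight on a 400GB/s M1 Max
print(round(max_tokens_per_sec(400, 70, 4.5), 1))
```

By this estimate a 400GB/s machine tops out around 10 tokens/s on a 4-bit 70B, while a 300GB/s or 150GB/s configuration scales down proportionally, which is why the bandwidth cut matters.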
Are you seeing that $2400 at B&H? It was $200 cheaper there a couple of weeks ago, so it might be worth waiting to see if the price comes back down.