this post was submitted on 31 Oct 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
You probably need to wait for the Mac Studio refresh announcements for something more clearly relevant to LLM devs. Hopefully those will offer 256 GB or more of unified memory, but that's likely a 2024 announcement.
That said, it's handy to be able to run inference on a q8 70B model on your local dev box, so the 96 GB and 128 GB configurations are interesting for that.
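A quick back-of-the-envelope sketch of why those configurations matter: at q8, weights take roughly one byte per parameter, so a 70B model's weights alone are about 70 GB before KV cache and runtime buffers. The helper below is illustrative (the `overhead_gb` figure is an assumption, not a measured value):

```python
# Rough memory estimate for running a quantized model in unified memory.
# Assumption: q8 stores ~1 byte per parameter; overhead_gb is a guessed
# allowance for KV cache and runtime buffers, not a measured number.

def model_memory_gb(params_billions: float,
                    bytes_per_param: float,
                    overhead_gb: float = 8.0) -> float:
    """Approximate RAM needed to hold the weights plus working buffers."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return weights_gb + overhead_gb

# A 70B model at q8 (~1 byte/param) vs. f16 (~2 bytes/param):
print(f"q8 70B:  ~{model_memory_gb(70, 1.0):.0f} GB")  # ~78 GB -> fits in a 96 GB config
print(f"f16 70B: ~{model_memory_gb(70, 2.0):.0f} GB")  # ~148 GB -> out of reach at 128 GB
```

This is why the 96 GB machines land right at the threshold for q8 70B inference, while unquantized 70B would need the hoped-for 256 GB tier.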