this post was submitted on 25 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
Apple Silicon Macs are a great option for running LLMs, especially if you want to run a large model on a laptop. That said, there isn't a big performance difference between the M1 Max and the M3 Max for text generation; prompt processing does show generational improvements. This may change in future versions of macOS if optimizations unlock better Metal Performance Shaders throughput on newer GPU generations, but for now they perform similarly.
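A quick way to judge whether a large model will even fit in a Mac's unified memory is back-of-the-envelope arithmetic on parameter count and quantization width. The sketch below is my own rough estimate, not an exact formula: the ~20% overhead factor for KV cache and runtime buffers is an assumption, and real usage varies with context length and runtime.

```python
# Rough memory estimate for running a quantized LLM in unified memory.
# Assumption: resident size ≈ params * (bits / 8), plus ~20% overhead
# for KV cache and runtime buffers (a crude rule of thumb, not exact).

def est_model_gb(n_params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate resident size in GB for a quantized model."""
    bytes_total = n_params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

def fits(n_params_billion: float, bits_per_weight: float,
         usable_ram_gb: float) -> bool:
    """True if the estimate fits in the RAM you can spare for the model."""
    return est_model_gb(n_params_billion, bits_per_weight) <= usable_ram_gb

# Example: a 70B model at 4-bit quantization needs roughly 42 GB,
# so it fits comfortably on a 64 GB machine but not a 32 GB one.
print(round(est_model_gb(70, 4), 1))  # ~42.0
print(fits(70, 4, 64), fits(70, 4, 32))
```

By this estimate, laptops with 64 GB+ of unified memory are what make large-model inference practical on a Mac in the first place, which is why the Max-class chips come up so often in these discussions.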
Apple Silicon Macs aren't currently a great option for training or fine-tuning models; software support for GPU-accelerated training on Apple Silicon is still limited.