this post was submitted on 31 Oct 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


In terms of AI use, especially LLMs.

$5000 USD for the 128GB RAM M3 MacBook Pro is still much cheaper than an A100 80GB.

[–] FlishFlashman@alien.top 1 points 1 year ago (1 children)

Apple has further segmented the Apple silicon lineup with the M3.

With the M1 & M2 Max, all the GPU variants had the same memory bandwidth (400GB/s for the M2 Max). The top-of-the-line M3 Max (16 CPU / 40 GPU cores) is still capped at 400GB/s, but the lower-spec variant (14 CPU / 30 GPU cores) now tops out at 300GB/s.

Inference is generally bound by memory bandwidth, so the M3 generation may not be much of an improvement. Apple claims improvements in the cache, but that may not mean much for inference. We'll know more once people have them in hand, which shouldn't take too long.
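
As a rough illustration of the bandwidth-bound argument: for single-stream decoding, each generated token requires streaming the full set of model weights from memory once, so peak memory bandwidth puts a hard ceiling on tokens per second. The sketch below assumes a ~35GB model (roughly a 70B model at 4-bit quantization, an illustrative assumption); the A100 figure is the SXM spec-sheet bandwidth. These are upper bounds, not benchmarks.

```python
# Back-of-envelope estimate of decode speed for a bandwidth-bound LLM.
# Assumption: each generated token streams all model weights from memory once,
# so tokens/sec <= memory_bandwidth / model_size.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical upper bound on single-stream decode speed."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 35.0  # assumed: ~70B parameters at ~4-bit quantization

chips = [
    ("M3 Max, 30-core GPU", 300.0),   # lower-spec M3 Max
    ("M2/M3 Max, 40-core GPU", 400.0),
    ("A100 80GB SXM", 2039.0),        # HBM2e spec-sheet figure
]

for name, bandwidth in chips:
    ceiling = max_tokens_per_sec(bandwidth, MODEL_GB)
    print(f"{name}: ~{ceiling:.0f} tokens/s theoretical ceiling")
```

Real throughput lands well below these ceilings (compute, cache behavior, and framework overhead all eat into it), but the ratio between the 300GB/s and 400GB/s parts is the point: the cheaper M3 Max variant gives up roughly a quarter of its inference ceiling.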

[–] TheRealDatapunk@alien.top 1 points 11 months ago

But what's the effect of using multiple graphics cards connected via relatively low-bandwidth PCI Express?