It’s roughly how Macs work with their unified memory.
The current APUs are still quite slow, though that may change. Also, in most cases you need to designate a chunk of memory as GPU-specific, so it's not truly shared.
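For illustration, here's a minimal PyTorch sketch of the practical difference (assuming PyTorch is installed; the device selection and tensor sizes are just placeholders). On Apple Silicon the "mps" device allocates from the same unified pool the CPU uses, while a typical APU/dGPU "cuda" device allocates from its dedicated VRAM carve-out, so data gets copied rather than shared:

```python
import torch

# Pick whichever accelerator backend is present.
if torch.backends.mps.is_available():
    # Apple Silicon: unified memory, CPU and GPU share one pool
    device = torch.device("mps")
elif torch.cuda.is_available():
    # Discrete GPU or APU carve-out: dedicated VRAM region
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024)  # lives in ordinary system RAM
y = x.to(device)             # unified memory: stays in the shared pool;
                             # carved-out VRAM: host-to-device copy
print(device, y.shape)
```

The `.to(device)` call looks identical either way, which is why the distinction only shows up in performance: on a carve-out you pay the transfer and are capped by the VRAM allocation, not total system RAM.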
The new Oryon CPU from Qualcomm looks pretty good, arguably better than the M2, but for Windows.
A chip that won't be available for ~6 months will be better than a chip that came out a year ago? Amazing ;)