this post was submitted on 15 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

[–] candre23@alien.top 1 points 10 months ago

> Is this the beginning of the end of CUDA dominance?

Not unless Intel/AMD/MS/whoever ramps up their software APIs to the level of efficiency and just-works-edness that CUDA provides.

I don't like Nvidia/CUDA any more than the next guy, but it's far and away the best thing going right now. If you have an Nvidia card, you can get the best possible AI performance from it with basically zero effort on either Windows or Linux.
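
Here's roughly what I mean by zero effort. This is just my own sketch, assuming a stock PyTorch wheel built with CUDA support and an Nvidia card sitting in the box; nothing else needs configuring:

```python
# Rough sketch of the "zero effort" CUDA path: stock PyTorch build with CUDA
# support on an Nvidia card (my assumption for this example).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"running on: {device}")

# Small matmul that lands on the GPU if CUDA is present, with no extra setup.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
print((a @ b).shape)
```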

Meanwhile, with AMD you're either unbearably slow on OpenCL or in for an arduous slog getting ROCm working (unless you're using specific cards on specific Linux distros). Intel is limited to OpenCL at best.
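
For comparison, here's a rough sketch of checking which backend a PyTorch build actually shipped with. The ROCm branch is the part I'm hand-waving: it only even applies if you already got a ROCm build of PyTorch installed on a supported card, which is the arduous slog I'm talking about:

```python
# Rough sketch: which GPU backend was this PyTorch build compiled against?
# The ROCm/HIP branch assumes you already fought through the ROCm install.
import torch

if torch.version.cuda is not None:
    print("CUDA build:", torch.version.cuda)
elif getattr(torch.version, "hip", None) is not None:
    # ROCm builds reuse the torch.cuda namespace on top of HIP.
    print("ROCm/HIP build:", torch.version.hip)
else:
    print("CPU-only build; no CUDA or ROCm backend compiled in.")

print("GPU usable:", torch.cuda.is_available())
```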

Until some other manufacturer provides something that can legitimately compete with CUDA, CUDA ain't going anywhere.