this post was submitted on 12 Feb 2024
517 points (99.1% liked)

Linux
[–] LarmyOfLone@lemm.ee 11 points 9 months ago (4 children)

Do LLM or that AI image stuff run on CUDA?

[–] UraniumBlazer@lemm.ee 11 points 9 months ago* (last edited 9 months ago)

CUDA is required to interface with Nvidia GPUs. AI workloads almost always need a GPU for best performance.
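As a minimal sketch of what that interface looks like in practice (assuming PyTorch is installed): frameworks expose a runtime check for CUDA, and notably the ROCm build of PyTorch reuses the same `torch.cuda` API, so the identical code runs on supported AMD GPUs too.

```python
# Minimal sketch, assuming PyTorch. A ROCm build of PyTorch reuses the
# torch.cuda namespace, so this works unchanged on supported AMD GPUs.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"  # PyTorch not installed; fall back to CPU
print(f"running on: {device}")
```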

[–] brianorca@lemmy.world 11 points 9 months ago (1 children)

Nearly all such software supports CUDA (which until now was Nvidia-only), and some also support AMD through ROCm, DirectML, ONNX, or other means, but CUDA is the most common. This will open up more of those projects to users with AMD hardware.
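The fallback order described above can be sketched in a few lines. This is purely illustrative: `pick_backend` and the backend names are hypothetical, not a real library API.

```python
# Hypothetical sketch of backend selection: prefer CUDA, then the
# AMD/other backends mentioned above, then plain CPU as a last resort.
PREFERENCE = ["cuda", "rocm", "directml", "onnx-cpu"]

def pick_backend(available):
    """Return the first preferred backend this machine supports."""
    for backend in PREFERENCE:
        if backend in available:
            return backend
    return "cpu"  # universal fallback

print(pick_backend({"rocm", "onnx-cpu"}))  # an AMD box without CUDA -> rocm
```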

[–] LarmyOfLone@lemm.ee 1 points 9 months ago

Thanks, that is what I was curious about. So good news!

[–] redcalcium@lemmy.institute 10 points 9 months ago

They are usually released for CUDA first, and if a project gets popular enough, someone will come along and port it to other platforms, which can take a while, especially for ROCm. Apple M-series ports usually appear before ROCm ones, which shows how much the dev community dislikes working with ROCm, with famous examples such as geohot throwing in the towel after working with ROCm for a while.

[–] MalReynolds@slrpnk.net 9 points 9 months ago* (last edited 9 months ago)

Yes, llama.cpp and its derivatives, as well as Stable Diffusion, also run on ROCm. LLM fine-tuning is CUDA-centric as well; ROCm implementations are not as mature there, but they're coming along.
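For anyone wanting to try llama.cpp on either vendor's GPU, the backend is chosen at build time. The flag names below are roughly as they were in early 2024; they have changed between llama.cpp releases, so check the project README for your version.

```shell
# Build llama.cpp with a GPU backend (flag names as of early 2024;
# they change between releases, so verify against the current README).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Nvidia (CUDA via cuBLAS):
make LLAMA_CUBLAS=1

# AMD (ROCm via hipBLAS):
make LLAMA_HIPBLAS=1
```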