[–] EmergMemeHologram@startrek.website 24 points 9 months ago (2 children)

I very much hope this is a success for AMD, because for years I've wanted to use their chips for analysis. Frankly, the software interface is so far behind Nvidia's that it's ridiculous, and because of that none of the tools I want to use support it.

Attached is my review of AMD: [attached image not shown]

[–] stardreamer@lemmy.blahaj.zone 6 points 9 months ago* (last edited 9 months ago)

If we're nitpicking about AMD: another thing I dislike is their smaller presence in the research space compared to their competitors. Both Intel and NVIDIA throw money at risky new ideas like crazy (NVM, DPUs, GPGPUs, P4, Frame Generation). Meanwhile, AMD seems to hop in only once a specific area is well established and has an existing market.

For consumer stuff, AMD is definitely my go-to. But it occurs to me that we need companies that are willing to fund research in academia, even if that research doesn't have a great track record of producing profitable results.

[–] Dudewitbow@lemmy.zip 5 points 9 months ago (1 children)

Define "software interface," because Adrenalin as an interface is miles ahead of Nvidia Control Panel + GeForce Experience in modernity and responsiveness.

The word you're looking for is features, because the interface is not what AMD is behind on at all.

[–] EmergMemeHologram@startrek.website 7 points 9 months ago* (last edited 9 months ago) (1 children)

CUDA vs. ROCm. Both also support OpenCL, which is meh.

I target GPUs for mathematical simulations and calculations, not really gaming.
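To make that concrete, here's a minimal sketch of the kind of numerical kernel such work involves (a generic saxpy example of my own, not from any particular tool). It's plain CUDA C++; notably, ROCm's HIP mirrors this runtime API almost one-to-one, which matters for the wrapper discussion below.

```cuda
// Minimal sketch: a toy numerical kernel in plain CUDA C++.
// Computes y = a*x + y over a large array ("saxpy").
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed memory is accessible from both host and device
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
    cudaDeviceSynchronize();  // wait for the GPU before reading on the host

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```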

[–] Dudewitbow@lemmy.zip 3 points 9 months ago* (last edited 9 months ago) (1 children)

Hence it's a feature set. CUDA has a more in-depth feature set because Nvidia is the leader and gets to dictate where compute goes. This in turn creates a cyclical feedback loop: devs use CUDA, which locks them further and further into the ecosystem. It's a self-reinforcing problem until one side bows out, and it won't be Nvidia.

It forces AMD to play catch-up and write a wrapper that translates CUDA code for their own stack, because the devs won't do it themselves.
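For a rough sense of why such a wrapper is even feasible: the two runtime APIs are nearly isomorphic, so translation is largely mechanical renaming. AMD's actual tooling for this is HIP and its hipify converters (my aside, not stated above); the sketch below shows the correspondence for the calls used in the saxpy example, and is illustrative rather than exhaustive.

```cuda
// Illustrative CUDA -> HIP correspondence (what AMD's hipify tools automate).
// Kernel code and the <<<grid, block>>> launch syntax are unchanged.
//
//   CUDA                          HIP
//   --------------------------    ------------------------------
//   #include <cuda_runtime.h>     #include <hip/hip_runtime.h>
//   cudaMallocManaged(&p, sz)     hipMallocManaged(&p, sz)
//   cudaDeviceSynchronize()       hipDeviceSynchronize()
//   cudaFree(p)                   hipFree(p)
//   saxpy<<<grid, block>>>(...)   saxpy<<<grid, block>>>(...)   // identical
```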

AI is the interesting case, because the major libraries (e.g. PyTorch, TensorFlow) already have non-Nvidia backends, and with Microsoft's desire to bring AI compute to every PC, it makes more sense for them to partner with AMD/Intel: every PC requires a processor, while an Nvidia GPU in the PC is not guaranteed. This creates a more natural escape from requiring CUDA. When a project does require an Nvidia GPU, it usually traces back to a small dev who programmed a feature directly against CUDA, not to the major libraries.

AMD didn't even have a good/reliable implementation of OpenCL, which I would have liked to see succeed over CUDA.

Intel and AMD dropped the ball massively for like 15 years after Nvidia released CUDA. And it wasn't quiet, either: CUDA was pushed all over the place from the moment it came out.