this post was submitted on 22 Aug 2023
88 points (93.1% liked)

By replacing the usual denoising algorithms, which generate an image from the (usually small) number of traced rays in a scene, with a new AI model that runs alongside DLSS upscaling, Nvidia is able to show ray-traced reflections and shadows with fewer artifacts, significantly improved clarity, and more detail represented (like soft shadows and small occlusions), at no penalty to performance.

Unlike frame generation, DLSS 3.5's ray reconstruction will work on all RTX-series cards, including the 2000 series. In one example, Nvidia shows how fast-moving objects no longer leave trails of temporal artifacts when RT global illumination is enabled.
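To see why those trails appear in the first place, here is a toy sketch (not Nvidia's actual algorithm) of temporal accumulation, the classic real-time denoising approach: each new noisy frame is blended into a history buffer. When the underlying signal changes abruptly, as with a fast-moving object, the history takes several frames to catch up, and that lag shows on screen as a ghosting trail. The function name and blend factor below are illustrative.

```python
def temporal_accumulate(frames, alpha=0.1):
    """Blend each frame into a running history: h = alpha*new + (1-alpha)*h."""
    history = frames[0]
    out = [history]
    for f in frames[1:]:
        history = alpha * f + (1 - alpha) * history
        out.append(history)
    return out

# A single pixel that is dark for 5 frames, then a bright object arrives.
signal = [0.0] * 5 + [1.0] * 5
accumulated = temporal_accumulate(signal)

# The history buffer lags far behind the true value right after the jump;
# that lag, smeared along the object's motion, is the "trail of temporal
# artifacts" the article describes.
print(round(accumulated[5], 3))   # first bright frame: still mostly dark
print(round(accumulated[9], 3))   # several frames later: still well below 1.0
```

Ray reconstruction's pitch is essentially to replace this hand-tuned history blending with a trained model that can keep more detail without the motion lag.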

[–] anlumo@feddit.de -1 points 1 year ago (2 children)

This is very exciting. Unfortunately, AMD cards won't be able to benefit from this, making the GPU market ever more fragmented.

[–] booty@sh.itjust.works 14 points 1 year ago

How does this fragment anything? DLSS and Nvidia's ray tracing tech have always been exclusive to their own RTX cards. Nothing changes in that respect. Ray tracing, as a whole, isn't exclusive to Nvidia, but their tech has been.

[–] Edgelord_Of_Tomorrow@lemmy.world 8 points 1 year ago (1 children)

NVIDIA literally couldn't make this work on AMD cards if it wanted to. The hardware to run it simply doesn't exist on AMD cards, so the performance hit would make it worse than not using it at all. Then everyone would complain that NVIDIA is purposefully crippling AMD.

Contrary to popular belief, FSR isn't behind DLSS because NVIDIA has some magical AI capability for training DLSS; through partners like Microsoft and Sony, AMD has access to just as much AI training capacity, if not more, to train FSR if it wanted to.

Additionally, NVIDIA isn't going to waste any more AI horsepower on training DLSS than it has to, when there is essentially infinite demand for that compute from high-paying customers right now. If training the algorithm were the secret sauce, AMD could do it too, and it could catch up simply by spending more over a shorter period of time.

The problem is that FSR has to be extremely lightweight to run, because it competes with graphics rendering for resources. The more AMD makes FSR do, the less performance advantage it provides. DLSS, on the other hand, uses the tensor cores that otherwise sit unutilised on RTX cards. This means DLSS can be a significantly heavier and more complicated system, because it has essentially free resources to run on.

Until AMD ships tensor cores as standard, or has some other otherwise-idle compute to draw on, FSR is going to remain behind.

[–] anlumo@feddit.de 3 points 1 year ago

I'm not blaming NVIDIA for this; it's just bad that the two jobs of building cards and building rendering tech are combined within a single company.