this post was submitted on 23 Nov 2023
Machine Learning
CUDA cores (or shader cores in general) have long been used to compute graphics. A very common operation in computer graphics is matrix multiplication, just as in deep learning. Back in the day (the AlexNet era), NNs were computed on shader cores, but they have since moved entirely to Tensor cores. My questions are:

  1. Why have these workloads been separated? (Yes, obviously the tensor cores are more specialized and leave out a bunch of unnecessary operations, but how, and why not integrate that capability into the CUDA cores to boost MM operations for computer graphics?)

  2. Why isn't the workload offloaded to the other cores when the mathematical operations are the same?

  3. What makes tensor cores so much more efficient and faster?
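To make question 1 concrete, here is a toy Python sketch (not real GPU code; the 4×4×4 tile size is chosen for illustration, and real tensor core tile shapes vary by architecture) contrasting the two execution models: a shader/CUDA core issues one fused multiply-add (FMA) at a time, while a tensor core performs a whole small-tile matrix multiply-accumulate, D = A·B + C, as a single instruction.

```python
def shader_core_mma(A, B, C):
    """Scalar model: one multiply-add per issued 'instruction'."""
    n = len(A)
    D = [row[:] for row in C]
    fma_count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                D[i][j] += A[i][k] * B[k][j]  # one FMA per iteration
                fma_count += 1
    return D, fma_count

def tensor_core_mma(A, B, C):
    """Tile model: the whole D = A @ B + C counts as one instruction."""
    n = len(A)
    D = [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    return D, 1

# 4x4 example tile: A is the identity, so D should equal B + C.
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[float(i + j) for j in range(4)] for i in range(4)]
C = [[1.0] * 4 for _ in range(4)]

D_scalar, n_scalar = shader_core_mma(A, B, C)
D_tensor, n_tensor = tensor_core_mma(A, B, C)
assert D_scalar == D_tensor      # identical math...
assert (n_scalar, n_tensor) == (64, 1)  # ...but 64 scalar FMAs vs 1 tile op
```

The math is identical either way; the difference the questions are probing is that the tensor core amortizes instruction issue, scheduling, and operand fetch over the entire tile instead of paying that overhead per FMA.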

[–] picardythird@alien.top 1 points 11 months ago

I haven't worked on GPU design, or looked into tensor cores specifically, but I did study VLSI design in uni. Basically, even if there are nominal similarities between some of the operations shared between the types of cores, the actual silicon designs will differ based on the expected workloads each type is designed for. Tensor cores will have silicon layouts optimized for tensor operations, and graphics cores will have silicon layouts optimized for matrix operations (and other overhead). There's a lot that goes into this at the nanometer scale (in 3D), and it's way beyond the scope of a reddit comment to describe. I suggest finding a random VLSI textbook and reading about silicon geometries and how things like logic gates are actually implemented at the fab level.