this post was submitted on 23 Nov 2023

Machine Learning


CUDA cores (or shader cores in general) have long been used to compute graphics. A very common operation in computer graphics is matrix multiplication, just like in deep learning. Back in the day (AlexNet era), NNs were computed on shader cores, but they have now almost completely moved to Tensor cores. My questions are:

  1. Why have these workloads been separated? (Yes, obviously the tensor cores are more specialized and leave out a bunch of unnecessary operations, but how, and why not integrate them into the CUDA cores to boost MM operations for computer graphics?)

  2. Why isn't the workload offloaded to the other cores when the mathematical operations are the same?

  3. What makes tensor cores so much more efficient and faster?
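To make the overlap in the question concrete, here is a small numpy sketch (numpy standing in for the GPU) showing that a graphics transform and a neural-network layer really are the same primitive; the specific matrices are made up for illustration:

```python
import numpy as np

# Graphics: a 4x4 homogeneous transform applied to a vertex.
transform = np.eye(4, dtype=np.float32)
transform[:3, 3] = [1.0, 2.0, 3.0]            # translation component
vertex = np.array([0.0, 0.0, 0.0, 1.0], dtype=np.float32)
moved = transform @ vertex                     # matrix-vector product

# Deep learning: a dense layer forward pass is the same product.
weights = np.random.randn(4, 4).astype(np.float32)
activations = np.random.randn(4).astype(np.float32)
out = weights @ activations                    # identical primitive

print(moved)
```

Same math on paper; as the answers below explain, the silicon that executes it can still look very different.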

top 13 comments
[–] picardythird@alien.top 1 points 11 months ago

I haven't worked on GPU design, or looked into tensor cores specifically, but I did study VLSI design in uni. Basically, even if there are nominal similarities between some of the operations shared between the types of cores, the actual silicon designs will differ based on the expected workloads each type is designed for. Tensor cores will have silicon layouts optimized for tensor operations, and graphics cores will have silicon layouts optimized for matrix operations (and other overhead). There's a lot that goes into this at the nanometer scale (in 3D), and it's way beyond the scope of a reddit comment to describe. I suggest finding a random VLSI textbook and reading about silicon geometries and how things like logic gates are actually implemented at the fab level.

[–] smokingPimphat@alien.top 1 points 11 months ago

Building a solution that solves the specific problem you have is always going to yield faster and more efficient results than something that is even a 90% solution.

A recent example would be bitcoin ASICs. While I'm not into crypto personally, it was amazing to see just how fast bitcoin ASICs got rolled out.

It's now at a point where no one in their right mind would use anything else, and if a faster one gets released, people clamor to grab as many as they can afford.

Having dedicated hardware to do the specific math for ML is the logical move.

[–] smt1@alien.top 1 points 11 months ago (2 children)

I'd listen to this recent talk by Bill Dally (chief scientist @ NVIDIA) who talks a bit about the underlying math operation primitives from a computer architecture-level:

https://www.youtube.com/watch?v=kLiwvnr4L80

He cuts through some of the marketing language like "CUDA cores" and "tensor cores" to focus on the complex instructions, from about 16 minutes in.

[–] ognjenivuk@alien.top 1 points 11 months ago

!remindme 1day

[–] anirudhr20@alien.top 1 points 11 months ago

!remindme 1 day

[–] VirtualHat@alien.top 1 points 11 months ago (2 children)

The big difference with tensor cores is that they use a 16-bit float multiply combined with a 32-bit float accumulate. This makes them much more efficient in terms of transistors required... but not a drop-in replacement for CUDA cores.

Libraries like PyTorch can do matrix multiply (MM) on both CUDA cores and Tensor cores (and CPU, too, if you like). Typically Tensor cores are ~1.5-2x faster (in theory they're much faster; in practice we're often memory-bandwidth limited, so it doesn't matter). The current default in PyTorch is to perform MM on CUDA cores and convolutions on Tensor cores. The reason is that MM sometimes requires extra precision, and in vision models most of the work is in the convolutions anyway.
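A rough numpy sketch of the fp16-multiply / fp32-accumulate idea, and why the extra accumulator precision matters over a long dot product (the explicit loop is purely illustrative; it is not how tensor cores are implemented):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float16)
y = rng.standard_normal(4096).astype(np.float16)

# High-precision reference over the same (fp16-rounded) inputs.
ref = np.dot(x.astype(np.float64), y.astype(np.float64))

# Tensor-core style: fp16 operands, fp32 accumulator.
acc32 = np.float32(0.0)
for xi, yi in zip(x, y):
    acc32 += np.float32(xi) * np.float32(yi)

# Naive alternative: accumulate in fp16 too; rounding error
# compounds at every one of the 4096 additions.
acc16 = np.float16(0.0)
for xi, yi in zip(x, y):
    acc16 = np.float16(acc16 + xi * yi)

print("fp32-accumulate error:", abs(float(acc32) - ref))
print("fp16-accumulate error:", abs(float(acc16) - ref))
```

The fp32 accumulator keeps the result close to the reference while the operands stay cheap 16-bit values, which is the trade-off described above.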

[–] wen_mars@alien.top 1 points 11 months ago

And recently tensor cores have started appearing with 8 bit float/int as well, which gives them a huge advantage in inference throughput. The memory bandwidth limitation can be mitigated by increasing the batch size.

[–] Buddy77777@alien.top 1 points 11 months ago (1 children)

Why use 32-bit accumulator to accumulate 16-bit numbers?

[–] wen_mars@alien.top 1 points 11 months ago

If you multiply two 16-bit numbers, the result can overflow the range that can be represented by 16 bits. And even when a single product fits, summing thousands of products in a 16-bit accumulator compounds rounding error at every addition.
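A quick numpy demonstration of the overflow case: float16 tops out around 65504, so one product of two perfectly representable values can already blow past it (numpy will emit an overflow warning here, which is the point):

```python
import numpy as np

a = np.float16(300.0)   # exactly representable in fp16
b = np.float16(300.0)

prod16 = a * b                           # 90000 > 65504 -> inf in fp16
prod32 = np.float32(a) * np.float32(b)   # fp32 holds 90000.0 exactly

print(prod16, prod32)
```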

[–] SnooHesitations8849@alien.top 1 points 11 months ago

Shader cores and tensor cores only make sense together in a gaming card. Shader cores carry the work of shading, while tensor cores carry the upscaling, especially in the case of ray tracing. In terms of operands, shading units work with vectors as the base data structure, while tensor cores work on matrices (4x4 tiles). In an ML card like the A100, the ratio of shading units to tensor units is much lower than in a gaming card.

[–] hershey678@alien.top 1 points 11 months ago

The bus widths and on-module memory sizes are configured differently depending on your workload. It's been a while since I looked at this stuff, but Google's Tensor cores, for example, are really optimized for big matrix computations.

[–] danielfm123@alien.top 1 points 11 months ago

They need to release new models with more and more tensor cores until it's unified. Otherwise you would only ever buy one video card.

[–] ResponsibleJudge3172@alien.top 1 points 11 months ago

Tensor cores do what CUDA cores do, but they accelerate a set of matrix instructions so that they finish in one clock cycle.