I haven't worked on GPU design, or looked into tensor cores specifically, but I did study VLSI design in uni. Basically, even if some of the operations are nominally similar across the two types of cores, the actual silicon designs will differ based on the workloads each type is designed for. Tensor cores will have silicon layouts optimized for tensor operations (fused matrix multiply-accumulates), and graphics cores will have layouts optimized for general graphics and shader work (and its overhead). There's a lot that goes into this at the nanometer scale (in 3D), and it's way beyond the scope of a reddit comment to describe. I suggest finding a random VLSI textbook and reading about silicon geometries and how things like logic gates are actually implemented at the fab level.
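To make "tensor operation" a bit more concrete, here's a minimal CUDA sketch (my own illustration, not anything from an actual GPU's silicon) using the WMMA API, which is the software-visible face of the tensor-core hardware: one warp issues a fused 16x16x16 matrix multiply-accumulate in a single `mma_sync`, where the same tile on ordinary FP32/graphics cores would be a loop of individual FMAs.

```cuda
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// Illustrative kernel: one warp computes a single 16x16x16 tile, C = A*B + C.
// The whole fused matrix multiply-accumulate is handed to the tensor cores by
// wmma::mma_sync; on the regular (graphics/FP32) cores the same tile would be
// hundreds of separate multiply-add instructions.
__global__ void wmma_tile_16x16x16(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // start accumulator at zero
    wmma::load_matrix_sync(a_frag, a, 16);          // load 16x16 half tile of A
    wmma::load_matrix_sync(b_frag, b, 16);          // load 16x16 half tile of B
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // tensor-core fused MMA
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```

This needs a tensor-core-capable GPU (compute capability 7.0+, so compile with something like `nvcc -arch=sm_70`); the point is just that the hardware exposes whole matrix tiles as a primitive, which is exactly the kind of workload difference the silicon layout gets optimized around.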