Former Intel CPU engineer details how internal x86-64 efforts were suppressed prior to AMD64's success
(www.tomshardware.com)
That seems to be exactly what you're arguing about, unless I have misread this entire thread.
If we want to highlight other capabilities, we should use different terminology than "X-bit", because that has pretty much universally come to refer to register and address widths, not internal data paths. And we do that: product spec sheets call out extensions to point out the unique capabilities they offer (e.g. Intel was famous for supporting AVX-512 several years before AMD).
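For what it's worth, a minimal C sketch of that distinction (not from the article): the "X-bit" label corresponds to pointer and general-register width, while vector support is reported separately. It assumes a GCC/Clang-style compiler that defines the __AVX2__ / __AVX512F__ feature macros.

    /* Sketch only: "64-bit" = pointer/general-register width; vector width
       is advertised separately via extension macros (GCC/Clang style). */
    #include <stdio.h>

    int main(void)
    {
        printf("pointer width: %zu bits\n", sizeof(void *) * 8);   /* 64 on x86-64 */
        printf("size_t width:  %zu bits\n", sizeof(size_t) * 8);

    #if defined(__AVX512F__)
        puts("vector extension: AVX-512 (512-bit registers)");
    #elif defined(__AVX2__)
        puts("vector extension: AVX2 (256-bit registers)");
    #elif defined(__SSE2__)
        puts("vector extension: SSE2 (128-bit registers)");
    #else
        puts("no vector extension macros detected");
    #endif
        return 0;
    }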
That said, now that 32-bit is essentially dead, the "X-bit" marker is dead too, and saying something is 256-bit or whatever today is just going to confuse people. People have gotten into the habit of talking about specific capabilities if it's relevant (which it isn't for most people, who just care about "IPC").
That was kind of the point: it's ridiculous to think a modern CPU hasn't evolved dramatically since the introduction of mainstream 64-bit in 2003.
It's still called 64-bit, but there have been so many developments since then.
Exactly, and much of that comes from a modern core operating at around 256 bits internally for faster execution.
I'm not arguing it's wrong to call it 64-bit, because there is no "true" bit width to call it. So we might as well still call it 64-bit, because that describes the core instruction set (not just the pointers, as someone else claimed). My point was just that this doesn't really capture the dramatic development of the CPU as a whole, and even the individual cores are far more complex in hardware, even though the main instruction set remains the same.
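To make the "256 bits internally" point concrete, here's a rough C sketch (my own illustration, assuming AVX2 support and a compiler flag like -mavx2): one 256-bit vector instruction adds four 64-bit lanes at once, while the base ISA and the pointers stay 64-bit.

    /* Rough sketch, assuming AVX2 (compile with e.g. -mavx2): one 256-bit
       instruction adds four 64-bit lanes, while pointers stay 64-bit. */
    #include <stdint.h>
    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        int64_t a[4]   = {1, 2, 3, 4};
        int64_t b[4]   = {10, 20, 30, 40};
        int64_t out[4];

        __m256i va = _mm256_loadu_si256((const __m256i *)a);
        __m256i vb = _mm256_loadu_si256((const __m256i *)b);
        __m256i vc = _mm256_add_epi64(va, vb);   /* four 64-bit adds in one op */
        _mm256_storeu_si256((__m256i *)out, vc);

        for (int i = 0; i < 4; i++)
            printf("%lld ", (long long)out[i]);
        putchar('\n');
        return 0;
    }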