this post was submitted on 23 Jun 2024
276 points (94.5% liked)

Technology

[–] wewbull@feddit.uk 143 points 5 months ago* (last edited 5 months ago) (20 children)

We do, depending on how you count it.

There are two major widths in a processor: the data register width and the address bus width. But even that is not the whole story. If you go back to a processor like the 68000, the classic 16-bit processor, it has:

  • 32-bit data registers
  • 16-bit ALU
  • 16-bit data bus
  • 32-bit address registers
  • 24-bit address bus

Some people called it a 16/32-bit processor, but really it was the 16-bit ALU that classified it as 16-bit.

If you look at a Zen 4 core it has:

  • 64-bit data registers
  • 512-bit AVX data registers
  • 6 x 64-bit integer ALUs
  • 4 x 256-bit AVX ALUs
  • 2 x 128-bit data bus to DDR5 (dual edge 64-bit)
  • ~40-bits of addressable physical RAM

So, what do you want to call this processor?

64-bit (integer width), 128-bit (physical data bus width), 256-bit (widest ALU) or 512-bit (widest register width)? Do you want to multiply those numbers up by the number of ALUs in a core? ...by the number of cores on a piece of silicon?

Me, I'd say Zen4 was a 256-bit core, but you could argue any of the above numbers.

Basically, it's a measurement that lost all meaning so people stopped using it.
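
None of those hardware widths are even visible to most software anyway; what a program mostly sees is the pointer and integer width of the platform's data model. A throwaway C sketch, assuming an LP64 system like x86-64 Linux (where this prints 32/64/64; 64-bit Windows is LLP64 and would print 32 for long):

```c
/* Minimal sketch: the "bitness" software actually sees is the data model,
 * not the ALU or bus width. Assumes an LP64 platform such as x86-64 Linux. */
#include <stdio.h>

int main(void) {
    printf("int:    %zu bits\n", 8 * sizeof(int));
    printf("long:   %zu bits\n", 8 * sizeof(long));
    printf("void *: %zu bits\n", 8 * sizeof(void *));
    return 0;
}
```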

[–] LeFantome@programming.dev 18 points 5 months ago* (last edited 5 months ago)

I would say that you make a decent argument that the ALU has the strongest claim to the “bitness” of a CPU. In that way, we are already beyond 64 bit.

For me though, what really defines a CPU is the software that runs natively. The Zen4 runs software written for the AMD64 family of processors. That is, it runs 64-bit software. This software will not run on the “32-bit” x86 processors that came before it ( like the K5, K6, and original Athlon ). If AMD released the AMD128 instruction set, it would not run on the Zen4 even though it might technically have enough hardware to do so.

The Motorola 68000 only had a 16-bit ALU but was able to run the same 32-bit software that ran on later Motorola processors that were truly 32-bit. Software written for the 68000 was essentially still native on processors sold as late as 2014 ( 35 years after the 68000 was released ). This was not some kind of compatibility mode; these processors were still using the same 32-bit ISA.

The Linux kernel that runs on the Zen4 will also run on 64 bit machines made 20 years ago as they also support the amd64 / x86-64 ISA.

Where the article is correct is that there does not seem to be much push to move on from 64 bit software. The Zen4 supports instructions to perform higher-bit operations but they are optional. Most applications do not rely on them, including the operating system. For the most part, the Zen4 runs the same software as the Opteron ( released in 2003 ). The same pre-compiled Linux distro will run on both.
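
That "optional" part is visible in how portable amd64 binaries are built: they probe for the wider instructions at runtime and fall back to plain 64-bit code, so the baseline ISA never changes. A rough sketch using the GCC/Clang CPU-detection builtins (assuming an x86-64 target):

```c
/* Sketch: the same amd64 binary can check for optional wide instructions
 * at runtime and fall back to baseline 64-bit code if they're missing.
 * Assumes GCC or Clang on x86-64. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();
    printf("avx2:    %s\n", __builtin_cpu_supports("avx2")    ? "available" : "not available");
    printf("avx512f: %s\n", __builtin_cpu_supports("avx512f") ? "available" : "not available");
    return 0;
}
```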

[–] Blackmist@feddit.uk 14 points 5 months ago (1 children)

I gave up trying to figure out what the "bitness" of CPUs was around the time the Atari Jaguar came out and people described it as 64-bit because it had a 32-bit graphics chip plus a 32-bit sound chip.

It's been mostly marketing bollocks since forever.

[–] Buffalox@lemmy.world 13 points 5 months ago* (last edited 5 months ago) (3 children)

At less than a tenth the size, this is actually a better explanation than the article, and it corrects the premise right at the start: we do.
If you absolutely had to put a bit width on the Zen 4, the 2 x 128-bit data bus is probably the best single measure, totaling 256 bits IMO.

[–] just_another_person@lemmy.world 129 points 5 months ago (5 children)

Is this a question?

We haven't even come close to exhausting 64-bit addresses yet. If you think the bit number makes things faster, it's technically the opposite.

[–] jwr1@kbin.earth 93 points 5 months ago (1 children)

It's a link to an article I found interesting. It basically details why we're still using 64-bit CPUs, just as you mentioned.

[–] fmstrat@lemmy.nowsci.com 19 points 5 months ago

Comment OP must never learn anything new. Good find.

[–] Technus@lemmy.zip 67 points 5 months ago (10 children)

We don't even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anything between 40 and 52 depending on the specific configuration.
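
Easy enough to see for yourself: print a few pointers and look at how many of the 64 bits are actually in use. A quick sketch (assuming a typical x86-64 Linux userspace; exact limits vary by OS and configuration):

```c
/* Quick look at how much of a 64-bit address is really used: on a typical
 * x86-64 Linux userspace these print with the top bits zero, well inside
 * the 48-bit range. Exact layouts vary by OS and configuration. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int on_stack;
    void *on_heap = malloc(16);
    printf("heap  pointer: 0x%016" PRIxPTR "\n", (uintptr_t)on_heap);
    printf("stack pointer: 0x%016" PRIxPTR "\n", (uintptr_t)&on_stack);
    free(on_heap);
    return 0;
}
```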

[–] Cethin@lemmy.zip 35 points 5 months ago (2 children)

Yeah, 64-bit handles almost all the use cases we have. Sometimes we want double the precision (a double) or length (a long), but we can do that without being 128-bit. It's harder to do half. Sure, it'd be slightly faster for some things, but it's not significant.
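
Right, and the wider math is just a software problem once you have 64-bit pieces to build it from. A minimal sketch of 128-bit addition out of two 64-bit halves plus a carry (GCC and Clang also offer an unsigned __int128 extension that compiles down to roughly this):

```c
/* Minimal sketch: 128-bit addition on a 64-bit machine, using two 64-bit
 * limbs and a carry. Wider types don't need wider hardware. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

typedef struct { uint64_t lo, hi; } u128;

static u128 add128(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };  /* 2^64 - 1 */
    u128 b = { 1, 0 };
    u128 s = add128(a, b);       /* 2^64, i.e. hi = 1, lo = 0 */
    printf("hi = %" PRIu64 ", lo = %" PRIu64 "\n", s.hi, s.lo);
    return 0;
}
```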

[–] sugar_in_your_tea@sh.itjust.works 22 points 5 months ago (1 children)

And you can get 128-bit data to the CPU, so those things can be fast if we need them to be.

[–] henfredemars@infosec.pub 21 points 5 months ago

And we have wide instructions that can process this data, such as for multimedia applications.

Addressing and memory size has been the historic motivator for wider registers, but it’s probably not going to be in my lifetime that I see the need for 128.
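
Those wide instructions work on separate vector registers rather than making the whole CPU "wider". A rough sketch with AVX2 intrinsics, where one instruction adds four 64-bit lanes at once (assuming GCC or Clang on an AVX2-capable x86-64 machine, compiled with -mavx2):

```c
/* Rough sketch: one AVX2 add handles four 64-bit values at a time.
 * Assumes an AVX2-capable x86-64 CPU; build with -mavx2. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <immintrin.h>

int main(void) {
    int64_t a[4] = {1, 2, 3, 4};
    int64_t b[4] = {10, 20, 30, 40};
    int64_t c[4];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi64(va, vb);   /* 4 x 64-bit adds in one instruction */
    _mm256_storeu_si256((__m256i *)c, vc);

    printf("%" PRId64 " %" PRId64 " %" PRId64 " %" PRId64 "\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```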

[–] Voroxpete@sh.itjust.works 34 points 5 months ago (2 children)

Is this a question?

For the people who don't know the answer? Yes.

Not everything you see is intended for your consumption. Let people enjoy learning things.

[–] Cocodapuf@lemmy.world 15 points 5 months ago* (last edited 5 months ago) (1 children)

I totally agree. I know a teacher who likes to say:

"I believe there really is no such thing as a dumb question. As long as it's an honest question (not rhetorical or sarcastic), then it's a genuine request for more information. So even if it's coming from a place of extreme ignorance, asking a question is an attempt to learn something, and the effort should be applauded."

[–] hades@lemm.ee 114 points 5 months ago (6 children)

We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don't we drive eight-wheel vehicles?

[–] TonyTonyChopper@mander.xyz 91 points 5 months ago (3 children)

Lobbying by the auto corporations obviously. More wheels is more better

[–] kayazere@feddit.nl 58 points 5 months ago (6 children)

Funny how we are moving back to bicycles, as cars aren’t a scalable solution.

[–] borari@lemmy.dbzer0.com 43 points 5 months ago (1 children)

Some of us drive 18-wheeled vehicles.

[–] Liz@midwest.social 19 points 5 months ago

See here's where this analogy is perfect. Sometimes a bicycle is the best solution, just like how sometimes a microcontroller is the best solution. You use the tool you need for the job, and American product design is creating way too many "smart" products just like how American town planning demands too many cars. Bring back the microcontroller! Bring back the bike!

[–] ArbiterXero@lemmy.world 64 points 5 months ago (4 children)

32-bit CPUs having difficulty accessing more than 4 GB of memory was exclusively a Windows problem.

[–] aard@kyu.de 43 points 5 months ago (7 children)

You still had a 4 GB memory limit per process, as well as a total memory limit of 64 GB. The first one especially was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.
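
The numbers fall straight out of the address widths: 32-bit virtual addresses give each process 2^32 bytes, and PAE's 36-bit physical addressing tops out at 2^36 bytes. A trivial sketch of that arithmetic:

```c
/* The 32-bit era limits, computed from the address widths:
 * 2^32 bytes of virtual space per process, 2^36 bytes physical with PAE. */
#include <stdio.h>

int main(void) {
    unsigned long long per_process = 1ULL << 32;  /* 32-bit virtual addresses */
    unsigned long long pae_total   = 1ULL << 36;  /* 36-bit physical with PAE */
    printf("per process: %llu GiB\n", per_process >> 30);  /* 4  */
    printf("PAE total:   %llu GiB\n", pae_total   >> 30);  /* 64 */
    return 0;
}
```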

[–] amanda@aggregatet.org 15 points 5 months ago (4 children)

Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.

[–] pivot_root@lemmy.world 24 points 5 months ago* (last edited 5 months ago) (2 children)

Only slightly related, but here's the compiler flag to disable an arbitrary 2GB limit on x86 programs.

Finding the reason for its existence from a credible source isn't as easy, however. If you're fine with an explanation from StackOverflow, you can infer that it's there because some programs treat pointers as signed integers and die horribly when anything above 0x7FFFFFFF gets returned by the allocator.
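
For anyone who hasn't run into it, the bug class looks roughly like this contrived sketch (hypothetical addresses; the unsigned-to-signed conversion is implementation-defined, but on common compilers it wraps to a negative value):

```c
/* Contrived sketch of the failure mode: code that treats addresses as
 * signed 32-bit integers starts "seeing" negative pointers once
 * allocations land above 0x7FFFFFFF. Addresses here are made up. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t below_2gb = 0x7FFF0000u;
    uint32_t above_2gb = 0x80010000u;

    printf("below 2 GB as signed: %d\n", (int32_t)below_2gb);  /* positive */
    printf("above 2 GB as signed: %d\n", (int32_t)above_2gb);  /* negative */

    if ((int32_t)above_2gb < 0)
        printf("a 'pointer < 0' sanity check would wrongly reject this\n");
    return 0;
}
```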

[–] ArbiterXero@lemmy.world 17 points 5 months ago

Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

Also, the entire article comes down to simple math.

Bits are just the number of digits, only in base 2.

So a 4-digit number maxes out at 9,999, but an 8-digit number maxes out at 99,999,999.

So when you double the number of digits, the maximum value grows exponentially: it's 10^4 times bigger in this case. It just sounds small because you're only showing the exponent doubling.

10^4 is WAY smaller than 10^8.
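
Put concretely, a minimal sketch of that doubling (going from 4 to 8 decimal digits, or from 32 to 64 bits, squares the representable range rather than doubling it):

```c
/* Doubling the number of digits squares the maximum value. */
#include <stdio.h>

int main(void) {
    printf("4 decimal digits: %u\n", 9999u);
    printf("8 decimal digits: %u\n", 99999999u);
    printf("32-bit max: %llu\n", (1ULL << 32) - 1);  /* ~4.29e9  */
    printf("64-bit max: %llu\n", ~0ULL);             /* ~1.84e19 */
    return 0;
}
```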

[–] neclimdul@lemmy.world 15 points 5 months ago* (last edited 5 months ago) (1 children)

It was actually 3 GB, because operating systems have to reserve parts of the memory address space for other things. It's difficult for any 32-bit operating system to address above 4 GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and such. Windows actually had a way to switch it on in some versions too, probably the NT kernels that were also running on servers.

A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.

https://en.m.wikipedia.org/wiki/3_GB_barrier

[–] amanda@aggregatet.org 12 points 5 months ago (1 children)

Wow they just…disabled all RAM over 3 GB because some drivers had hard coded some mapped memory? Jfc

[–] ms_lane@lemmy.world 11 points 5 months ago

Only on consumer Windows.

Windows Server never had the problem. But wouldn't allow Creative Labs drivers to be installed either...

[–] Blue_Morpho@lemmy.world 14 points 5 months ago

I'm not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.

[–] Moobythegoldensock@lemm.ee 10 points 5 months ago

Not really, the Raspberry Pi had that same issue with its 32-bit distros.

[–] amanda@aggregatet.org 36 points 5 months ago (1 children)

The comments on this one really surprised me. I thought the kinds of people who hang out on XDA-developers were developers. I assumed that developers had a much better understanding of computer architecture than the people commenting (who of course may not be representative of all readers).

I also get the idea that the writer is being vague not to simplify but because they genuinely don’t know the details, which feels even worse.

[–] sandalbucket@lemmy.world 29 points 5 months ago (1 children)

I think it’s a D-tier article. I wouldn’t be surprised if it was half GPT. It could have been summarized in a single paragraph, but was clearly being drawn out to make screen real estate for the ads.

[–] irotsoma@lemmy.world 28 points 5 months ago (7 children)

Because computers haven't come even close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven't even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4 GB of memory without slow workarounds.

[–] vane@lemmy.world 21 points 5 months ago

Tell that to PlayStation 2 owners.

[–] mox@lemmy.sdf.org 15 points 5 months ago

John Mashey wrote about this nearly 30 years ago. This Usenet thread is worth a read.

[–] AmidFuror@fedia.io 13 points 5 months ago (4 children)

That would be like 6 Minute Abs.
