humblebun

joined 1 month ago
[–] humblebun@sh.itjust.works 2 points 1 week ago (1 children)

Because the person here is talking about their frustration and you're blaming them for it. Don't be like that. Empathize!

[–] humblebun@sh.itjust.works 4 points 1 week ago

> Nobody was good enough for anyone and nobody was leftist enough for everyone

Very true. Everybody is busy proving how right they are by highlighting how wrong everyone else is. This era of profilicity will exterminate leftist thought

[–] humblebun@sh.itjust.works 11 points 1 week ago

That's one thing I like about grandpa-style lefties and don't like about Lemmy: blaming Trumpists for wanting a change in the political system. The Lemmy and MAGA crowds share common fears about the future and have completely different solutions.

Why the fuck would one highlight the difference rather than what's held in common? People achieve unimaginable results when working together.

[–] humblebun@sh.itjust.works 27 points 1 week ago* (last edited 1 week ago) (1 children)

Mom, please take me home. Soft men made the times hard again 😭😭😭

[–] humblebun@sh.itjust.works 24 points 1 week ago

How do North Koreans win their country back?

How do Chinese win their country back?

How do Russians win their country back?

How do Hungarians win their country back?

How do Iranians win their country back?

How do South Koreans win their country back?

[–] humblebun@sh.itjust.works 9 points 1 week ago

Never liked them. A modern smartphone is convenient, but a keyboard would be nicer

[–] humblebun@sh.itjust.works 2 points 2 weeks ago (1 children)

While you describe how error correction works, there are other factors you fail to notice.

It is widely known that a physical qubit's T2 time decreases when you place it among others. The ultimate question here is: when you add qubits, can you overcome this extra decoherence with EC or not?

Say you want to build a QC with 1000 logical qubits, and you want to be sure the error rate doesn't exceed 0.01% after 1 second. You assemble it, and it turns out you're at 0.1%. You pick some simple code, say [[7,1]], and now you have to assemble a 7000-qubit chip to run the same 1000-qubit logic. You assemble it again, and the error rate is now higher (due to decoherence and crosstalk). The question is: how much higher? If the increase is smaller than your EC gain, then you just add a few more qubits, move to a [[15,2]] code, and you're good to go. But what if not?
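Here's a toy model of that trade-off. All the numbers are made up for illustration: the crosstalk penalty per chip doubling, the distance-3 code prefactor, and the 7x qubit cost per concatenation level are assumptions, not measurements.

```python
import math

P_TARGET = 1e-4     # wanted: 0.01% logical error rate
A = 0.1             # assumed distance-3 code prefactor: p_L ~ A * p^2
CROSSTALK = 0.05    # assumed fractional error increase per chip doubling

def physical_error(n_qubits: int, p_base: float = 1e-3) -> float:
    """Assumed crosstalk model: every doubling of the chip beyond 1000
    qubits adds a fixed fractional penalty to the per-qubit error rate."""
    doublings = max(math.log2(n_qubits / 1000), 0)
    return p_base * (1 + CROSSTALK) ** doublings

def logical_error(p_phys: float, levels: int) -> float:
    """Concatenated distance-3 code: each level roughly squares the error
    (up to the prefactor A), at a 7x qubit cost per level."""
    p = p_phys
    for _ in range(levels):
        p = A * p ** 2
    return p

for levels in range(4):
    n = 1000 * 7 ** levels                 # physical qubits for 1000 logical
    p = logical_error(physical_error(n), levels)
    verdict = "OK" if p < P_TARGET else "not enough"
    print(f"levels={levels}  qubits={n:>7}  p_logical={p:.2e}  {verdict}")
```

With these made-up numbers EC wins easily; the whole question is whether the real crosstalk penalty stays small enough for that to hold on actual hardware.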

[–] humblebun@sh.itjust.works -2 points 2 weeks ago (3 children)

It was shown this year for what, 47 qubits? How can you be certain this will hold for millions and billions?

[–] humblebun@sh.itjust.works -2 points 2 weeks ago (5 children)

But who guarantees that EC will overcome the decoherence introduced by that number of qubits? Not a trivial question, and nobody can answer it for certain

[–] humblebun@sh.itjust.works -2 points 2 weeks ago (7 children)

> If qubits double every year

And then we need to increase the coherence time, which is 50 ms for the current 433-qubit chip. Error correction might work, but it might not
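Taking the doubling premise at face value, here's the back-of-the-envelope timeline (pure extrapolation, not a forecast):

```python
# If qubit counts double yearly starting from the 433-qubit chip above,
# how many doublings until the millions that EC schemes assume?
import math

start, target = 433, 1_000_000
print(f"~{math.log2(target / start):.1f} doublings/years")  # ~11.2
```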

[–] humblebun@sh.itjust.works 10 points 2 weeks ago

OK, I decided to dive into it again today, and look what I found:

  1. They're still demonstrating "supremacy" to each other by proving that their setups can't be simulated classically. These 433- and 1000-qubit processors are good for exactly one purpose: simulating themselves.

  2. Photonic QCs still estimate the hafnian billions of times faster; if only this mathematical object turned out to have any practical meaning (a brute-force sketch of what the hafnian is follows below)

  3. They demonstrated that toric codes might be effective
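For reference, the hafnian those photonic (Gaussian boson sampling) experiments estimate is just a sum over perfect matchings. A brute-force sketch, exponential time and only feasible for tiny matrices:

```python
# Brute-force hafnian of a symmetric 2n x 2n matrix: the sum, over all
# perfect matchings of the indices, of the product of the matched entries.
# GBS hardware samples from hafnian-weighted distributions natively, which
# is exactly why no classical method keeps up at experimental sizes.

def hafnian(A):
    def rec(rest):
        if not rest:
            return 1
        first, tail = rest[0], rest[1:]
        # pair `first` with every remaining index and recurse on the rest
        return sum(A[first][j] * rec(tail[:k] + tail[k + 1:])
                   for k, j in enumerate(tail))
    return rec(tuple(range(len(A))))

# K4 with unit weights has exactly 3 perfect matchings:
print(hafnian([[1] * 4 for _ in range(4)]))  # -> 3
```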
