bunchberry

joined 4 months ago
[–] bunchberry@lemmy.world 2 points 2 months ago (1 children)

It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.
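To make the "discrete observations, continuous description" point concrete, here is a minimal sketch; the equal-superposition state is just a hypothetical example, and real amplitudes are complex-valued as noted above:

```python
import numpy as np

# A qubit state is a 2-component complex vector of probability amplitudes.
# Example state: equal superposition, amplitude 1/sqrt(2) for each outcome.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# The continuous part is epistemic: the Born rule turns amplitudes into
# probabilities, which are continuous values between 0 and 1.
probs = np.abs(state) ** 2   # [0.5, 0.5]

# What we actually observe is always discrete: 0 or 1, never in between.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(sorted(set(outcomes.tolist())))  # only the discrete values 0 and 1 appear
```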

[–] bunchberry@lemmy.world 2 points 2 months ago

You don't have to be sorry, that was stupid of me to write that.

[–] bunchberry@lemmy.world 1 points 2 months ago (1 children)

> Because the same functionality would be available as a cloud service (like AI now). This reduces costs and the need to carry liquid nitrogen around.


Okay, you are just misrepresenting my argument at this point.

[–] bunchberry@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (5 children)

Why are you isolating a single algorithm? There are tons of algorithms that speed up various aspects of linear algebra, not just that one, and there have been many improvements to these algorithms since they were first introduced; there is a lot more in the literature than just in the popular consciousness.

The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

If they would offer speed benefits, then why wouldn't you want the chip that offers those benefits in your phone? Of course, in practical terms, we likely will not have this due to the difficulty and expense of quantum chips, and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that even if consumers could somehow have access to technology in their phones that offered performance benefits to their software, they wouldn't want it.

That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there's no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don't see why consumers wouldn't want it.
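The dispatch pattern described above, sending only the operations an accelerator is good at to the accelerator and falling back to the CPU otherwise, can be sketched like this; `cupy` is just an illustrative choice of GPU library here, and the same shape applies to any accelerator backend:

```python
import numpy as np

# Use the GPU backend if one is available, otherwise fall back to the CPU.
try:
    import cupy as xp          # GPU-accelerated array library, if installed
    ON_GPU = True
except ImportError:
    import numpy as xp         # plain CPU fallback
    ON_GPU = False

def matmul(a, b):
    """Run the matrix product on whichever backend is available."""
    return xp.asarray(a) @ xp.asarray(b)

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
result = matmul(a, b)
print(result.shape)  # (2, 4)
```

A library built this way only pays the cost of the special hardware for the calculations that benefit from it, which is exactly the role a hypothetical quantum coprocessor would play.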

I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won't outweigh the cost anytime soon.

[–] bunchberry@lemmy.world 11 points 2 months ago* (last edited 2 months ago) (10 children)

Uh... one of those algorithms in your list is literally for speeding up linear algebra. Do you think just because it sounds technical it's "businessy"? All modern technology is technical; that's what technology is. It would be like someone saying, "GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?" Many of the algorithms here offer potential speedups for linear algebra operations, which are the basis of both graphics and AI. One of the algorithms in that list is even for machine learning, and there are various others in the literature for potentially speeding up matrix multiplication. It's huge for regular consumers... assuming the technology could ever progress to come to regular consumers.

[–] bunchberry@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (1 children)

> A person who would state they fully understand quantum mechanics is the last person i would trust to have any understanding of it.

I find this sentiment can devolve into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics cannot be made sense of at all, and it then logically follows that people who speak in ways that do not make sense, and who have no expertise in the subject and so do not even claim to make sense, are the more reliable sources.

It's really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate what they have to offer. To me, the joy of a mystery is not to revel in the mystery, but to search for solutions to it, and I will say the academic literature is filled with pretty good accounts of QM these days. It's been around for a century; a lot of ideas are very developed.

I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are "looking," which is simply not the case. You end up with very bizarre and misleading results from this, for example, in the part where you land on the quantum moon and have to look at a picture of it so it does not disappear while your vision is obscured by fog. This makes no sense in light of real physics: the fog is still part of the moon, and your ship is still interacting with the fog, so there is no reason the moon should hop somewhere else.

> Now quantum science isn’t exactly philosophy, ive always been interested in philosophy but its by studying quantum mechanics, inspired by that game that i learned about the mechanic of emerging properties. I think on a video about the dual slit experiment.

The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe the path the particle takes through the slits, the interference pattern disappears. Yet, you can also trivially prove in a few lines of calculation that if the particle interacts with even a single other particle as it passes through the two slits, that interaction would likewise destroy the interference effects.

You model this by computing what is called a density matrix for the joint system of the particle going through the two slits and the particle it interacts with, and then performing what is called a partial trace, whereby you "trace out" the second particle, giving you a reduced density matrix of only the particle that passes through the slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e., it decoheres and thus loses the ability to interfere with itself.

If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.
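Those "few lines of calculation" can literally be done numerically. This sketch builds the density matrix for a path qubit, entangles it with one other particle, and takes the partial trace; the off-diagonal (coherence) terms of the reduced density matrix come out zero:

```python
import numpy as np

# Path qubit in an equal superposition of the two slits: (|0> + |1>)/sqrt(2)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Density matrix of the isolated particle: the off-diagonal (coherence)
# terms are 1/2, so it can interfere with itself.
rho_isolated = np.outer(plus, plus.conj())

# Let it interact with ONE other particle so the joint state becomes
# the entangled state (|00> + |11>)/sqrt(2).
joint = np.zeros(4, dtype=complex)
joint[0b00] = 1 / np.sqrt(2)
joint[0b11] = 1 / np.sqrt(2)
rho_joint = np.outer(joint, joint.conj())

# Partial trace over the second particle: reshape the 4x4 joint density
# matrix into indices [i, j, k, l] and sum over the second particle (j = l).
rho_reduced = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_isolated.real, 3))  # off-diagonals 0.5: coherent
print(np.round(rho_reduced.real, 3))   # off-diagonals 0:   decohered
```

The reduced state is the maximally mixed matrix I/2, i.e., a particle that behaves like a classical coin flip with no interference, and no human ever had to look at anything.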

> At that point i did not yet know that emergence was already a known topic in philosophy just quantum science, because i still tried to avoid external influences but it really was the breakthrough I needed and i have gained many new insights from this knowledge since.

Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn't be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.

My views when it comes to philosophy are pretty fringe as most academics believe the human brain can transcend reality and I reject this notion, and I find most philosophy falls right into place if you reject this notion. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don't entirely not engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.

[–] bunchberry@lemmy.world 1 points 3 months ago

This is why many philosophers came to criticize metaphysical logic in the 1800s, viewing it as dealing with absolutes when reality does not actually exist in absolutes, and arguing that we need some other logical system which could deal with the "fuzziness" of reality more accurately. That was the origin of the notion of dialectical logic from philosophers like Hegel and Engels, which caught on with some popularity in the east but was then mostly forgotten in the west outside of some fringe sections of academia. Even long prior to Bell's theorem, the physicist Dmitry Blokhintsev, who adhered to this dialectical materialist mode of thought, wrote a whole book on quantum mechanics, in the first part of which he discusses the need to abandon the false illusion of the rigidity and concreteness of reality, showing that this is an illusion even in the classical sciences, where everything has uncertainty, all predictions eventually break down, and it is never actually possible to fully separate a system from its environment. These kinds of views heavily influenced the contemporary physicist Carlo Rovelli as well.

[–] bunchberry@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (1 children)

> There 100% are…

If you choose to believe so, like I said I don't really care. Is a quantum computer conscious? I think it's a bit irrelevant whether or not they exist. I will concede they do for the sake of discussion.

> Penrose thinks they’re responsible for consciousness.

Yeah, and as I said, Penrose was wrong, not because the measurement problem isn't the cause of consciousness, but because there is no measurement problem nor a "hard problem." Penrose falls for the same logical fallacies I pointed out, coming to believe there are two problems where none actually exist. Because both problems originate from the same fallacies, he notices they are similar and thinks "solving" one is necessary for "solving" the other, when neither problem existed in the first place.

> Because we also don’t know what makes anesthesia stop consciousness. And anesthesia stops consciousness and stops the quantum process.

You'd need to define what you mean more specifically about "consciousness" and "quantum process." We don't remember things that occur when we're under anesthesia, so are we saying memory is consciousness?

> Now, the math isn’t clean. I forget which way it leans, but I think it’s that consciousness kicks out a little before the quantum action is fully inhibited? It’s been a minute, and this shit isn’t simple.

Sure, it's not simple, because the notion of "consciousness" as used in philosophy is a very vague and slippery word with hundreds of different meanings depending on the context, and this makes it seem "mysterious" as its meaning is slippery and can change from context to context, making it difficult to pin down what is even being talked about.

Yet, if you pin it down, if you are actually specific about what you mean, then you don't run into any confusion. The "hard problem of consciousness" is not even a "problem" as a "problem" implies you want to solve it, and most philosophers who advocate for it like David Chalmers, well, advocate for it. They spend their whole career arguing in favor of its existence and then using it as a basis for their own dualistic philosophy. It is thus a hard axiom of consciousness and not a hard problem. I simply disagree with the axioms.

Penrose is an odd case because he accepts the axioms and then carries that same thinking into QM where the same contradiction re-emerges but actually thinks it is somehow solvable. What is a "measurement" if not an "observation," and what is an "observation" if not an "experience"? The same "measurement problem" is just a reflection of the very same "hard problem" about the supposed "phenomenality" of experience and the explanatory gap between what we actually experience and what supposedly exists beyond it.

> It’s the quantum wave function collapse that’s important.

Why should I believe there is a physical collapse? This requires you to, again, posit that there physically exists something that lies beyond all possibilities of us ever observing it (paralleling Kant's "noumenon") which suddenly transforms itself into something we can actually observe the moment we try to look at it (paralleling Kant's "phenomenon"). This clearly introduces an explanatory gap as to how this process occurs, which is the basis of the measurement problem in the first place.

There is no reason to posit a physical "collapse" or even that there exists at all a realm of waves floating about in Hilbert space. These are unnecessary metaphysical assumptions that are purely philosophical and contribute nothing but confusion to an understanding of the mathematics of the theory. Again, just like Chalmers' so-called "hard problem," Penrose is inventing a problem to solve which we have no reason to believe is even a problem in the first place: nothing about quantum theory demands that you believe particles really turn into invisible waves in Hilbert space when you aren't looking at them and suddenly turn back into visible particles in spacetime when you do look at them.

That's entirely metaphysical and arbitrary to believe in.

> There’s no spinning out where multiple things happen, there is only one thing. After wave collapse, is when you look in the box and see if the cats dead. In a sense it’s the literal “observer effect” happening our head. And that is probably what consciousness is.

There is only an "observer effect" if you believe the cat literally did turn into a wave and you perturbed that wave by looking at it, causing it to "collapse" like a house of cards. What did the cat see from its perspective? How did it feel for the cat to turn into a wave? The whole point of Schrödinger's cat thought experiment was that Schrödinger was arguing against believing particles really turn into waves, because then you would have to believe unreasonable things like cats turning into waves.

All of this is entirely metaphysical; there are no observations that can confirm this interpretation. You can only justify the claim that cats literally turn into waves when you don't look at them, and that there is a physical collapse of that wave when you do look at them, on purely philosophical grounds. It is not demanded by the theory at all. You choose to believe it purely on philosophical grounds, which then leads you to think there is some "problem" with the theory that needs to be "solved," but it is purely metaphysical.

There is no actual contradiction between theory and evidence/observation, only contradiction between people's metaphysical assumptions that they refuse to question for some reason and what they a priori think the theory should be, rather than just rethinking their assumptions.

> That’s how science works. Most won’t know who Penrose is till he’s dead.

I'd hardly consider what Penrose is doing to be "science" at all. All these physical "theories of consciousness" that purport not to just be explaining intelligence or self-awareness or things like that, but more specifically claim to be solving Chalmers' hard axiom of consciousness (that humans possess some immaterial invisible substance that is somehow attached to the brain but is not the brain itself), are all pseudoscience, because they are beginning with an unreasonable axiom which we have no scientific reason at all to take seriously and then trying to use science to "solve" it.

It is no different than claiming to use science to try and answer the question as to why humans have souls. Any "scientific" approach you use to try and answer that question is inherently pseudoscience because the axiomatic premise itself is flawed: it would be trying to solve a problem it never established is even a problem to be solved in the first place.

[–] bunchberry@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (3 children)

> Roger Penrose is pretty much the only dude looking into consciousness from the perspective of a physicist

I would recommend reading the philosophers Jocelyn Benoist and Francois-Igor Pris who argue very convincingly that both the "hard problem of consciousness" and the "measurement problem" stem from the same logical fallacies of conflating subjectivity (or sometimes called phenomenality) with contextuality, and that both disappear when you make this distinction, and so neither are actually problems for physics to solve but are caused by fallacious reasoning in some of our a priori assumptions about the properties of reality.

Benoist's book Toward a Contextual Realism and Pris' book Contextual Realism and Quantum Mechanics both cover this really well. They are based in late Wittgensteinian philosophy, so maybe reading Saul Kripke's Wittgenstein on Rules and Private Language is a good primer.

> That’s the only way free will could exist...What would give humans free will would be the inherent randomness if the whole “quantum bubble collapse” was a fundamental part of consciousness.

Even if they discover quantum phenomena in the brain, all that would show is that our brain is like a quantum computer. But nobody would argue quantum computers have free will, would they? People often conflate the determinism/free will debate with the debate over Laplacian determinism specifically, which should not be conflated, since randomness clearly has nothing to do with the question of free will.

If the state forced everyone into a job for life the moment they turned 18, but they chose that job using a quantum random number generator, would it be "free"? Obviously not. But we can also look at it in the reverse sense. If there was a God that knew every decision you were going to make, would that negate free will? Not necessarily. Just because something knows your decision ahead of time doesn't necessarily mean you did not make that decision yourself.

The determinism/free will debate is ultimately about whether or not human decisions are reducible to the laws of physics or not. Even if there is quantum phenomena in the brain that plays a real role in decision making, our decisions would still be reducible to the laws of physics and thus determined by them. Quantum mechanics is still deterministic in the nomological sense of the word, meaning, determinism according to the laws of physics. It is just not deterministic in the absolute Laplacian sense of the word that says you can predict the future with certainty if you knew all properties of all systems in the present.

> If the conditions are exactly the same down to an atomic level… You’ll get the same results every time

I think a distinction should be made between Laplacian determinism and fatalism (not sure if there's a better word for the latter category). The difference here is that both claim there is only one future, but only the former claims the future is perfectly predictable from the states of things at present. So fatalism is less strict: even in quantum mechanics that is random, there is a single outcome that is "fated to be," but you could never predict it ahead of time.

Unless you subscribe to the Many Worlds Interpretation, I think you kind of have to accept a fatalistic position in regards to quantum mechanics, due mainly not to quantum mechanics itself but to special relativity. In special relativity, different observers see time passing at different rates. You can thus build a time machine that takes you into the future just by traveling near the speed of light, then turning around and coming back home.

The only way it is even possible for there to be different reference frames that see time pass differently is if the future already, in some sense, pre-exists. This is sometimes known as the "block universe," which suggests that the future, present, and past are all equally "real" in some sense. For the future to be real, there has to be an outcome of each quantum random event already "decided," so to speak. Quantum mechanics is nomologically deterministic in the sense that it describes nature as reducible to the laws of physics, but not deterministic in the Laplacian sense that you could predict the future with certainty even in principle. It is more comparable to fatalism: there is a single outcome fated to be (again, unless you subscribe to MWI), but it is impossible to know ahead of time.

[–] bunchberry@lemmy.world 0 points 4 months ago

If our technology is limited so that we can never see beyond something, why even propose it exists? Bell's theorem also demonstrates that if you do add hidden parameters, they would have to violate Lorentz invariance, meaning they would contradict the predictions of our current best theories of the universe, like GR and QFT. Even as pure speculation it's rather dubious, as there's no evidence that Lorentz invariance is ever violated.

[–] bunchberry@lemmy.world 13 points 4 months ago

My issue is similar: each "layer" of simulation would necessarily be far simpler than the layer in which the simulation is built, so complexity would drop off exponentially, such that even an incredibly complex universe could not support conscious beings in simulations more than a few layers deep. You could imagine that the initial universe is so much more complex than our own that it could support millions of layers, but at that point you're just guessing, as we have no reason to believe there is even a single layer above our own, and the whole notion that "we're more likely to be in a simulation than not" ceases to be true. You can't actually put a number on it, or even a vague description like "more likely." It's ultimately a guess.

[–] bunchberry@lemmy.world 2 points 4 months ago* (last edited 4 months ago) (1 children)

I have never understood the argument that QM is evidence for a simulation because the universe is saving resources by not "rendering" things at that low a level. The problem is that, yes, it's probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity can grow exponentially faster than that of classical systems. That's the whole reason for building quantum computers: it is so much more computationally expensive to simulate this that it is more efficient just to have a machine that does it. The laws of physics at a fundamental level become far more complex and far more computationally expensive, not the reverse.
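A back-of-the-envelope sketch of that exponential cost: storing the full state vector of n entangled qubits takes 2**n complex amplitudes, which blows past any plausible "resource saving" almost immediately:

```python
# The state of n entangled qubits needs 2**n complex amplitudes, while n
# classical bits need just n values. Memory assuming 16 bytes per complex128:
for n in (10, 20, 30, 40):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{gigabytes:.3g} GB)")
```

Already at around 50 qubits the state vector no longer fits in any existing classical computer's memory, which is the opposite of what a resource-saving "renderer" would choose.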
