This article is 9 years old. Here's the [OpenWorm Wikipedia page](https://en.wikipedia.org/wiki/OpenWorm).

Edit: they still haven't mapped the brain, but here's the [official site](https://openworm.org/) and [the GitHub](https://github.com/openworm/OpenWorm).
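For a sense of what "simulating the worm" even means, here's a minimal toy sketch of the underlying idea - a handful of neurons wired into a directed graph, each firing when its accumulated input crosses a threshold. To be clear, this is not OpenWorm's actual code (their simulation uses detailed biophysical models and body physics); the neuron names, weights, and thresholds below are all invented for illustration.

```python
# Toy connectome simulation: leaky integrate-and-fire neurons on a directed graph.
# Illustrative sketch only - NOT OpenWorm's model, which uses detailed biophysics.

# Hypothetical mini-connectome: (source, target, synaptic weight)
SYNAPSES = [
    ("sensor_nose", "inter_1", 0.8),
    ("sensor_nose", "inter_2", 0.4),
    ("inter_1", "motor_fwd", 0.9),
    ("inter_2", "motor_rev", 0.7),
    ("inter_1", "inter_2", -0.5),  # inhibitory connection
]
THRESHOLD = 0.5
DECAY = 0.9  # membrane potential leaks toward zero each tick

def step(potentials, fired):
    """Advance one tick: decay potentials, deliver spikes, detect new firings."""
    nxt = {n: v * DECAY for n, v in potentials.items()}
    for src, dst, w in SYNAPSES:
        if src in fired:
            nxt[dst] = nxt.get(dst, 0.0) + w
    newly_fired = {n for n, v in nxt.items() if v >= THRESHOLD}
    for n in newly_fired:  # reset membrane potential after firing
        nxt[n] = 0.0
    return nxt, newly_fired

potentials = {}
fired = {"sensor_nose"}  # stimulate the sensory neuron at t=0
for t in range(5):
    potentials, fired = step(potentials, fired)
    print(f"t={t}: fired {sorted(fired) or '-'}")
```

Scale that up from five made-up neurons to C. elegans's 302 real ones (with real dynamics instead of a threshold rule) and you have, roughly, the problem OpenWorm is chewing on.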
Well, that sent me down an interesting but short ~~rabbit hole~~ wormhole, ending here. Glad to see I'm not alone in thinking that most forms of consciousness copying or transfer that get discussed actually involve the murder/death of the original, even if the resulting copy believes itself to be the same entity and the people around it treat it as such.
I'd absolutely be one of those "I ain't getting in that transporter" people on Star Trek unless convinced that it truly was a transfer of consciousness, not a copy-and-destroy.

Mind you, I'd love for that not to be the case, and would love to be convinced otherwise. It sometimes kills my enjoyment of stories centered around that sort of technology.
> Mind uploading may potentially be accomplished by either of two methods: copy-and-upload, or **copy-and-delete by gradual replacement of neurons** (which can be considered as a gradual destructive uploading), until the original organic brain no longer exists and a computer program emulating the brain takes control of the body.

Oddly, the bolded ship-of-Theseus kind of approach doesn't bother me as much - maybe because it feels akin to the continuous death and replacement of individual cells - but if challenged, I might have a hard time defending why it bothers me so much less than the Transporter or even the Altered Carbon approach.
You're coming at this from a slightly askew angle. Consciousness is holographic - that is, it's complex behavior arising from the interactions of an even more complex system. There's nothing "more" to it than what we see. The transporters from Star Trek, which destroy and then reproduce you exactly, would change nothing about your experience. You're just a complex arrangement of atoms, and it doesn't matter where that arrangement occurs so long as it's unique. There is no "you"; there's just "stuff" stuck together in a way that lets it think that it can think about itself. A perfect reproduction would result in the same entity, perfectly reproduced.
The physical world is the hologram.
Between saccades, fnords, and confabulation, I don’t trust a single thing my senses tell me. But the one thing I know for sure is that I’m conscious.
So, knowing that only consciousness is “real”, why would I assume it can be recreated through atoms (which are a mere hallucination)?
Ah, but how do you know you're conscious?
To quote Searle: Should I pinch myself and report the results in the Journal of Philosophy?
Alas, philosophers answer questions about the interrelation of minds, but not what a mind actually, chemically, is. They can extemporize at great length on the tendencies of a mind, the definition of consciousness, the value of thought, the many, many vagaries of morality. They cannot, unfortunately, sit down and draw a picture of a mind. Many good and important questions can be answered by philosophers, but not every problem can or should be assessed with the tools they have.
You may be conscious, and you may have many long and deeply opinionated thoughts about what it means to be conscious, and how you can know that you are in fact conscious, but you cannot tell me what consciousness looks like. And to be perfectly honest, I don't really care.
I don't know if you've ever done this, but you should sometime present an engineer with the trolley problem. I've done this many times, and the invariable result is that they will ask endless questions to establish the parameters and present endless solutions within those parameters so that nobody has to die at all. It is, in short, a problem. Not an ontological tool for unlocking hidden understanding, which falls under the purview of your 'philosophy', but a practical problem. Like how you're going to prevent some big mean mother-hubbard from tying you to the hypothetically metaphorical trolley tracks. And the solution? Is a gun. And if that don't work, use more gun. Like this heavy caliber tripod-mounted little old number designed by me. Built, by me.
And you best hope, not pointed at you.
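In that spirit, a tongue-in-cheek sketch of what the engineer's trolley problem actually looks like once you let them renegotiate the parameters: enumerate interventions, reject anything with casualties, and minimize effort among what's left. Every action and cost here is invented for the joke.

```python
# The engineer's trolley problem: not an ethics puzzle, an optimization problem.
# All interventions, casualty counts, and effort scores are invented for illustration.

INTERVENTIONS = {
    # action: (casualties, effort)
    "do_nothing":            (5, 0),
    "pull_lever":            (1, 1),
    "untie_everyone":        (0, 8),
    "derail_trolley_safely": (0, 5),
    "brakes_via_radio":      (0, 3),
    "build_sentry_gun":      (0, 9),  # deters future track-tying incidents
}

def engineer_solution(options):
    """Reject any plan with casualties; among the rest, minimize effort."""
    survivable = {a: effort for a, (deaths, effort) in options.items() if deaths == 0}
    if not survivable:
        raise ValueError("unacceptable parameters - renegotiate the problem")
    return min(survivable, key=survivable.get)

print(engineer_solution(INTERVENTIONS))  # -> brakes_via_radio
```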
You’re presupposing the superiority of science. What good is knowing the chemical composition of a mind, if such chemicals are but shadows on the cave wall?
You can’t actually witness a rock, in its full objective “rock-ness”. You can only witness yourself perceiving the rock. I call this the Principle of Objective Things in Space.
Admittedly, the study of consciousness is still in its infancy, especially compared to the study of the physical world. But it would be foolish to discard the entire concept when it is unavoidably fundamental. Suppose we do invent teleporters and they do erase consciousness. Doesn't it say something about the peril of worshipping quantification over all else, that we wouldn't even know until we had already teleported all of our bread? The entire field is babies. I am heavy ideas guy and this is my PoOTiS.
(I am absolutely going to steal the Principle of Objective Things in Space, that's wonderful.)
There's a drive philosophers have, to question why things are the way they are, through a very specific lens. Why is it wrong to push a fat man onto the trolley tracks, if his death would save six others? Why is there a difference between the perception of the shadows and the perception of the man with the shadow puppets? Does free will exist, and why does that matter?
These are all the pursuit of meaning, and while they are noble and important questions to ask, they are not questions driven by the pursuit of understanding. Philosophy depends on assumptions about the world that are taken to be incontrovertible, and builds its conclusions from there. The capacity for choice is a classic example, as is the assumption of a causal universe, and though these are quite reasonable things to assume in most cases, it can get mind-bleedingly aggravating when philosophers apply the same approach to pure fields like mathematics, which require rigorous establishment of assumptions before any valid truth value can be derived.
Which is not to attack philosophers; I want to be clear about that. I bring this up just to emphasize that there are differences in thought between the two disciplines (that those differences occasionally make me want to brain them with a chair is unrelated to the topic at hand). The philosophical study of, and speculation on, the nature of consciousness is perhaps the single oldest field of inquiry humanity has. And while the debate has raged for literal ages, we haven't really gotten anywhere with it.
And then, recently, scientists (especially computer scientists, but many other fields as well) have shown up and gone "hey look, we can see what the brain looks like, we know how the discrete parts work, we can even simulate it! Look, we've got the behavior right here, and... well, maybe... when we get right down to it, it's just not all that deep?" And philosophers have embraced this, enfolded it into their considerations, accepted it as valid work... and then kept right on asking the exact same questions.
The truth is, as far as I've been able to study it, that 'consciousness' is a meaningless term. We haven't been able to define it in ten thousand years of sitting around stroking our beards, because it's predicated on assumptions that turn out to be, fundamentally, meaningless: that there is another layer of abstraction, that there's a point or meaning to consciousness, or any of the rest of the Theory of Mind. And I think it's just too hard to accept that, maybe, it all... doesn't matter. That we haven't found any answers not because the question is somehow unanswerable, but because the question was asked in a context that invalidates the entire premise. It's the philosophical equivalent of 'null'.
Sufficiently complex networks can compute and self-reference, and it turns out that when you do that enough, they'll start referencing The Self (or whatever you'd like to call it). There's no deeper meaning, no hidden truth. There's just that, on a machine, a simulation can be run that can think about itself.
Everything else is just... ontological window dressing. Syntactic sugar for the teenage soul.
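The "networks that reference The Self" claim is easy to gesture at in code. Here's a minimal sketch - the class name and structure are entirely invented, and this is self-reference in the computational sense only, not a claim about consciousness: a system that maintains a model of its own state, updates that model as it runs, and answers questions about itself by consulting it.

```python
# A minimal self-referencing system: it maintains and queries a model of itself.
# Purely illustrative - names and structure invented; computational self-reference
# only, no metaphysical layer implied.

class SelfModelingAgent:
    def __init__(self):
        self.ticks = 0  # the system's actual state
        self.self_model = {"name": "agent", "ticks_believed": 0}  # its model of itself

    def act(self):
        """Do some 'work', then update the internal model of the self."""
        self.ticks += 1
        self.self_model["ticks_believed"] = self.ticks  # the introspection step

    def report(self):
        """Answer a question about itself by consulting its self-model."""
        m = self.self_model
        return f"I am {m['name']} and I believe I have acted {m['ticks_believed']} times."

agent = SelfModelingAgent()
for _ in range(3):
    agent.act()
print(agent.report())  # -> I am agent and I believe I have acted 3 times.
```

The interesting bit is that nothing "extra" is needed: the self-model is just more state, updated by the same mechanics as everything else.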
Consciousness does not exist outside the physical world (nothing does), so why would you remove it from the study of the physical world?

Why would an exact replica not have all the same properties, including consciousness? Or is this just an extraordinary claim without evidence?