After trillions of experiments carried out by billions of volunteers over the course of thousands of years, we come to the inescapable conclusion that consciousness is the result of brain chemistry, and that everyone else - Cartesian Dualists, Berkelian Idealists, Penrosian Quantumists - is quite simply wrong.*
I'm talking, of course, about beer.
1
i'm a quantumist and proud of it. but i'm not sure if i am a tegmarkian or a penrosian.
and you forgot wine.
G-d made man
frail as a bubble
G-d made love
love made trouble
G-d made the vine
was it a sin
that man made wine to drown trouble in?
Posted by: matoko kusanagi at Saturday, August 13 2005 02:22 AM (gNc4O)
Posted by: Pixy Misa at Saturday, August 13 2005 02:24 AM (AIaDY)
3
this typekey stuff is so flakey--every third comment puts me back in MT purgatory.
but pixy, if matter and energy are the underlying substrate for the electro-biochemical processes of thought and memory, don't we have to have quantum consciousness?
Posted by: matoko kusanagi at Saturday, August 13 2005 03:24 AM (gNc4O)
4
No.
Or if so, only in the same sense that we have quantum car keys and quantum tennis balls. Quantum mechanics underlies everything, but in general, the quantum effects are averaged out rather than amplified (which gives us, for example, chemistry).
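The "averaged out" point is essentially the law of large numbers: sum enough independent microscopic fluctuations and the relative randomness shrinks like 1/sqrt(n). A toy sketch of that statistical fact (illustrative only, not a physics simulation):

```python
import random
import statistics

random.seed(0)

def mean_of_fluctuations(n):
    """Average of n independent random +/-1 'microscopic' fluctuations."""
    return statistics.fmean(random.choice((-1, 1)) for _ in range(n))

# The typical size of the average shrinks like 1/sqrt(n): at macroscopic
# scales the randomness washes out and effectively deterministic behaviour
# (e.g. chemistry) emerges.
print(abs(mean_of_fluctuations(100)))        # typically of order 0.1
print(abs(mean_of_fluctuations(1_000_000)))  # typically of order 0.001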
Penrose argues that consciousness cannot be the result of biochemical brain function, and must be the direct result of quantum fairies (I think they're even quantum gravity fairies).
The beer argument shows that this is not so. The effect of beer on consciousness makes no sense if consciousness is generated by quantum gravity fairies, but is perfectly natural if consciousness is a product of biochemistry. Alcohol has no effect on quantum gravity, but it has a very direct effect on biochemistry.
Berkelian Idealism is in an even worse position vis-a-vis beer. If mind is what exists, how does beer, a mere shadow of the mind, manage to disrupt mind function?
And Dualism is inherently self-contradictory, so Dualists tend to drink a lot.
Posted by: Pixy Misa at Saturday, August 13 2005 04:11 AM (AIaDY)
Posted by: Susie at Saturday, August 13 2005 10:42 AM (nekkG)
6
I must be a dualist, then.
Posted by: Wonderduck at Sunday, August 14 2005 01:07 AM (QbcjU)
7
Is consciousness the result of brain chemistry alone -- or of any system, chemical or otherwise, that processes information in a particular way?
If the Strong Beer Theory is correct, then only chemical systems can have consciousness.
If the Weak Beer Theory is correct, then only chemical systems or systems that directly model chemical systems can have consciousness.
If the Bloody Useless Warm Pommy Beer theory is correct, then systems can have consciousness without necessarily having any relation to chemical systems.
Posted by: Evil Pundit at Sunday, August 14 2005 11:32 PM (+2/LZ)
8
I have a nasty cold and right now my consciousness seems to be the result of cottage cheese, but I'll try to answer.
The effect of beer on human consciousness shows us that our consciousness is the result of brain chemistry, but provides no information on whether this must be so in general. So only the BUWPBT is supported by evidence.
I see no reason that the SBT should be true, and I sincerely doubt that the WBT is true.
Posted by: Pixy Misa at Monday, August 15 2005 03:42 AM (ymzzr)
9
That's pretty much my own opinion.
I'd love to see a glorified Babbage machine (several dozen hectares of it) develop consciousness.
Hope your cold goes away soon.
Posted by: Evil Pundit at Monday, August 15 2005 04:37 AM (+2/LZ)
10
In Penrose's theory, quantum gravity matters for biochemical dynamics! Penrose has a technical argument that superpositions of "different enough" geometries are ill-defined - I believe the problem is that general relativity offers no natural way to synchronize the passage of time throughout the different components of the superposition. He even has an experiment meant to differentiate "objective wavefunction collapse" from decoherence, the known phenomenon it would most closely resemble.
In any case, the bottom line is that if a chemical enters the brain and (for example) causes microtubule wavefunctions to decohere more often, then the theory does predict that it should alter the character of consciousness. And in fact this is how his collaborator Stuart Hameroff proposes to explain the effect of anesthetics.
Posted by: mitchell porter at Wednesday, August 17 2005 01:09 PM (mr6sB)
11
In Penrose's theory, quantum gravity matters for biochemical dynamics!
But that's clearly nonsense, because biochemical dynamics are fully explained by normal chemistry. Quantum gravity is irrelevant.
In any case, the bottom line is that if a chemical enters the brain and (for example) causes microtubule wavefunctions to decohere more often, then the theory does predict that it should alter the character of consciousness.
Which is also clearly nonsense, because the same chemical is causing far more significant biochemical changes directly. Even if Penrose's quantum fairies existed, they'd be drowned in the noise of the biochemical processes.
Sorry, but Penrose's ideas are at odds with everything we know about how the brain works, and how it doesn't work, and that's a hell of a lot. It's just the pointless speculation of a mathematical physicist in a field he knows nothing about.
Posted by: Pixy Misa at Wednesday, August 17 2005 07:17 PM (ymzzr)
12
Quantum mechanics, as it stands, is not a complete theory of reality - "observables" are ascribed definite values only at the moment of measurement. Some people (parapsychologists, metaphysical idealists, free-will advocates) are happy to suppose that this is the whole story, and that the act of observation collapses the wavefunction. However, if you press them on this issue, the vast majority of physicists don't want to take that avenue, and so we have the various new formulations of quantum theory (Everett, Bohm, Cramer) that are meant to put the theory on a more objective and less anthropocentric basis. This is a problem that every contemporary materialist should know about and take very seriously.
Chemistry is really the quantum theory of electrons interacting with nuclei, and it inherits this same conceptual problem. What is the *actual* state of an electron in an orbital? Is it literally in all those places at once, or is it just in one of them? We may get to evade this question in most empirical contexts, but we have no excuse for ignoring it altogether. In Penrose's theory, quantum jumps are controlled by quantum gravity, not by "observation", and so quantum gravity is the ultimate determinant of the *actual* state. This is the sense in which "quantum gravity matters for biochemical dynamics" in his theory.
Next, neuroscience. The quest for the "neural correlates of consciousness" - that is, the part of the brain whose physical state is somehow to be equated with the subjective state of consciousness - is a mainstream preoccupation by now. Hameroff and Penrose are saying one needs to pick out not just a brain region, but actually a particular subcellular structure. E.g. that the physical correlate of visual consciousness is not just "neurons in the visual cortex", but, more precisely, quantum-entangled microtubules in neurons in the visual cortex. I'll return to the empirical merits of this particular hypothesis shortly. But if one can entertain it for a moment, it is clear that a biochemical change will be relevant to the state of consciousness, only if it eventually impacts on the quantum state of the microtubules, for instance by damping the thermal noise to which you refer. This strikes me as a rather fruitfully stringent criterion. It does not obfuscate; it invites inquiry about mechanism. The only trouble is that it's presently difficult to investigate exact quantum states in biological matter (with a few exceptions, such as NMR), but that situation will improve with time.
Now to the question of whether quantum-brain theories are badly motivated. Since it is clearly possible, in our current state of ignorance, that quantum computation occurs in the brain, this can only be a question of research *priorities*: one might say that we do know it's unlikely, or that the arguments advanced in its favor are spurious. Penrose, of course, got here via Turing and Goedel. I do not think the arguments for noncomputable mind are particularly potent, because we have no evidence that the human mind really can "jump out of the system" indefinitely. I have my own reasons for being interested in quantum-brain theories, namely (1) the unitary character of consciousness (2) the un-objectively fuzzy character, from a microphysical perspective, of classical computational states (3) the possible evolutionary advantages of (e.g.) quantum search over classical search. But if you do have an a-priori interest in the quantum mind, then microtubules really are a promising place to look, on account of their high degree of symmetry, something which can enhance quantum effects.
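The quantum-search advantage mentioned in point (3) above can be made concrete. For unstructured search over n items, a classical computer needs on the order of n oracle queries, while Grover's algorithm needs only about (pi/4)*sqrt(n). A small sketch of the query-count comparison (the classical search is actually run; the Grover count is just the standard formula, not a simulation):

```python
import math

def classical_search(oracle, n):
    """Unstructured search: query the oracle until the marked item is found.
    Returns (index, number_of_oracle_queries)."""
    queries = 0
    for i in range(n):
        queries += 1
        if oracle(i):
            return i, queries
    return None, queries

def grover_query_count(n):
    """Oracle queries Grover's algorithm needs to find one marked item
    among n: approximately (pi/4) * sqrt(n)."""
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 1_000_000
marked = 765_432  # illustrative marked item
_, classical_queries = classical_search(lambda i: i == marked, n)

# Classical unstructured search needs up to n queries (n/2 on average);
# Grover's algorithm would need roughly sqrt(n).
print(classical_queries)      # 765433
print(grover_query_count(n))  # 786
```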
Posted by: mitchell porter at Thursday, August 18 2005 02:48 AM (mr6sB)
13
Ugh.
The human brain doesn't do anything resembling quantum computation. What it does do is exactly what you would expect a biochemical system of that nature to do.
That's the thing: There's nothing that requires a quantum-mechanical explanation, there's no evidence that there are direct (rather than statistical) quantum-mechanical influences, and there's no plausible mechanism for this to happen.
You refer to "our current state of ignorance". We're not that ignorant about what's going on in the brain, far from it. And there's not the slightest shred of evidence from any direction that it's quantum-mechanical in nature. I mean, you refer to the possible evolutionary advantages of (e.g.) quantum search over classical search, when there is no evidence at all that the brain does quantum searches. It sure as hell doesn't act that way, so the evolutionary advantage would seem to be nil.
Chemistry is really the quantum theory of electrons interacting with nuclei, and it inherits this same conceptual problem.
Chemistry inherits nothing of the sort. Chemistry works. Chemistry doesn't care about that level of abstraction; it's simply not relevant.
Oh, and Chemistry is the theory of electrons interacting with other electrons. Nuclei barely enter into it; they can be viewed as positively charged point masses in almost every case.
But if one can entertain it for a moment, it is clear that a biochemical change will be relevant to the state of consciousness, only if it eventually impacts on the quantum state of the microtubules, for instance by damping the thermal noise to which you refer.
I'm not talking about thermal noise, I'm talking about systemic noise. It's incontrovertible that the brain processes information at the cellular level. We can - and do - watch this happening. We can see the effects of, say, alcohol on this cellular function at the same time as we observe its effects on consciousness - as expressed, once more, via the brain.
What I am pointing out is that the drastic biochemical impact of alcohol (in our example) would drown out any quantum-gravitational influence on or from the microtubules. Since quantum effects could make no difference either way, and yet we do see a very marked change, it is clear that the change is coming from the biochemical system and that quantum has nothing to do with it.
And if the microtubules were having a sufficiently significant effect to influence the system beyond the level of the direct biochemical activity, that would be directly detectable with current equipment, and we see nothing. It just isn't happening.
You can't get to where Penrose is purely via Goedel and Turing; you have to change trains at Dennett and head up the Unfounded Speculation Line.
Oh, and:
In Penrose's theory, quantum jumps are controlled by quantum gravity
I'll bet you five dollars that this is not a theory in the scientific sense. It's right up Penrose's alley - unlike the rest of this stuff - but if it was an actual theory it would have actually made some impact on QM. And it hasn't.
Posted by: Pixy Misa at Thursday, August 18 2005 03:59 AM (AIaDY)
14
In fact, that last bit sounds very much like an interpretation rather than a theory, and I expect that it is not testable in any way. There are plenty of interpretations of QM already, and Penrose is welcome to add his own, but it doesn't change the calculations.
Posted by: Pixy Misa at Thursday, August 18 2005 04:09 AM (AIaDY)
15
To put it another way: If quantum gravity resonance in microtubules is really the source of consciousness, then it is somewhat curious that the result is indistinguishable from biochemistry.
Posted by: Pixy Misa at Thursday, August 18 2005 04:14 AM (AIaDY)
16
Well, it's my impression (from seminars, from the literature) that we have hardly begun to think about cellular biophysics. The prevailing physical conception of the cell is something like: a reaction-diffusion chamber for gene products, some of which self-organize into supramolecular complexes that do stuff. And (to put it polemically), genetics tells us what those complexes do, but not how they do it. For that, one needs a model of physical mechanism.
Drexler's Nanosystems has a nice overview of the spectrum of modelling choices available. The bottom rung (quantum field theory) is indeed considered irrelevant to biophysics. But this is just a default-conservative assumption, made by people who are already preoccupied with understanding higher-level complexities, and not the result of any systematic consideration of the subject. The pseudo-crystalline structure of the cytoskeleton, and the microtubule's two-dimensional array of symmetrically coupled subunits, makes it a natural place to look for quantum many-body effects, for example among mobile electrons, whose activities would be coupled to conformational change and thus to molecular function. Furthermore, the cytoskeleton is a highly dynamic structure implicated in a wide variety of cellular processes; and it has a unique form (no "centrioles") in neurons. The information processing that we know about in neurons (action potential propagation, synaptic transmission) very definitely interacts with intracellular state changes, e.g. the "second messenger" system, and, in a quantum-brain model, should presumably be viewed as a form of classical co-processing. (Every model of quantum computation I've ever seen also features auxiliary classical computation.)
I could say much more about biological detail but I should answer a few other points. My list of reasons for being interested (such as "advantages of quantum search") are reasons to take an a-priori interest in the hypothesis, at a time when there is no empirical evidence either way. Exotic quantum effects in biomolecules will be "indistinguishable from biochemistry" because empirically, biochemistry is defined as what biomolecules are observed to do, and often we don't know the physical mechanisms involved in how they do it (e.g. protein folding). As for Penrose's amendment to quantum theory, he has managed to extract a testable prediction from it (see my first comment, second link). But it's a difficult experiment.
Posted by: mitchell porter at Thursday, August 18 2005 01:23 PM (mr6sB)
17
But this is just a default-conservative assumption, made by people who are already preoccupied with understanding higher-level complexities, and not the result of any systematic consideration of the subject.
No. It's the null hypothesis. In the absence of any evidence whatsoever that anything of the sort is happening, you don't run off into wild speculations about it.
The information processing that we know about in neurons (action potential propagation, synaptic transmission) very definitely interacts with intracellular state changes, e.g. the "second messenger" system, and, in a quantum-brain model, should presumably be viewed as a form of classical co-processing.
No.
There is no sign of quantum processing at all. Nothing that the brain does looks anything like quantum processing, in terms of operations or results.
Exotic quantum effects in biomolecules will be "indistinguishable from biochemistry" because empirically, biochemistry is defined as what biomolecules are observed to do
That's complete nonsense. Penrose isn't talking about quantum effects within molecules, he's talking about quantum events within cell structures.
The first is biochemistry, the latter most definitely is not. If there were quantum gravity events being amplified by microtubules in such a way that it gave rise to consciousness, we would know, because consciousness would not be correlated directly and in every case with brain chemistry. (Or at least, we would know that there was something other than just biochemistry going on.)
But the converse is true. Consciousness always correlates with brain biochemistry; changes in consciousness always correlate with changes in brain biochemistry. There is simply no reason to believe that anything else is going on.
Penrose's speculation on consciousness is magic fairies and nothing more.
My list of reasons for being interested (such as "advantages of quantum search") are reasons to take an a-priori interest in the hypothesis, at a time when there is no empirical evidence either way.
There is no evidence that the brain performs quantum searches.
None. It doesn't matter how much of an evolutionary advantage it might be if it doesn't actually exist. The "searches" that the brain does perform act nothing like quantum searches. So there is empirical evidence, and it is that there aren't quantum searches. It's not sufficient for falsification because Penrose's speculation is just speculation and can't be falsified.
I took a look at that diagram you linked to; of itself it tells me nothing, but presumably there is more to it than that. Penrose is, after all, a competent mathematical physicist; it's just outside of that field that he gets hopelessly lost.
Posted by: Pixy Misa at Thursday, August 18 2005 10:31 PM (AIaDY)
18
A Kane quantum chip should look superficially just like an ordinary silicon chip. You would have to discover the phosphorus dopants and surmise their purpose to figure out that there was more than classical computation taking place.
But let's short-circuit this debate, which from my perspective really is about the necessity of "wild speculations". I think the Hameroff-Penrose theory is not as arbitrary as people say, but it certainly involves simultaneous multiple hypotheses. The theory's advocates should spend less time trying to interest the world, and more time developing the theory. But even if Penrose were shown to be 100% correct about brain physics, it would only be an incremental advance philosophically. Materialism about the mind, in every form that I have ever seen, either posits the identity of two very dissimilar things, or tries to deny the mental entirely; and Penrose's theory does not change this.
Consider everyone's favorite, color perception. What do the colors that one actually perceives have in common with an electromagnetic pulse of a certain wavelength, or with a particular spiking pattern in a neuron? Both the latter are arrangements of colorless matter in space-time, so where does the "color" come from? And the Dennett Line terminates with the even more fantastic conclusion that, despite appearances, color isn't actually there. Naturalistic philosophy of mind reduces to a Hobson's choice between impossibilities. I conclude that the metaphysics of naturalism is a bit too parsimonious, and that we must acknowledge ontological categories beyond those countenanced by mathematical physics. Understanding how they might relate to familiar categories such as number and form is the real challenge.
Posted by: mitchell porter at Thursday, August 18 2005 11:53 PM (mr6sB)
19
You would have to discover the phosphorus dopants and surmise their purpose to figure out that there was more than classical computation taking place.
That depends very much on what it was doing. Quantum and classical processors work entirely differently, and it is very easy to distinguish between them based on their results. (Not in every case, true. But you can choose your tests.)
I think the Hameroff-Penrose theory is not as arbitrary as people say, but it certainly involves simultaneous multiple hypotheses.
Well yeah. As I said, wild speculation.
Materialism about the mind, in every form that I have ever seen, either posits the identity of two very dissimilar things, or tries to deny the mental entirely
Huh? I don't know what you have been reading, but it sounds like no materialism I have ever heard of.
What do the colors that one actually perceives have in common with an electromagnetic pulse of a certain wavelength, or with a particular spiking pattern in a neuron?
Conscious perception of colour is the product of the brain's processing of the physical perception of colour. (Or of a memory, of course.)
Both the latter are arrangements of colorless matter in space-time, so where does the "color" come from?
It's information. That's what the brain does, after all; it processes information.
And the Dennett Line terminates with the even more fantastic conclusion that, despite appearances, color isn't actually there.
Isn't where? In the brain? Of course not. What's there is the representation of colour.
Naturalistic philosophy of mind reduces to a Hobson's choice between impossibilities.
Well, if it does, that will come as a surprise to Naturalists, because no-one has ever shown this to be the case.
I conclude that the metaphysics of naturalism is a bit too parsimonious, and that we must acknowledge ontological categories beyond those countenanced by mathematical physics.
And I am waiting, not entirely patiently, for someone to provide me with a coherent explanation for why they believe this.
The upshot of all this is that Penrosian Consciousness is just Cartesian Dualism in a Quantum Hat. Right?
Posted by: Pixy Misa at Friday, August 19 2005 01:24 AM (AIaDY)
20
Penrose should not be held responsible for the views I am now expressing. :-)
So. My experience of the world includes visual sensations of color. I wish to know what, in materialist terms, color is. Your answer seems to be the following. First there is the "physical perception of color", which I take to be the physical response of sensory neurons to the physical stimulus of light. Then a series of neural computations occur, producing a "representation of color", which I take to be some sort of state of cortical neurons. In other words, all we have are states of neurons - which, on a physicalist account, are assemblies of colorless particles in space. No color so far. So where is the color? What is it? It's "information". But what is that? Is it perhaps a quantitative physical property, such as the Shannon information in the state of a particular set of neurons? Because that would be a very strange thing for color to actually be - the logarithm of a probability. I don't see how venturing into the configuration space of a set of colorless objects brings us any closer to actually having color there.
Given the premises so far, I see three options here. You can take the Dennett Line, throw up your hands and say, there's nothing to the phenomenon of color beyond what you've described. You can be an identity theorist and assert that color is information. Or you can be an "information dualist", and say that color and information aren't the same, but they're linked somehow. Is there another option?
Posted by: mitchell porter at Friday, August 19 2005 03:26 AM (mr6sB)
21
My experience of the world includes visual sensations of color.
As does mine! So, we're clearly starting from a point of agreement.
First there is the "physical perception of color", which I take to be the physical response of sensory neurons to the physical stimulus of light.
Correct.
Then a series of neural computations occur, producing a "representation of color", which I take to be some sort of state of cortical neurons.
Not a state, but a process. This probably isn't important in this example, but perceptions are processes, not states.
In other words, all we have are states of neurons - which, on a physicalist account, are assemblies of colorless particles in space.
With my previous proviso, yes.
No color so far.
No colour; only the representation of colour.
So where is the color?
Exactly where it was.
What is it? It's "information".
Or more precisely, information being processed.
But what is that?
It's information.
Is it perhaps a quantitative physical property, such as the Shannon information in the state of a particular set of neurons?
Eh?
Because that would be a very strange thing for color to actually be - the logarithm of a probability.
I'm not sure how that even applies.
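(For reference, the "logarithm of a probability" refers to Shannon's self-information: an outcome with probability p carries -log2(p) bits. A minimal sketch of that definition:)

```python
import math

def self_information(p):
    """Shannon self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# A fair coin flip carries exactly one bit of information.
print(self_information(0.5))     # 1.0
# Rarer outcomes carry more bits: p = 1/256 gives 8 bits.
print(self_information(1 / 256)) # 8.0
```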
I don't see how venturing into the configuration space of a set of colorless objects brings us any closer to actually having color there.
Well, it doesn't.
You don't have colour there.
You have a representation of colour. It's information, or an information process.
I keep pointing this out: The brain doesn't process things, it processes information about things. You won't find colour in the brain, any more than you will find wombats or refrigerators.
Given the premises so far, I see three options here.
Fourth option: Your terms aren't well-defined.
Define colour. Then come back if you still have a problem.
If you expect colour as it is defined in physics to be present in the brain, you are bound to be disappointed.
If you expect colour as it is perceived in the brain to be the same thing as handled by the laws of optics, then still more disappointment looms.
But in the two cases, colour is defined entirely differently. If you pick a definition and stick to it, there isn't a problem. It's only because you are conflating multiple definitions of colour that you perceive a problem.
Think of the brain as a computer. It is one, after all; it's just not much like the ones on our desks.
You take a picture with your webcam, and the computer saves it as a bunch of numbers. Where's the colour? The computer can tell you that a particular pixel is orange. Where's the colour? All it has is numbers. Where's the computer's perception of orangeness? It must have one, because it just told us that it perceived that colour. The fact that it would only take a few lines of code to do this is irrelevant.
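Those "few lines of code" might look like the following sketch. The palette names and RGB values here are illustrative assumptions; a real system would use a fuller palette, but the principle - numbers in, colour name out - is the same.

```python
# Name a pixel's colour by nearest match in a small illustrative palette.
PALETTE = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
}

def name_colour(pixel):
    """Return the palette name whose RGB value is closest to the pixel."""
    def dist2(c):
        # Squared Euclidean distance in RGB space.
        return sum((a - b) ** 2 for a, b in zip(pixel, c))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name]))

print(name_colour((250, 150, 20)))  # orange
```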
Posted by: Pixy Misa at Friday, August 19 2005 03:58 AM (AIaDY)
22
Color is that ensemble of location-specific properties (hue, saturation, brightness) which fills every visible shape.
Your turn: define information.
By the way, I can look in a paint catalog and have it inform me, correctly, that various squares of color on the page are red, orange, etc. This is not to be explained by attributing "perceptions" to the catalog. But there is no need to do this in the case of the computer, either.
Posted by: mitchell porter at Saturday, August 20 2005 12:39 AM (mr6sB)
23
hmmm...but, pixy, if you are saying the brain is like a computer, then you are implying turing computability, and we're back to Godel, Hilbert, Turing, Church, Searle, etc, and the tiling problem and the halting problem and the diophantine equation problem that have all been proven computationally insoluble.
Perhaps beer generates quantum effects as well as biochemical ones.
;-)
Posted by: matoko kusanagi at Saturday, August 20 2005 01:55 AM (7TtOW)
24
Color is that ensemble of location-specific properties (hue, saturation, brightness) which fills every visible shape.
Well if it's location-specific, obviously you're not going to find it in the brain, which makes me wonder why you asked the question in the first place.
Your turn: define information.
In what context? Computer science? Physics? Cognitive science?
By the way, I can look in a paint catalog and have it inform me, correctly, that various squares of color on the page are red, orange, etc.
You can also look at a paint catalog and have it inform you, incorrectly, that various squares of colour on the page are puce, fuchsia, orangutan, skorkle, etc.
Because the paint catalog is a static object; it doesn't process information in any way.
This is not to be explained by attributing "perceptions" to the catalog.
Right.
But there is no need to do this in the case of the computer, either.
Wrong.
You can present any image to the computer, choose any part of that image, and the computer will tell you what colour it is. That necessarily requires a perception of colour.
That perception of colour is trivial and shallow, but that doesn't matter. You can't dismiss the existence of something just because you understand how it works.
Posted by: Pixy Misa at Saturday, August 20 2005 02:04 AM (AIaDY)
25
Your turn: define information.
i'll take that one, mitch--
Dr. Zeilinger says information and reality are the same thing--but he can't prove it...yet.
Posted by: matoko kusanagi at Saturday, August 20 2005 02:13 AM (7TtOW)
26
hmmm...but, pixy, if you are saying the brain is like a computer, then you are implying turing computability
No.
I'm not saying that consciousness isn't Turing computable, but it is perfectly clear that the brain is not a Turing machine.
Godel, Hilbert, Turing, Church, Searle
sings:
One of these things is not like the others,
One of these things just doesn't belong
the tiling problem and the halting problem and the diophantine equation problem that have all been proven computationally insoluble.
The brain can't solve computationally insoluble problems either. At least not certain classes of such. For example, the brain is necessarily bound by Gödel's incompleteness theorem - not because the brain is a formal system (it isn't), but because if you want to carry out rigorous reasoning about mathematics, you have to use a formal system.
The work of Church and Turing likewise applies to a non-Turing computational system that is performing a task according to Turing-equivalent rules.
Searle, on the other hand, is only useful for teaching freshman philosophy students how to detect flawed arguments.
Posted by: Pixy Misa at Saturday, August 20 2005 02:26 AM (AIaDY)
27
information and reality are the same thing
I'm not convinced that is even a meaningful statement.
Posted by: Pixy Misa at Saturday, August 20 2005 02:30 AM (AIaDY)
28
Searle, on the other hand, is only useful for teaching freshman philosophy students how to detect flawed arguments.
huh? i was referring to the chinese room problem, which Penrose feels gives some support to his position, that "a 'simulation' of understanding is not equal to 'actual' understanding."
Posted by: matoko kusanagi at Saturday, August 20 2005 02:41 AM (7TtOW)
29
The fundamental reason why I believe in this is that it is impossible to make an operational distinction between reality and information. In other words, whenever we make any statement about the world, about any object, about any feature of any object, we always make statements about the information we have. And, whenever we make scientific predictions we make statements about information we possibly attain in the future. So one might be tempted to believe that everything is just information. The danger there is solipsism and subjectivism. But we know, even as we cannot prove it, that there is reality out there. For me the strongest argument for a reality independent of us is the randomness of the individual quantum event, like the decay of a radioactive atom. There is no hidden reason why a given atom decays at the very instant it does so.
Anton Zeilinger performed the quantum teleportation experiment: http://www.quantum.univie.ac.at/
Posted by: matoko kusanagi at Saturday, August 20 2005 02:50 AM (7TtOW)
30
And Pixy, (this is so fun, thanx, even if you are kicking my ass), my basic problem with the biochemical basis of consciousness theory, is that biochemistry is computable. Non?
Posted by: matoko kusanagi at Saturday, August 20 2005 02:52 AM (7TtOW)
31
huh? i was referring to the chinese room problem, which Penrose feels gives some support to his position, that "a 'simulation' of understanding is not equal to 'actual' understanding."
That's what I thought you were referring to - after all, it's what Searle is best known for.
And it's complete tripe.
The Chinese Room clearly understands Chinese. Every conceivable external test - by definition, as Searle proposed it - shows this to be so.
Searle then complains that if he breaks the room up, he can't find the Understanding. There's a man, who doesn't understand Chinese, and there are a whole lot of books, which don't understand anything at all.
It's just the Fallacy of Division again. It keeps popping up in this sort of argument. Understanding Chinese is a property of the system, not of the components.
If Penrose thinks Searle's argument supports him, that's all the more reason to consider Penrose to be hopelessly lost.
Posted by: Pixy Misa at Saturday, August 20 2005 03:07 AM (AIaDY)
32
And Pixy, (this is so fun, thanx, even if you are kicking my ass), my basic problem with the biochemical basis of consciousness theory, is that biochemistry is computable. Non?
Non.
Biochemistry is statistical.
Posted by: Pixy Misa at Saturday, August 20 2005 03:09 AM (AIaDY)
33
The fundamental reason why I believe in this is that it is impossible to make an operational distinction between reality and information. In other words, whenever we make any statement about the world, about any object, about any feature of any object, we always make statements about the information we have.
Yes.
So one might be tempted to believe that everything is just information.
I don't think that follows.
The danger there is solipsism and subjectivism.
I think we can reject them on purely utilitarian grounds - they don't work.
But we know, even as we cannot prove it, that there is reality out there.
Yep. That's the underlying assumption of metaphysical Materialism - that reality is independent of us.
For me the strongest argument for a reality independent of us is the randomness of the individual quantum event, like the decay of a radioactive atom. There is no hidden reason why a given atom decays at the very instant it does so.
Under most models of QM, yes. There are models of QM where there are causes for such events, but I don't think those models have had much success (either they turn out to make wrong predictions, or they are indistinguishable from acausal QM in practical terms).
Posted by: Pixy Misa at Saturday, August 20 2005 03:15 AM (AIaDY)
34
Biochemistry is statistical.
i don't think so...we can model a protein unfolding or molecular docking for example.
maybe i don't understand what you mean by statistical?
Posted by: matoko kusanagi at Saturday, August 20 2005 03:15 AM (7TtOW)
35
i don't think so...we can model a protein unfolding or molecular docking for example.
maybe i don't understand what you mean by statistical?
Biochemistry - all chemistry, in fact - is a higher-level statistical model built on top of quantum mechanics. The earlier (pre-QM) chemists didn't realise this, of course, but they were empiricists and chemistry worked and that was good enough.
You can model biochemistry, but that's not the same thing as computing it.
But we still don't have any reason to believe that consciousness isn't Turing-computable.
Posted by: Pixy Misa at Saturday, August 20 2005 03:18 AM (AIaDY)
36
why isn't modelling/simulation the same as computation?
But we still don't have any reason to believe that consciousness isn't Turing-computable.
then, if we can model brain processes at a fine enough granularity, like the calcium gradient and all, then we should be able to build an AI that would exhibit consciousness?
Posted by: matoko kusanagi at Saturday, August 20 2005 03:26 AM (7TtOW)
37
this is so fun, thanx, even if you are kicking my ass
I'm enjoying it too. This is one of the best discussions I ever had on the subject.
It's not about ass-kicking, though, it's about learning.
...
Okay, maybe a little ass-kicking. ;)
Posted by: Pixy Misa at Saturday, August 20 2005 03:29 AM (AIaDY)
38
well, i am learning.
;-)
Posted by: matoko kusanagi at Saturday, August 20 2005 03:32 AM (7TtOW)
39
why isn't modelling/simulation the same as computation?
It can be - but it depends on what you are modelling, and how you are modelling it.
The idea of a model, the reason models are useful, is that they are simplified representations of something. To fully compute a biochemical process, you have to compute the acausal quantum events that underlie it.
I think you can appreciate the problem there.
then, if we can model brain processes at a fine enough granularity, like the calcium gradient and all, then we should be able to build an AI that would exhibit consciousness?
Well, no; at least, not necessarily. The brain can be modelled, but it isn't Turing-computable. But the fact that consciousness arises from a process that isn't Turing-computable doesn't provide any information on whether consciousness is or isn't Turing-computable itself.
I suspect that it is, based on the models we have, but it's just a suspicion.
Posted by: Pixy Misa at Saturday, August 20 2005 03:34 AM (AIaDY)
40
But we still don't have any reason to believe that consciousness isn't Turing-computable.
hmmm...but neither do we have a reason to believe that it is.
i shall have to regroup, and get fresh armor.
a demain?
Posted by: matoko kusanagi at Saturday, August 20 2005 03:41 AM (7TtOW)
41
hmmm...but neither do we have a reason to believe that it is.
Well, we have some reason - certain properties that we usually attribute to consciousness have been shown to be computable - but nothing conclusive.
i shall have to regroup, and get fresh armor.
No problem.
a demain?
A which-what?
By the way, have you read Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid? It is without a doubt the best introduction to this whole field. You're clearly past the introductory stage, but it still has a lot to offer.
I was 16 when I read it (my mother bought it for me for Christmas...) and it just tied everything together for me. It didn't change my way of thinking, but it brought me to understand my way of thinking.
Posted by: Pixy Misa at Saturday, August 20 2005 03:55 AM (AIaDY)
42
[Define information in] what context? Computer science? Physics? Cognitive science?
In whatever context you had in mind when you first said that color is information, or a product of information processing. I brought up Shannon information because I know how to define that as a property of a physical system.
Well if [color is] location-specific, obviously you're not going to find it in the brain, which makes me wonder why you asked the question in the first place.
That's location in "visual space". I don't know if that changes anything for you.
But consider what we have so far. We have a "physical perception of color" (an event in a sense organ) and a neural "representation of color", neither of which has color in the straightforward sense, I think we are agreed; and no other physical entity has been mooted as relevant so far.
It might be clearer if I expanded this discussion to take in the whole world of appearances, since that is where the color I'm talking about resides. Once upon a time ;-), we managed to agree that we both have visual sensations of color. These sensations also have forms, which are the objects of visual experience. I am trying to adopt a language here which does not presuppose materialism; there was a time when I did not know about atoms or brains, but I did know that I could "see things". Now as an adult I am told that "seeing" is an activity of my brain, that the things I see directly are in some sense in my brain, but that (unless I am hallucinating) they have a causally-mediated resemblance to physical objects external to my body. The question is whether this account makes sense, given our current conception of matter. I submit that it does not, because the things we see directly have properties (such as color) which nothing in the universe of physical theory has.
At the dawn of mathematical physics, a distinction was made between primary qualities, such as length, and secondary qualities, such as color. It was somehow agreed that primary qualities were in the world external to us, but secondary qualities were not; they were in us, not in the world. But now that our brains are like the rest of the physical world, the secondary qualities are simply nowhere - in theory. In reality, of course, they're still right there in front of us, where they always were. But they represent more of a philosophical conundrum now that there are no souls for them to inhabit.
Dennett's response, it seems to me, really is to deny the existence of the so-called secondary qualities. See his discussion of "figment" in Consciousness Explained. But this is untenable; color, in precisely this sense, is a primary epistemological datum. So I think we need to backtrack to the last time we had an ontology featuring both primary and secondary qualities - somewhere between Descartes and Russell - and go forward again from there, alongside the path that physics already took, but being careful to maintain a sense of the reality of the neglected aspects of experience. If that requires dabbling in solipsism, idealism, or dualism for a while, so be it. I see no reason why a new monism should not eventually be possible, which we might even still call materialism, but it's going to have to be ontologically richer than what we now call naturalism.
Since I do think that states of consciousness could be and would be states of "something in the brain" in a new monism (by which I mean, only one type of substance; not literally only one thing in existence), a monism that must explain everything that naturalistic physics already explains, you might reasonably ask whether I would expect states of consciousness in existing computers as well. I do not, even though the brain presently looks to us like a classical computer, as we originally discussed. If I line up the features of consciousness as known from the inside, with the features of computers as known from the outside, I find that consciousness for computers would require a very complicated psychophysical dualism (that is, the laws describing the association would have to be very complicated). It's not logically impossible, but I rate it as less likely than quantum computing in the brain, and a monism in which consciousness requires quantum entanglement between its parts.
Posted by: mitchell porter at Saturday, August 20 2005 11:26 AM (mr6sB)
43
Zeilinger, Searle, and Hofstadter... Saying that reality is made of information seems to be the new Platonism. In Aristotle, you have substance, you have form, and they need each other. In Plato you just have form (at least, that's what they say he meant). Similarly, in computationalist materialism, you have matter, and you have information, and the information resides in the matter. Because he talks about epistemology - "it is impossible to make an operational distinction between reality and information" - I think Zeilinger's abolition of matter implicitly relies on the existence of a
mind to be an alternative host for the information he talks about. So he (and others like him) may be halfway to inventing a new metaphysical idealism. Actually, you see this already in those versions of the many-worlds interpretation which only ascribe reality to branches with observers. And Tegmark says similar things in his "all possible worlds" paper, although as I recall he equivocates between saying that uninhabited worlds do not exist, and saying that their existence is merely irrelevant.
Searle I agree with; his Room doesn't understand anything, unless some peculiar form of dualism obtains. But then I think he doesn't go far enough, because he thinks that the physical brain, as described by contemporary natural science, can be a locus of "intrinsic intentionality". As a matter of logic, he should find that just as difficult to believe, but his simultaneous belief in naturalism and in consciousness leads him to trust that it must be possible, somehow. So I would agree with the criticism that there's a certain inconsistency in Searle, but I take off in the opposite direction from his critics.
Hofstadter... I read him around the same age, and I suppose I accepted his views. I know that by 20 I was already entertaining the idea that the self was a single knotted superstring, so atomism bothered me then; and by 22 or so I was an "aspect dualist", thinking in terms of correlation, rather than identity, between the string's topology and the mind's propositional state, so I had implicitly abandoned naturalism at that point. Anyway, as I recall, Hofstadter tries to escape Searle's dictum that "syntax is not semantics" (i.e. the physical tokens in a computational device do not have intrinsic meanings, and so cannot be regarded as components of a thought) by exhibiting puns, Godel sentences, and a variety of other objects whose semantics are more complex than "A represents B". But in every scenario he advances, I think you can separate out the intricacies which are causal, physical, and intrinsic, and the intricacies which are semantic and introduced by an act of interpretation. So I don't think he solved the "symbol-grounding problem" at all.
Posted by: mitchell porter at Saturday, August 20 2005 12:09 PM (mr6sB)
44
Searle I agree with; his Room doesn't understand anything
But this is obviously incorrect. If you ask the room a question, you get back a meaningful answer; that is, after all, how Searle defined the room.
That clearly requires understanding. There is no way around this, except for redefining "understanding" in terms that deny Naturalism. Which immediately defeats the purpose.
If you don't know that you are talking to a room, if you think you are talking to a person who speaks Chinese, you would not hesitate to ascribe understanding to your correspondent.
The results are exactly the same in each case, yet Searle claims that understanding is present in one situation and not in the other, even though he can't point to the location of this understanding in either case.
It's simply the Logical Fallacy of Division, again.
Posted by: Pixy Misa at Saturday, August 20 2005 06:06 PM (ymzzr)
45
Anyway, as I recall, Hofstadter tries to escape Searle's dictum that "syntax is not semantics" (i.e. the physical tokens in a computational device do not have intrinsic meanings, and so cannot be regarded as components of a thought) ... But in every scenario he advances, I think you can separate out the intricacies which are causal, physical, and intrinsic, and the intricacies which are semantic and introduced by an act of interpretation.
Hmm. I'll have to take another look at it, because I don't remember Hofstadter's argument as you do.
But in any case: Robots. A robot processes symbols and then interprets them by acting on them in the real world. (If that's not an act of interpretation, I don't know what is.)
Syntax might not be semantics, I would say, but semantics is syntax. The only way semantics can be expressed or interpreted is via syntax.
Posted by: Pixy Misa at Saturday, August 20 2005 06:12 PM (ymzzr)
46
a demain = until tomorrow, but this is really demain prochaine, tomorrow next.
;-)
this is getting really good...mitch, i am a platonian (bird view) and i think pixy is an aristotelian (frog view)...which are you?
Posted by: matoko kusanagi at Sunday, August 21 2005 02:48 PM (ApZQK)
47
The brain can be modelled, but it isn't Turing-computable. But the fact that consciousness arises from a process that isn't Turing-computable doesn't provide any information on whether consciousness is or isn't Turing-computable itself.
At the risk of side-tracking a very interesting conversation (and, potentially, nit-picking), what is the evidence that the brain isn't Turing-computable? If you mean that the brain isn't a Turing machine, I agree whole-heartedly (the brain is simply an extremely powerful analog pattern recognition/storage system, with a few of those GEB:EGB loops thrown in for fun); but that is fundamentally different than saying that the processes by which the brain functions aren't Turing computable.
The brain accepts multiple sources of input (both internal and external), routes that information across certain neurons based on pattern-specific information (that is, input containing one pattern will fire one set of neurons, etc.), in the process reinforcing those biochemical pathways which serve to recognize/store those specific patterns, then generates output - output in this sense meaning both the action response the brain generates, and the continual firing of neurons that represents our consciousness.
None of these processes is inherently non-Turing-computable. The primary process, pattern recognition and response, has been increasingly modeled by neural-network systems, and since those systems function on the same basic principle as an actual brain, a Turing system of sufficient complexity should be able to exactly reproduce the processes that occur in said brain.
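The pattern-recognition-and-reinforcement loop described above can be sketched in a few lines. This is a toy illustration only: the sizes, the random seed, the learning rate, and the winner-take-all Hebbian rule are all invented for the sketch, and none of it is a model of real neurons.

```python
# Toy sketch: input patterns fire output units, and the pathway into
# the most responsive unit is reinforced by repeated exposure.
import random

N_IN, N_OUT, RATE = 4, 2, 0.1  # illustrative sizes and learning rate
random.seed(0)
weights = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)]
           for _ in range(N_OUT)]

def activations(pattern):
    """Weighted input arriving at each output 'neuron'."""
    return [sum(w * x for w, x in zip(row, pattern)) for row in weights]

def fire(pattern):
    """A unit fires when its weighted input crosses the zero threshold."""
    return [a > 0.0 for a in activations(pattern)]

def reinforce(pattern):
    """Strengthen the pathway into whichever unit responds most strongly."""
    acts = activations(pattern)
    winner = acts.index(max(acts))
    for j, x in enumerate(pattern):
        weights[winner][j] += RATE * x

pattern_a = [1, 1, 0, 0]
for _ in range(10):
    reinforce(pattern_a)

# After repeated exposure, the winning unit fires reliably for
# pattern_a: the pathway serving that pattern has been reinforced.
print(fire(pattern_a))
```

The same basic principle (responses reinforcing the connections that produced them) is what actual neural-network systems scale up.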
The only inherent difference that I can see is that a brain is an analog system and a Turing machine is not, but here Avogadro's number saves me - at a small enough level of granularity, there are no analog systems. This, of course, ties back into the quantum mechanics discussion, but, as you said, biochemistry is statistical, and thus can be accurately modelled.
So, therefore, a Turing machine with sufficiently complex instructions should be able to exactly model and reproduce the functions of any specific individual's brain; i.e., a given brain is indeed Turing computable.
Of course, I'm neither a neuroscientist nor an AI researcher, and have at best a layman's understanding of each of those fields, so if there's evidence of which I'm not aware that contradicts any of my conjectures (or I'm misunderstanding what you mean by Turing-computable), I will bow to your superior knowledge.
Posted by: Jason at Sunday, August 21 2005 05:44 PM (Dj3SK)
48
At the risk of side-tracking a very interesting conversation (and, potentially, nit-picking), what is the evidence that the brain isn't Turing-computable? If you mean that the brain isn't a Turing machine, I agree whole-heartedly (the brain is simply an extremely powerful analog pattern recognition/storage system, with a few of those GEB:EGB loops thrown in for fun); but that is fundamentally different than saying that the processes by which the brain functions aren't Turing computable.
No, in fact it's the same thing.
All Turing machines are equivalent. Any Turing machine can perform the operations of any other Turing machine, regardless of the details of how they are implemented.
And Turing machines aren't analog. Ever. They are purely digital.
Now, a Turing machine can model an analog process, by assigning a sufficient number of significant digits to each analog property. But it can never compute the analog system, in the mathematical sense.
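The "sufficient number of significant digits" point can be made concrete. A minimal sketch (the values and precisions chosen are arbitrary): however many digits we assign, the digital model is always a finite approximation of the real-valued quantity, never the quantity itself.

```python
# A digital model of an 'analog' quantity: pick a precision, get a
# finite approximation. More digits improve the model, but the
# representation is always truncated somewhere.
from decimal import Decimal, getcontext

def digital_model(value: str, digits: int) -> Decimal:
    """Round an exact decimal string to a chosen number of significant digits."""
    getcontext().prec = digits
    return +Decimal(value)  # unary plus applies the context precision

x = "0.333333333333333333333333"
print(digital_model(x, 6))   # 0.333333
print(digital_model(x, 12))  # twice the digits, still a finite approximation
```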
When it comes to machine consciousness, this might well be entirely irrelevant. I think that (a) a Turing machine can be conscious and (b) a digital model of a brain can be conscious. The distinction I'm making is between modelling something and computing it.
Posted by: Pixy Misa at Sunday, August 21 2005 08:53 PM (ymzzr)
49
Perhaps I'm misunderstanding something, then. I've always understood the term "Turing computable" to mean any operation a Turing machine can perform, not simply operations which are performed on a Turing machine. The functions of an abacus are Turing computable (as a trivial example) without requiring an abacus to be a Turing machine.
So, I guess my question boils down to this - aren't, ultimately, the biochemical interactions which make up the brain computable?
Actually, formulating things this way, I answer myself - the question is roughly the same as asking whether or not the universe is deterministic (that is, from exact starting conditions, is it possible to calculate the exact state, or probabilistic distribution of states, at any given time in the future?), and the answer is probably no; at least, not computable in any useful sense.
When it comes to machine consciousness, this might well be entirely irrelevant. I think that (a) a Turing machine can be conscious and (b) a digital model of a brain can be conscious. The distinction I'm making is between modelling something and computing it.
I see your point now, and completely agree.
Posted by: Jason at Sunday, August 21 2005 09:14 PM (Dj3SK)
50
If you ask the room a question, you get back a meaningful answer; that is, after all, how Searle defined the room. That clearly requires understanding. There is no way around this, except for redefining "understanding" in terms that deny Naturalism. Which immediately defeats the purpose.
On reflection, I think you're right. I say Searle's Room would not understand, so I agree with him there, but I also say Searle's Brain (i.e. the brain of ordinary naturalism) would not understand, for exactly the same reasons that the Room would not.
Posted by: mitchell porter at Tuesday, August 23 2005 12:53 AM (mr6sB)
51
mitch, i am a platonian (bird view) and i think pixy is an aristotelian (frog view)...which are you?
As I recall, Tegmark introduces his two 'views' in the context of his 'reality is mathematics' theory. The birdseye view is then 'mathematics experienced from the outside', the frogseye view is 'mathematics experienced from the inside'. Even accepting Tegmark's framework, the two views are not completely disjoint, since every mathematician inhabits a particular place in the multiverse, and so is necessarily coming from a frogseye view, even when they think about things from a birdseye view. In fact, you could argue that the true birdseye view does not exist anywhere, and is at best asymptotically approximated by frogs that fly higher and higher, and see more and more of the Mindscape (to throw in a term from Rudy Rucker).
It is a risky thing to make analogies like this when you haven't really studied the original doctrines, but I might call the frogseye view Pythagorean rather than Aristotelian. Or maybe we should call these views 'Pythagorean Platonism' and 'Pythagorean Aristotelianism'. I bring up Pythagoras because he is said to have believed that 'all is number'. That would make him the original mathematical idealist.
To really discuss this properly, we would need to set aside birds, frogs, and ancient Greeks, and try to express the ideas in question plainly. Unfortunately, the modes of thought in question are simply unfamiliar to the modern mind, which is why we make allusive references to Plato and Aristotle, rather than using the correct technical vocabulary as developed by the scholastics. As for what I think, well, in this book I ran across the idea of a fundamental "exemplification relation" connecting "substance" and "property", and that would be my default position on the problem of universals (that's a very nice little book by Bertrand Russell behind the second link, incidentally), which is the technical question in metaphysics whose answer was the key issue at stake in the medieval debate. But I have beginner's agnosticism when it comes to most metaphysical questions; the whole substance-property metaphysic could be wrong, as Heidegger seems to have argued.
Posted by: mitchell porter at Tuesday, August 23 2005 01:18 AM (mr6sB)
52
I also say Searle's Brain (i.e. the brain of ordinary naturalism) would not understand, for exactly the same reasons that the Room would not.
What reasons are those? I'm really curious, not just being snarky or anything. As far as I can see, the Room understands Chinese for every meaningful definition of the word "understand".
Posted by: Pixy Misa at Tuesday, August 23 2005 01:37 AM (AIaDY)
53
The Room (or the Brain), by hypothesis, engages in conversation as if it understands. When it comes to actual understanding, I'll distinguish between three types of theory:
1. 'Understanding' is a matter of appropriate response; how this is achieved (implementation details, if you will) is irrelevant.
2. 'Understanding' exists if and only if a certain causal structure lies behind the appropriate responses.
3. 'Understanding' must be implemented in a thing called "mind" (followed by some theory of what mind is).
I think many people prefer (2) to (1) because of the possibility of implementation via Giant Look-Up Table (here's an old extropian thread on the topic). Someone like John Pollock, for example, who writes at length about the relationship between rationality and cognitive architecture. It's a little like the old distinction between knowledge and 'true belief'; you might believe the right thing for the wrong reason, in which case you don't actually know that X, you just happen to correctly believe that X. The Lookup Table gives the right answers, but its internal representations are entirely structureless, so one might not wish to say that it understands.
However, I am staking out a variant of position (3), for the same reasons I gave when talking about color and physics. Just as the phenomenology of color tells us something about the ontology of color, there is a phenomenology of meaning and of meaning-perception that tells us something about the ontology of meaning, and I don't see anything like it in the world of particles in space. So in the debate about "naturalizing intentionality", I say yes, intentionality is the mark of the mental, and no, it does not exist in current physical ontology. However, one can certainly make a formal state-machine model of intentional processes, so intentionality can be simulated in a possible world without mind (so long as there are 'things' with 'states' and which interact in sufficiently complex ways), and it might even be simulated in a world where mind is ontologically possible, if it's implemented in the wrong way. And I think classical computation is a wrong way (for the creation of mind, anyhow), because the physical entities which constitute the computational states are not enough of a unit. Or, put another way, the only things that bind them are causal relations, and it seems to me that there are non-causal relations which play a constitutive role in mental states as well.
Quantum mechanics is of interest in this regard precisely because it has relationships like entanglement; the idea is that the complexity of entangled states may actually be a formal glimpse of fundamental intentional states. This idea has its own problems, but at least it avoids the main problem of functionalism, namely: which virtual machine is the machine whose states are the mental states? There are two sources of ambiguity here - one is the question of level (which "level of abstraction" is the right one?), the other involves the definition of the bottom level in terms of exact microphysical states, about which I say a bit here.
Posted by: mitchell porter at Tuesday, August 23 2005 10:09 AM (mr6sB)
54
1. 'Understanding' is a matter of appropriate response; how this is achieved (implementation details, if you will) is irrelevant.
Right. A functional definition; it is understanding if it acts like understanding.
2. 'Understanding' exists if and only if a certain causal structure lies behind the appropriate responses.
Hmm.
3. 'Understanding' must be implemented in a thing called "mind" (followed by some theory of what mind is).
Hmm.
The problem with 2 & 3 is that they are purely philosophical; they don't make any difference to any goal you might wish to achieve in the real world (by definition).
I think many people prefer (2) to (1) because of the possibility of implementation via Giant Look-Up Table (here's an old extropian thread on the topic).
The Look-Up Table argument is nonsense, I'm afraid. The size of the lookup table grows exponentially with the length of the conversation, and the constant factor itself is enormous. Even for a very short conversation the lookup table would be larger than the known universe.
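The exponential growth is easy to quantify. A back-of-the-envelope sketch (the vocabulary size and turn length are illustrative assumptions; the point is only how the count explodes with conversation length):

```python
# Rough size of a Giant Look-Up Table for conversation.
VOCAB = 10_000        # distinct words the Room might use (assumed)
WORDS_PER_TURN = 20   # length of a single utterance (assumed)

def table_entries(turns: int) -> int:
    """Distinct conversation histories a complete table must cover."""
    return (VOCAB ** WORDS_PER_TURN) ** turns

# A single 20-word turn already needs 10^80 entries -- roughly the
# number of atoms in the observable universe -- and each further turn
# multiplies the count by another factor of 10^80.
print(table_entries(1) == 10 ** 80)  # True
```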
It's a little like the old distinction between knowledge and 'true belief'; you might believe the right thing for the wrong reason, in which case you don't actually know that X, you just happen to correctly believe that X.
Oh, that old thing. That's a language problem and nothing else. You can know things that are false; you can know things that are true but that you reached from false data or flawed logic.
Knowledge ain't what it's cracked up to be.
The Lookup Table gives the right answers, but its internal representations are entirely structureless, so one might not wish to say that it understands.
As I pointed out, the Lookup Table is physically impossible. What's more, it is structured, very deeply so; every possible understanding of every subject is encoded in the Table.
Which is clearly also nonsense.
However, I am staking out a variant of position (3), for the same reasons I gave when talking about color and physics.
Well, we already established that this doesn't work for colour.
Just as the phenomenology of color tells us something about the ontology of color
It does?
What?
there is a phenomenology of meaning and of meaning-perception that tells us something about the ontology of meaning, and I don't see anything like it in the world of particles in space.
Fallacy of Division.
Meaning is information. Information is a purely physical property.
So in the debate about "naturalizing intentionality", I say yes, intentionality is the mark of the mental, and no, it does not exist in current physical ontology.
Intention is an information process. Information is physical.
However, one can certainly make a formal state-machine model of intentional processes
Indeed, I do exactly that every day. It's what I get paid for.
so intentionality can be simulated in a possible world without mind
Eh?
(so long as there are 'things' with 'states' and which interact in sufficiently complex ways) and it might even be simulated in a world where mind is ontologically possible, if it's implemented in the wrong way.
How can there be a "wrong" way?
And I think classical computation is a wrong way (for the creation of mind, anyhow), because the physical entities which constitute the computational states are not enough of a unit.
But you have never shown any valid reason for thinking that.
Or, put another way, the only things that bind them are causal relations, and it seems to me that there are non-causal relations which play a constitutive role in mental states as well.
Quantum Mechanics? That's not impossible; but we have no reason for thinking that it is true.
There is nothing that we observe of brain or mind that behaves as though acausal effects were involved. There's no reason to believe that QM is involved other than statistically, because the brain doesn't do anything QM-ish.
Quantum mechanics is of interest in this regard precisely because it has relationships like entanglement; the idea is that the complexity of entangled states may actually be a formal glimpse of fundamental intentional states.
No.
Entanglement doesn't bear any relationship to intentionality. This is just magic fairy thinking again. Take one thing you don't understand and explain it with some other thing you don't understand. The problem isn't that you don't understand either subject; the problem is that the only reason you are linking them together is that you don't understand either one.
This idea has its own problems, but at least it avoids the main problem of functionalism, namely: which virtual machine is the machine whose states are the mental states?
That's not a problem at all. Each level of the multi-layered machine that is the brain is the level whose states are mental states. Describe it in terms of quantum mechanics or biochemistry or cell functions; it's the right level. It's just the description that varies.
Mind is biology is chemistry is physics. But for specific questions, it's much easier to focus your attention on a particular level in the stack. That is, after all, why we have those levels - they are more convenient models for specific classes of problem.
Posted by: Pixy Misa at Tuesday, August 23 2005 10:42 PM (RbYVY)
55
I'd better just describe a specific scenario, and then we can see how the philosophy looks. In the terms of your forum essay, it would be a monadological information idealism: a CA in which the cell states are not just generic information states, but full-fledged intentional states, at least some of the time. One should also envision the causal grid (both connectivity and cell population) as dynamical: cells can engage in relative motion, as changes in connectivity change their separation within the grid. This is like the Machian interpretation of general relativity, in which only matter exists, and space is a relational property. Finally, in conceiving of the state of a cell, one should think of a recursive data type like a tree, rather than a structure with a fixed number of arguments. In such a universe, one might implement a binary tree using several fundamental cells in simple states, but this would be ontologically different from a single fundamental cell which was directly in a tree state. In such a theory, state machines get to have genuine intentionality only if they are single fundamental cells. Otherwise, it's just a simulation. The property of "being a fundamental tree state" and the property of "being a tree state of a set of fundamental cells" are different things, and it approximates what I mean by the difference between mind and the simulation of mind.
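For concreteness only, a toy sketch of a CA whose cell states are recursive trees rather than flat symbols (the grid, states, and update rule are all invented for this illustration; it makes no claim about intentionality):

```python
# Toy 1-D cellular automaton with tree-valued cell states: each update
# builds a nested tuple from the neighbouring states, so cell states
# grow as recursive data structures rather than staying flat symbols.
def step(cells):
    """Each cell's next state is a tree built from its two neighbours."""
    n = len(cells)
    return [(cells[(i - 1) % n], cells[(i + 1) % n]) for i in range(n)]

# Start from flat "fundamental" states; after one step each cell holds
# a depth-1 tree, after two steps a depth-2 tree, and so on.
cells = [0, 1, 0, 1]
cells = step(cells)
print(cells[0])  # (1, 1) -- a tree state assembled from the neighbours
```

In these terms, the distinction drawn above is between a cell whose single state is the tree (as here) and a tree merely laid out across several cells in flat states.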
Posted by: mitchell porter at Friday, August 26 2005 01:22 AM (mr6sB)