Showing posts with label mind. Show all posts

Wednesday, September 11, 2024

One-thinker colocationism

Colocationists about human beings think that in my chair are two colocated entities: a human person and a human animal. Both of them are made of the same stuff, both of them exhibit the same physical movements, etc.

The standard argument against colocationism is the two thinkers argument. Higher animals, like chimpanzees and dogs, think. The brain of a human animal is more sophisticated than that of a chimpanzee or a dog, and hence human animals also have what it takes to think. Thus, they think. But human persons obviously think. So there are two thinkers in my chair, which is inherently absurd, and leads to other difficulties besides.

If I were a colocationist, I think I would deny that any animals think. Instead, the same kind of duplication that happens in the human case happens for all the higher animals. In my chair there is a human animal and a person, and only the person thinks. In the doghouse, there is a dog and a “derson”. In the savanna, one may have a chimpanzee and a “chimperson”. The derson and the chimperson are not persons (the chimperson comes closer than the derson does), but all three think, while their colocated animals do not. We might even suppose that the person, the derson, and the chimperson are all members of some further kind, thinker.

Suppose one’s reason for accepting colocationism about humans is intuitions about the psychological components of personal identity: if one’s psychological states were transferred into a different head, one would go with the psychological states, while the animal would stay behind, so one isn’t an animal. Then I think one should say a similar thing about other higher animals. If we think that an interpersonal relationship should follow the psychological states rather than the body of the person, we should think similarly about a relationship with one’s pet: if one’s pet’s psychological states are transferred into a different body, our concerns should follow. If Rover is having a vivid dream of chasing a ball, and we transfer Rover’s psychological states into the body of another dog, Rover would continue the dream in that other body. I don’t believe this in the human case, and I don’t believe it in the dog case, but if I believed this in the human case, I’d believe it in the dog case.

What are the reasons for the standard colocationist’s holding that the human animal thinks? One may say that because both the animal and the person have the same brain activity, that’s a reason to say that either both or neither thinks. But the brain also has the same brain activity, and so if this is one’s reason for saying that the animal thinks, we now have three thinkers. And, if there are unrestricted fusions, the mereological sum of the person with their clothes also has the same brain activity, thereby generating a fourth thinker. That’s absurd. Thus thought isn’t just a function of hosting brain activity, but hosting brain activity in a certain kind of context. And why can’t this context be partly characterized by modal characteristics, so that although both the animal and the person have the same brain activity, they provide a different modally characterized context for the brain activity, in such a way that only one of the two thinks?

This one-thinker colocationism can be either naturalistic or dualistic. On the dualistic version, we might suppose that the nonphysical mental properties belong to only one member of the pair of associated beings. On the naturalistic version, we might suppose that what it is to have a mental property is to have a physical property in a host with appropriate modal properties—the ones the person, the derson and the chimperson all have.

I think there is one big reason why a colocationist may be suspicious of this view. Ethologists sometimes explain animal behavior in terms of what the animal knows, is planning, and more generally is thinking. These explanations are all incorrect on the view in question. But the one-thinker colocationist has two potential answers to this. The first is to weaken her view and allow animals to think, but not consciously. It is only the associated non-animal that has conscious states, that has qualia. But the conscious states need not enter into behavioral explanations. The second is to say that the scientists’ explanations, while incorrect, can easily be corrected by replacing mental properties with their neural correlates.

Tuesday, June 11, 2024

A very simple counterexample to Integrated Information Theory?

I’ve been thinking a bit about Integrated Information Theory (IIT) as a physicalist-friendly alternative to functionalism as an account of consciousness.

The basic idea of IIT is that we measure the amount of consciousness in a system by subdividing the system into pairs of subsystems and calculating how well one can predict the next state of each of the two subsystems without knowing the state of the other. If there is a partition that lets you make these predictions well, then the system is considered reducible, with low integrated information, and hence low consciousness. So you look for the best-case subdivision—one where you can make the best predictions as measured by Shannon entropy with a certain normalization—and say that the amount Φ of “integrated information” in the system varies inversely with the quality of these best predictions. The amount of consciousness in the system then corresponds to the amount Φ of integrated information.

Aaronson gives a simple mathematical framework and what sure look like counterexamples: systems that intuitively don’t appear to be mind-like and yet have a high Φ value. Surprisingly, though, Tononi (the main person behind IIT) has responded by embracing these counterexamples as cases of consciousness.

In this post, I want to offer a counterexample with a rather different structure. My counterexample has an advantage and a disadvantage with respect to Aaronson’s. The advantage is that it is a lot harder to embrace my counterexample as an example of consciousness. The disadvantage is that my example can be avoided by an easy tweak to the definition of Φ.

It is even possible that my tweak is already incorporated in the official IIT 4.0. I am right now only working with Aaronson’s perhaps simplified framework (for one, his framework depends on a deterministic transition function), because the official one is difficult for me to follow. And it is also possible that I am just missing something obvious and making some mistake. Maybe a reader will point that out to me.

The idea of my example is very simple. Imagine a system consisting of two components, each of which has N possible states. At each time step, the two components swap states. There is now only one decomposition of the system into two subsystems, which makes things much simpler. And note that each subsystem’s state at time n has no predictive power for its own state at n + 1, since its state at n + 1 is simply the other subsystem’s state at n. The Shannon entropies corresponding to the best predictions are going to be log2 N, and so Φ of the system is 2 log2 N. By making N arbitrarily large, we can make Φ arbitrarily large. In fact, if we have an analog system with infinitely many states, then Φ is infinite.
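The arithmetic can be checked directly in something like Aaronson’s simplified deterministic framework. Here is a minimal sketch (the function names and the uniform-prior assumption are mine, not part of IIT proper): enumerate the N² joint states of the swap system, and for each half of the unique bipartition compute the Shannon entropy of its next state conditional on its own current state alone.

```python
import math
from collections import defaultdict

def conditional_entropy(pairs):
    """H(Y | X) in bits, for a uniform distribution over the (x, y) pairs."""
    by_x = defaultdict(list)
    for x, y in pairs:
        by_x[x].append(y)
    total = len(pairs)
    h = 0.0
    for x, ys in by_x.items():
        p_x = len(ys) / total
        counts = defaultdict(int)
        for y in ys:
            counts[y] += 1
        h_y_given_x = -sum((c / len(ys)) * math.log2(c / len(ys))
                           for c in counts.values())
        h += p_x * h_y_given_x
    return h

def swap_phi(N):
    """Sum, over the unique bipartition of the two-component swap system,
    of how unpredictable each part's next state is given only that part's
    own current state (uniform prior over joint states)."""
    # All joint states (a, b); each transitions to (b, a).
    states = [(a, b) for a in range(N) for b in range(N)]
    # Component A's next state is b; try to predict it from a alone.
    h_a = conditional_entropy([(a, b) for a, b in states])
    # Component B's next state is a; try to predict it from b alone.
    h_b = conditional_entropy([(b, a) for a, b in states])
    return h_a + h_b
```

As expected, `swap_phi(N)` comes out to 2 log2 N: each component’s own state tells you nothing about its next state, so the best “disconnected” prediction is at full entropy log2 N on each side.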

Advantage over Aaronson’s counterexamples: There is nothing the least consciousness-like in this setup. We are just endlessly swapping states between two components. That’s not consciousness. Imagine the components are hard drives and we just endlessly swap the data between them. To make it even more vivid, suppose the two hard drives have the same data, so nothing actually changes in the swaps!

Disadvantage: IIT can escape the problem by modifying the measure Φ of integrated information in some way in the special case where the components are non-binary. Aaronson’s counterexamples use binary components, so they are unaffected. Here are three such tweaks. (i) Divide Φ by the logarithm of the maximum number of states in a component (this seems ad hoc). (ii) Restrict the theory to systems with binary components, and hence require that any component with more than two possible states be reinterpreted as a collection of binary components encoding the non-binary state (but which binarization should one choose?). (iii) Define Φ of a non-binary system as the minimum of the Φ values over all possible binarizations. Either (i) or (iii) kills my counterexample.
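As a quick arithmetic check on tweak (i)—a hypothetical normalization, not part of official IIT—dividing the swap system’s raw Φ = 2 log2 N by the log of the largest component’s state space leaves a constant 2 bits, so Φ can no longer be inflated just by choosing a huge N:

```python
import math

def normalized_phi(N):
    """Tweak (i), sketched for the two-component N-state swap system:
    raw Phi = 2*log2(N), divided by log2 of the largest component's
    state space. Requires N >= 2. Hypothetical normalization."""
    raw_phi = 2 * math.log2(N)
    return raw_phi / math.log2(N)  # constant 2 bits for every N
```

The normalized value is bounded regardless of N, which is exactly why the counterexample dies under this tweak.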

Tuesday, April 16, 2024

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulation will dispositionally behave like the simulated body at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, this computationalism is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with different laws of nature than ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Friday, April 5, 2024

A weaker epiphenomenalism

A prominent objection to epiphenomenalist theories of qualia, on which qualia have no causal efficacy, is that then we have no way of knowing that we had a quale of red. For a redness-zombie, who has no quale of red, would have the very same “I am having a quale of red” thought as me, since my “I am having a quale of red” thought is not caused by the quale of red.

There is a slight tweak to epiphenomenalism that escapes this objection, and the tweaked theory seems worth some consideration. Instead of saying that qualia have no causal efficacy, on our weaker epiphenomenalism we say that qualia have no physical effects. We can then say that my “I am having a quale of red” thought is composed of two components: one of these components is a physical state ϕ2 and the other is a quale q2 constituting the subjective feeling of thinking that I am having a quale of red. After all, conscious thoughts plainly have qualia, just as perceptions do, if there are qualia at all. We can now say that the physical state ϕ2 is caused by the physical correlate ϕ1 of the quale of red, while the quale q2 is wholly or partly caused by the quale q1 of red.

As a result, my conscious thought “I am having a quale of red” would not have occurred if I lacked the quale of red. All that would have occurred would be the physical part of the conscious thought, ϕ2, which physical part is what is responsible for further physical effects (such as my saying that I am having a quale of red).

If this is right, then the induced skepticism about qualia will be limited to skepticism with respect to unconscious thoughts about qualia. And that’s not much of a skepticism!

Tuesday, April 2, 2024

Aristotelian functionalism

Rob Koons and I have argued that the best functionalist theory of mind is one where the proper function of a system is defined in an Aristotelian way, in terms of innate teleology.

When I was teaching this today, it occurred to me (it should have been obvious earlier) that this Aristotelian functionalism has the intuitive consequence that only organisms are minded. For although innate teleology may be had by substances other than organisms, inorganic Aristotelian material substances do not have the kind of teleology that would make for mindedness on any plausible functionalism. Here I am relying on Aristotle’s view that (maybe with some weird exceptions, like van Inwagen’s snake-hammock) artifacts—presumably including computers—are not substances.

If this is right, then the main counterexamples to functionalism disappear:

Recall Leibniz’s mill argument: if a machine can be conscious, a mill full of giant gears could be conscious, and yet as we walked through such a mill, it would be clear that there is no consciousness anywhere. But now suppose we were told that the mill has an innate function (not derived from the purposes of the architect) which governed the computational behavior of the gears. We would then realize that the mill is more than just what we can see, and that would undercut the force of the Leibnizian intuition. In other words, it is not so hard to believe that a mill with innate purpose is conscious.

Further, note that perhaps the best physicalist account of qualia is that qualia are grounded in the otherwise unknowable categorical features of the matter making up our brains. This, however, has a somewhat anti-realist consequence: the way our experiences feel has nothing to do with the way the objects we are experiencing are. But an Aristotelian functionalist can tell this story. If I have a state whose function is to represent red light, then I have an innate teleology that makes reference to red light. This innate teleology could itself encode the categorical features of red light, and since this innate teleology, via functionalism, grounds our perception of red light, our perception of red light is “colored” not just by the categorical features of our brains, but by the categorical features of red light (even if we are hallucinating the red light). This makes for a more realist theory of qualia, on which there is a non-coincidental connection between the external objects and how they seem to us.

Observe, also, how the Aristotelian story has advantages of panpsychism without the disadvantages. The advantage of panpsychism is that the mysterious gap between us and electrons is bridged. The disadvantages are two-fold: (a) it is highly counterintuitive that electrons are conscious (the gap is bridged too well) and (b) we don’t have a plausible story about how the consciousness of the parts gives rise to a consciousness of the whole. But on Aristotelian functionalism, it is teleology that we have in common with electrons, so we do not need to say that electrons are conscious—but because mind reduces to teleological function, though not of the kind electrons have, we still have bridging. And we can tell exactly the kind of story that non-Aristotelians do about how the function of the parts gives rise to the consciousness of the whole.

There is, however, a serious downside to this Aristotelian functionalism. It cannot work for the simple God of classical theism. But perhaps we can put a lot of stress on the idea that “mind” is only said analogously between creatures and God. I don’t know if that will work.

Functionalism and organizations

I am quite convinced that if standard (non-evolutionary, non-Aristotelian) functionalism is true, then complex organizations such as universities and nations have minds and are conscious. For it is clear to me that dogs are conscious, and the functioning of complex organizations is more intellectually sophisticated than that of dogs, and has the kind of desire-satisfaction drivers that dogs and maybe even humans have.

(I am pretty sure I posted something like this before, but I can’t find it.)

Wednesday, March 27, 2024

Knowledge of qualia

Suppose epiphenomenalism is true about qualia, so qualia are nonphysical properties that have no causal impact on anything. Let w0 be the actual world and let w1 be a world which is exactly like the actual world, except that (a) there are no qualia (so it’s a zombie world) and (b) instead of qualia, there are causally inefficacious nonphysical properties that have a logical structure isomorphic to the qualia of our world, and that occur in the corresponding places in the spatiotemporal and causal nexuses. Call these properties “epis”.

The following seems pretty obvious to me:

  1. In w1, nobody knows about the epis.

But the relationship of our beliefs about qualia to the qualia themselves seems to be exactly like the relationship of the denizens of w1 to the epis. In particular, neither are any of their beliefs caused by the obtaining of epis, nor are any of our beliefs caused by the obtaining of qualia, since both are epiphenomenal. So, plausibly:

  2. If in w1, nobody knows about the epis, then in w0, nobody knows about the qualia.

Conclusion:

  3. Nobody knows about the qualia.

But of course we do! So epiphenomenalism is false.

Wednesday, March 13, 2024

Do you and I see colors the same way?

Suppose that Mary and Twin Mary live almost exactly duplicate lives in an almost black-and-white environment. The exception to the duplication of the lives and to the black-and-white character of the environment is that on their 18th birthday, each sees a colored square for a minute. Mary sees a green square and Twin Mary sees a blue square.

Intuitively, Mary and Twin Mary have different phenomenal experiences on their 18th birthday. But while I acknowledge that this is intuitive, I think it is also deniable. We might suppose that they simply have a “new color” experience on their 18th birthday, but it is qualitatively the same “new color” experience. Maybe what determines the qualitative character of a color experience is not the physical color that is perceived, but the relationship of this color to the whole body of our experience. Given that green and blue have the same relationship to the other (i.e., monochromatic) color experiences of Mary and Twin Mary, it may be that they appear the same way.

If this kind of relationalism is correct, then it is very likely that when you and I look at the same blue sky, our experiences are qualitatively different. Your phenomenal experience is defined by its position in the network of your experiences and mine is defined by its position in the network of my experiences. Since these networks are different, the experiences are different. Somehow I find this idea somewhat plausible. It is even more plausible for some experiences other than colors. Take tastes and smells. It’s not unlikely that fried cabbage tastes different to me because in the network of my experiences it has connections to experiences of my grandmother’s cooking that it does not have in your network.

Such a relationalism could help explain the wide variation in sensory preferences. We normally suppose that people disagree on which tastes they like and dislike. But what if they don’t? What if instead the phenomenal tastes are different? What if banana muffins, which I dislike, taste differently to me than they do to most people, because they have a place in a different network of experiences, and if banana muffins tasted to me like they do to you, I would like them just as much?

In his original Mary thought experiment, Jackson says that monochrome Mary upon experiencing red for the first time learns what experience other people were having when they saw a red tomato. If the above hypothesis is right, she doesn’t learn that at all. Other people’s experiences of a red tomato would be very different from Mary’s, because Mary’s monochrome upbringing would place the red tomato in a very different network of experiences from that which it has in other people’s networks of experiences. (I don’t think this does much damage to the thought experiment as an argument against physicalism. Mary still seems to learn something—what it is to have an experience occupying such-and-such a spot in her network of experiences.)

More fun with monochrome Mary

Here’s a fun variant of the black-and-white Mary thought experiment. Mary has been brought up in a black-and-white environment, but knows all the microphysics of the universe from a big book. One day she sees a flash of green light. She gains the phenomenal concept α that applies to the specific look of that flash. But does Mary know what green light looks like?

You might think she knows because her microphysics book will inform her that on such-and-such a day, there was a flash of green light in her room, and so she now knows that a flash of green light has appearance α. But that is not quite right. A microphysics book will not tell Mary that there was a flash of green light in her room. It will tell her that there was a flash of green light in a room with such-and-such physical properties. Whether she can deduce from these properties and her observations that this was her room depends on what the rest of the universe is like. If the universe contains Twin Mary who lives in a room with exactly the same monochromatically observable properties as Mary’s room, but where at the analogous time there is a flash of blue light, then Mary will have no way to resolve the question of whether she is the woman in the room with the green flash or in the room with the blue flash. And so, even though Mary knows all the microphysical facts about the world, Mary doesn’t know whether it is a green flash or a blue flash that has appearance α.

This version of the Mary thought experiment seems to show that there is something very clear, specific and even verbalizable (since Mary can stipulate a term in her language to express the concept α, though if Wittgenstein is right about the private language argument, we might require a community of people living in Mary’s predicament) that can remain unknown even when one knows all the microphysical facts and has all the relevant concepts and has had the relevant experiences: whether it is green or blue light that has appearance α.

This seems to do quite a bit of damage to physicalism, by showing that the correlation between phenomenal appearances and physical facts is a fact about the world going beyond microphysics.

But now suppose Joan lives on Earth in a universe which contains both Earth and Twin Earth. The denizens of both planets are prescientific, and at their prescientific level of observation, everything is exactly alike between Earth and Twin Earth. Finer-grained observation, however, would reveal that Earth’s predominant surface liquid is H2O while Twin Earth’s is XYZ, but currently there is no difference. Now, Joan reads a book that tells her in full detail all the microphysical structure of the universe.

Having read the book, Joan wonders: Is water H2O or is it XYZ? Just by reading the book, she can’t know! The reason she doesn’t know it is because her prescientific observations combined with the contents of the book are insufficient to inform her whether she lives on Earth or on Twin Earth, whether she is Joan or Twin Joan, and hence are insufficient to inform her whether the liquid she refers to as “water” is H2O or XYZ.

But surely this shouldn’t make us abandon physicalism about water!

Now Joan and Twin Joan both have concepts that they verbalize as “water”. The difference between these concepts is entirely external to Joan and Twin Joan—the difference comes entirely from the identity of the liquid, interaction with which gave rise to the respective concepts. The concepts are essentially ostensive in their differences. In other words, Joan’s ignorance of whether water is H2O or XYZ is basically an ignorance of self-locating fact: is she in the vicinity of H2O or in the vicinity of XYZ?

Is this true for Mary and Twin Mary? Can we say that Mary’s ignorance of whether it is a green or a blue flash that has appearance α is essentially an ignorance of self-locating facts? Can we say that the difference between Mary’s phenomenal concept formed from the green flash and Twin Mary’s phenomenal concept formed from the blue flash is an external difference?

Intuitively, the answer to both questions is negative. But the point is not all that clear to me. It could turn out that both Mary and Twin Mary have a purely comparative recognitive concept of “the same phenomenal appearance as that flash”, together with an ability to recognize that similarity, and with the two concepts being internally exactly alike. If so, then the argument is unconvincing as an argument against physicalism.

Tuesday, March 12, 2024

The epistemic gap and causal closure

In the philosophical literature, the main objection to physicalism about consciousness is the epistemic gap: the alleged fact that full knowledge of the physical does not yield full knowledge of the mental. And one of the main objections to nonphysicalism about consciousness is causal closure: the alleged fact that physical events, like our actions, have causes that are entirely physical.

There is a simple way to craft a theory that avoids both objections. Simply suppose that mental states have two parts: a physical and a non-physical part. The physical part of the mental state is responsible for the mental state’s causal influence on physical reality. The non-physical part explains the epistemic gap: full knowledge of the physical world yields full knowledge of the physical part of the mental state, but not full knowledge of the mental state.

Wednesday, February 14, 2024

Yet another tweak of the knowledge argument against physicalism

Here is a variant on the knowledge argument:

  1. All empirical facts a priori follow from the fundamental facts.

  2. The existence of consciousness does not a priori follow from the fundamental physical facts.

  3. The existence of consciousness is an empirical fact.

  4. Thus, there are fundamental facts that are not fundamental physical facts.

In support of 2, note that we wouldn’t be able to tell which things are conscious by knowing their physical constitution without some a posteriori data like “When I say ‘ouch’, I am conscious.”

Tuesday, February 13, 2024

Physicalism and "pain"

Assuming physicalism, plausibly there are a number of fairly natural physical properties that occur when and only when I am having a phenomenal experience of pain, all of which stand in the same causal relations to other relevant properties of me. For instance:

  (a) having a brain in neural state N

  (b) having a human brain in neural state N

  (c) having a primate brain in neural state N

  (d) having a mammalian brain in neural state N

  (e) having a brain in functional state F

  (f) having a human brain in functional state F

  (g) having a primate brain in functional state F

  (h) having a mammalian brain in functional state F

  (i) having a central control system in functional state F.

Suppose that one of these is in fact identical with the phenomenal experience of pain. But which one? The question is substantive and ethically important. If, for instance, the answer is (c), then cats and computers in principle couldn’t feel pain but chimpanzees could. If the answer is (i), then cats and computers and chimpanzees could all feel pain.

It is plausible on physicalism (e.g., Loar’s version) that my concept of pain refers to a physical property by ostension—I am ostending to the state that occurs in me in all and only the cases where I am in pain, and which has the right kind of causal connection to my pain behaviors. But there are many such states, as we saw above.

We might try to break the tie by saying that by reference magnetism I am ostending to the simplest physical state that has the above role, and the simplest one is probably (i). I don’t think this is plausible. Assuming naturalism, when multiple properties of a comparable degree of naturalness play a given role, ostension via the role is likely to be ambiguous, with ambiguity needing to be broken by a speaker or community decision. At some point in the history of biology, we had to decide whether to use “fish” at a coarse-grained functional level and include dolphins and whales as fish, or at a finer-grained level and get the current biological concept. One option might be a little more natural than the other, but neither is decisively more natural (any fish concept that has a close connection to ordinary language is going to have to be paraphyletic), and so a decision was needed. And even if (i) is somewhat simpler than (a)–(h), it is not decisively more natural.

This yields an interesting variant of the knowledge argument against physicalism.

  1. If “pain” refers to a physical property, it is a “merely semantic” question, one settled by linguistic decision, whether “pain” could apply to an appropriately programmed computer.

  2. It is not a “merely semantic” question, one settled by linguistic decision, whether “pain” could apply to an appropriately programmed computer.

  3. Thus, “pain” does not refer to a physical property.

Thursday, February 8, 2024

Supervenience and counterfactuals

On typical functionalist views of mind, what mental states a physical system has depends on counterfactual connections between physical properties in that system. But we can have two worlds that are exactly the same physically—have exactly the same tapestry of physical objects, properties and relations—but differ in what counterfactual connections hold between the physical properties. To see that, just imagine that one of the two worlds is purely physical, and in that world, w1, striking a certain match causes a fire, and:

  1. Were that match not struck, there would have been no fire.

But now imagine another world, w2, which is physically exactly the same, but there is a nonphysical spirit who wants the fire to happen, and who will miraculously cause the fire if the match is not struck. But since the match is struck, the spirit does nothing. In w2, the counterfactual (1) is false. (This is of course just a Frankfurt case.)

Thus physicalist theories where counterfactual connections are essential are incompatible with supervenience of the mental upon the physical.

I suppose one could insist that the supervenience base has to include counterfactual facts, and not just physical facts. But this is problematic. Even in purely physical worlds, counterfactual facts are not grounded in physical facts, but in physical facts combined with the absence of spirits, ghosts, etc. And in worlds that are only partly physical, counterfactual connections between physical facts may be grounded in the dispositions of non-physical entities.

Something Mary doesn't know

Here is something our old friend Mary, raised in a black and white world, cannot know simply by knowing all of physics:

  1. What are the necessary and sufficient physical conditions for two individuals to be in exactly the same phenomenal state?

Of course, her being raised in a black and white world is a red herring. I think nobody can know the answer to (1) simply by knowing all of physics.

Some remarks:

  • Knowledge of the answer to (1) is clearly factual descriptive knowledge. So responses to the standard knowledge argument for dualism that distinguish kinds of knowledge have no effect here.

  • The answer to (1) could presumably be formulated entirely in the language of physics.

  • Question (1) has a presupposition, namely that there are necessary and sufficient physical conditions, but the physicalist can’t deny that.

  • A sufficient condition is easy given physicalism: the individuals have the exact same physical state.

  • Dennettian RoboMary-style simulation does not solve the question. One might hope that if you rewrite your software, you can check if you have the same qualia before and after the rewrite. But the problem is that you can only really do exact comparisons of qualia that you see in a unified way, and there is insufficient unification of your state across the software rewrite.

Tuesday, February 6, 2024

The hand and the moon

Suppose Alice tells me: “My right hand is identical with the moon.”

My first reaction will be to suppose that Alice is employing some sort of metaphor, or is using language in some unusual way. But suppose that further conversation rules out any such hypotheses. Alice is not claiming some deep pantheistic connection between things in the universe, or holding that her hand accompanies her like the moon accompanies the earth, or anything like that. She is literally claiming of the object that the typical person will identify as “Alice’s hand” that it is the very same entity as the typical person will identify as “the moon”.

I think I would be a little stymied at this point. Suppose I expressed this puzzlement to Alice, and she said: “An oracle told me that over the next decade my hand will swell to enormous proportions, and will turn hard and rocky, the exact size and shape of the moon. Then aliens will amputate the hand, put it in a giant time machine, send it back 4.5 billion years, so that it will orbit the earth for billions of years. So, you see, my hand literally is the moon.”

If Alice isn’t pulling my leg, she is insane to think this. But now I can make some sense of her communication. Yes, she really is using words in the ordinary and literal sense.

Now, to some dualist philosophers the claim that a mental state of feeling sourness is an electrochemical process in the brain is about as weird as the claim that Alice’s hand is the moon. I’ve never found this “obvious difference” argument very plausible, despite being a dualist. Thinking through my Alice story makes me a little more sympathetic to the idea that there is something incredible about the mental-physical identity claim. But I think there is an obvious difference between the hand = moon and quale = brain-state claims. The hand and the moon obviously have incompatible properties: the colors are different, the shapes are different, etc. Some sort of an explanation is needed how that can happen despite identity—say, time-travel.

The analogue would be something like this: the quale doesn’t have a shape, while the brain process does. But it doesn’t seem at all clear to me that the quale positively doesn’t have a shape. It’s just that it is not the case that it positively seems to have a shape. Imagine that qualia turned out to be nonphysical but spatially extended entities spread through regions of the brain, kind of like a ghost is a nonphysical but spatially extended entity. There is nothing obvious about the falsity of this hypothesis. And on this hypothesis, qualia would have shape.

To be honest, I suspect that even if qualia don’t have a shape, God could give them the additional properties (say, the right relation to points of space) that would give them shape.

Computationalism and subjective time

Conway’s Game of Life is Turing complete. So if computationalism about mind is true, we can have conscious life in the Game of Life.

Now, consider a world C (for Conway) with three discrete spatial dimensions, x, y and z, and one temporal dimension, t, discrete or not. Space is thus a three-dimensional regular grid. In addition to various “ordinary” particles that occupy the three spatial dimensions and have effects forwards along the temporal dimension, C also has two special particle types, E and F.

The causal powers of the E and F particles have effects simultaneous with their causes, and follow a spatialized version of the rules of the Game of Life. Say that a particle at coordinates (x0,y0,z0) has as its “neighbors” particles at the eight grid points with the same z coordinate z0 and surrounding (x0,y0,z0). Then posit these causal powers of an E (for “empty”) or F (for “full”) particle located at (x0,y0,z0):

  • If it’s an F particle and has exactly two or three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • If it’s an E particle and has exactly three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • Otherwise, it instantaneously causes an E particle at (x0,y0,z0+1).

Furthermore, suppose that along the temporal axis, the E and F particles are evanescent: if they appear at some time, they exist only at that time, perishing right after.

In other words, once some E and F particles occur somewhere in space, they instantly propagate more E and/or F particles to infinity along the z-axis, all of which particles then perish by the next moment of time.
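The propagation rule above is just the standard Life update read along the z-axis instead of along time. Here is a minimal sketch of one layer of that propagation (the function name and the set-of-coordinates representation are my own illustration, not anything from the post): given the set of (x, y) grid points holding F particles at layer z, it returns the F particles instantaneously caused at layer z + 1.

```python
from collections import Counter

def next_layer(full_cells):
    """Given the set of (x, y) grid points holding F ("full") particles at
    layer z, return the F particles instantaneously caused at layer z + 1.
    Every other grid point implicitly holds an E ("empty") particle."""
    # Count, for each grid point, how many of its eight neighbors are F.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in full_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # An F with exactly two or three F neighbors, or an E with exactly
    # three, causes an F one layer up; everything else causes an E.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in full_cells)
    }

# A "blinker" oscillates between two shapes as it propagates up the z-axis,
# all within a single instant of the world's objective time.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Since each layer causes the next simultaneously, iterating next_layer traces out the entire computation at one objective instant, which is exactly what the evanescence of the E and F particles exploits.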

Given the Turing completeness of the Game of Life and a computational theory of mind, the E and F particles can compute whatever is needed for a conscious life. We have, after all, a computational isomorphism between an appropriately arranged E and F particle system in C and any digital computer system in our world.

But because the particles are evanescent, that conscious life—with all its subjective temporal structure—will happen all at once according to the objective time of its world!

If this is right, then on a computational theory of mind, we can have an internal temporally structured conscious life with a time sequence that has nothing to do with objective time.

One can easily get out of this consequence by stipulating that mind-constituting computations must be arranged in an objective temporal direction. But I don’t think a computationalist should add this ad hoc posit. It is better, I think, simply to embrace the conclusion that internal subjective time need not have anything to do with external time.

Monday, February 5, 2024

Physicalism, consciousness and history

Many physicalists think that conscious states are partly constituted by historical features of the organism. For instance, they think that Davidson’s swampman (who is a molecule-by-molecule duplicate of Davidson randomly formed by lightning hitting a swamp) does not have conscious states, because swampman lacks the right history (on some views, one just needs a history of earlier life, and on others, one needs the millennia of evolutionary history).

I want to argue that probably all physicalists should agree that conscious states are partly constituted by historical features.

For if there is no historical component to the constitution of a conscious state, and physicalism is true, then conscious states are constituted by the simultaneous arrangement of spatially disparate parts of the brain. But consciousness is not relative to a reference frame, while simultaneity is.

Here’s another way to see the point. Suppose that conscious states are not even partly constituted by the past. Then, surely, they are also not even partly constituted by the future. In other words, conscious states are fully constituted by how things are on an infinitesimally thin time-slice. On that view, it would be possible for a human-like being, Alice, to exist only for an instant and to be conscious at that instant. But now imagine that in inertial reference frame F, Alice is a three-dimensional object that exists only at an instant. Then it turns out that in every frame other than F, Alice’s intersection with a simultaneity hyperplane is two-dimensional—but she also has a non-empty intersection with more than one simultaneity hyperplane. Consequently, in every frame other than F, Alice exists for more than an instant, but is two-dimensional at every time. A two-dimensional slice of a human brain can’t support consciousness, so in no frame other than F can Alice be conscious. But then consciousness is frame-relative, which is absurd.
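The two-dimensionality claim can be checked against the standard Lorentz transformation (a supplementary sketch, with the boost taken along the x-axis):

```latex
% Frame F' moves at speed v along the x-axis relative to F, with
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
% Alice occupies the hyperplane t = 0, so on her world-slice
t' = -\frac{\gamma v x}{c^2}.
% Hence a fixed simultaneity slice t' = k of F' meets Alice only where
x = -\frac{c^2 k}{\gamma v},
% i.e., at a single x-value: a two-dimensional (y,z)-plane. And since
% Alice's body spans an interval of x-values, she meets a whole interval
% of t'-values, so in F' she exists for more than an instant.
```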

Once we have established:

  1. If physicalism is true, conscious states are partly constituted by historical features,

it is tempting to add:

  2. Conscious states are not even partly constituted by historical features.

  3. So, physicalism is not true.

But I am not very confident of (2).

If materialism is true, we can't die in constant pain

Here is an unfortunate fact:

  1. The last minute of your life can consist of constant conscious pain.

Of course, I think all pain is conscious, but I might as well spell it out. The modality of the “can” in this post will be something fairly ordinary, like some sort of nomic possibility.

Now say that a reference frame is “ordinary for you” provided that it is a reference frame corresponding to something moving no more than 100 miles per hour relative to your center of mass.

Next, note that switching between reference frames should not turn pain into non-pain: consciousness is not reference-frame relative. Thus:

  2. If the last minute of your life consists of constant conscious pain, then in every reference frame that is ordinary for you, in the last half-minute of your life you are in constant conscious pain.

Relativistic time-dilation effects of differences between “ordinary” frames will very slightly affect how long your final pre-death segment of pain is, but will not shorten that segment by even one second, and certainly not by 30 seconds.

Next add:

  3. If materialism is true, then you cannot have a conscious state when you are the size of a handful of atoms.

Such a small piece of the human body is not enough for consciousness.

But now (1)–(3) yield an argument against materialism. I have shown here that, given the simplifying assumption of special relativity, in almost every reference frame, and in particular in some ordinary frames, your life will end with you being the size of a handful of atoms. If materialism is true, in those frames towards the very end of your life you will have to exist without consciousness by (3), and in particular you won’t be able to have constant conscious pain (or any other conscious state) for your last half-minute.

Friday, February 2, 2024

Consciousness and plurality

One classic critique of Descartes’ “cogito ergo sum” is that perhaps there can be thought without a subject. Perhaps the right thing to say about thought is feature-placing language like “It’s thinking”, understood as parallel to “It’s raining” or “It’s sunny”, where there really is no entity that is raining or sunny, but English grammar requires a subject so we throw in an “it”.

There is a more moderate option, though, that I think deserves a bit more consideration. Perhaps thought has an irreducibly plural subject, and in a language that expresses the underlying metaphysics better, we should say “The neurons are (collectively) thinking” or maybe even “The particles are (collectively) thinking.” On this view, thought is a relation that holds between a plurality of objects, without these objects making up a whole that thinks. This, for instance, is a very natural view for a physicalist who is a compositional nihilist (i.e., one who thinks that only simples exist).

It seems to me that it is hard to reject this view if one’s only data is the fact of consciousness, as it is for Descartes. What kills the three-dimensionalist version of this view, in my opinion, is that it cannot do justice to the identity of the thinker over time, since there would be different pluralities of neurons or particles engaged in the thinking over time. And a four-dimensionalist version cannot do justice to the identity of the thinker in counterfactual scenarios. However, this data isn’t quite as self-evident as what Descartes wants.

In any case, I think this is a view that non-naturalists like me need to take pretty seriously.

Tuesday, January 23, 2024

Unconscious aliens

Lately I’ve been starting my philosophy of mind course with Carolyn Gilman’s short story about unconscious but highly intelligent aliens.

We can imagine such aliens having thoughts, beliefs, concepts, representational and motivational states. After all, we have beliefs even when totally unconscious, and we have subconscious thoughts, concepts, as well as representational and motivational states.

I’ve wondered what unconscious aliens would think about our philosophical arguments about physicalism and consciousness. They might not have the concept of consciousness or of an experiential state, but they could have the concept of “that special mode of representing reality that humans have and we don’t”. And so now I ask myself: Would these aliens have any reason to think that consciousness-based arguments for dualism have any force? Would they have any reason to think that “special mode” is a non-physical mode?

Of course, the aliens might be convinced of dualism on the basis of intentionality arguments. But would something about humans give them additional evidence of dualism about humans?

The aliens shouldn’t be surprised to discover that humans when awake have some ways of processing inputs that they themselves don’t, nor should that give any evidence for dualism. Neither should the presence of some special “phenomenological” vocabulary in humans for describing such processing.

But I think what should give the aliens some evidence is the conviction that many humans have that their “experiences” lack physical properties, that they are categorically different from physical properties and things. If someone describes an object of sensory perception as lacking color, that gives one reason to think the object indeed lacks color. If someone describes the object of introspective perception as lacking charge or mass, that gives one reason to think the object indeed lacks charge or mass.

The aliens would need to then consider the fact that some people have the conviction and others do not, and try to figure out which ones are doing a better job learning from their introspection.