
Monday, July 1, 2024

Duplicating electronic consciousnesses

Assume naturalism and suppose that digital electronic systems can be significantly conscious. Suppose Alice is a deterministic significantly conscious digital electronic system. Imagine we duplicated Alice to make another such system, Bob, and fed them both the same inputs. Then there are two conscious beings with qualitatively the same stream of consciousness.

But now let’s add a twist. Suppose that we create a monitoring system that continually checks all of Alice and Bob’s components, and as soon as any corresponding components disagree—are in a different state—then the system pulls the plug on both, thereby resetting all components to state zero. In fact, however, everything works well, and the inputs are always the same, so there is never any deviation between Alice and Bob, and the monitoring system never does anything.

What happens to the consciousnesses? Intuitively, neither Alice nor Bob should be affected by a monitoring system that never actually does anything. But it is not clear that this is the conclusion that specific naturalist theories will yield.

First, consider functionalism. Once the monitoring system is in place, both Alice and Bob change with respect to their dispositional features. All the subsystems of Alice are now incapable of producing any result other than one synchronized to Bob’s subsystems, and vice versa. I think a strong case can be made that on functionalism, Alice and Bob’s subsystems lose their defining functions when the monitoring system is in place, and hence lose consciousness. Therefore, on functionalism, consciousness has an implausible extrinsicness to it. The duplication-plus-monitoring case is some evidence against functionalism.

Second, consider Integrated Information Theory. It is easy to see that the whole system, consisting of Alice, Bob and the monitoring system, has a very low Φ value. Its components can be thought of as just those of Alice and Bob, but with a transition function that sets everything to zero if there is a deviation. We can now split the system into two subsystems: Alice and Bob. Each subsystem’s behavior can be fully predicted from that subsystem’s state plus one additional bit of information that represents whether the other system agrees with it. Because of this, the Φ value of the system is at most 2 bits, and hence the system as a whole has very, very little consciousness.

Moreover, Alice remains significantly conscious: we can think of Alice as having just as much integrated information after the monitoring system is attached as before, but now having one new bit of environmental dependency, so the Φ measure does not change significantly when the monitoring is added. Further, because the joint system is not significantly conscious, Integrated Information Theory’s proviso that a system loses consciousness when it comes to be in a part-to-whole relationship with a more conscious system is irrelevant.

Likewise, Bob remains conscious. So far everything seems perfectly intuitive. Adding a monitoring system doesn’t create a new significantly conscious system, and doesn’t destroy the two existing conscious systems. However, here is the kicker. Let X be any subsystem of Alice’s components. Let SX be the system consisting of the components in X together with all of Bob’s components that don’t correspond to the components in X. In other words, SX is a mix of Alice’s and Bob’s components. It is easy to see that the information-theoretic behavior of SX is exactly the same as the information-theoretic behavior of Alice (or of Bob for that matter). Thus, the Φ value of SX will be the same for all X.

Hence, on Integrated Information Theory, each of the SX systems will be equally conscious. The number of these systems equals 2^n, where n is the number of components in Alice. Of course, one of these 2^n systems is Alice herself (that’s SA, where A is the set of Alice’s components) and another is Bob himself (that’s S∅, where ∅ is the empty set). Conclusion: By adding a monitoring system to our Alice and Bob pair, we have created a vast number of new equally conscious systems: 2^n − 2 of them!
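To make the counting concrete, here is a toy sketch of my own (the component labels are hypothetical): for n components there are 2^n choices of X, and all but two of the resulting systems are genuinely mixed.

```python
# Toy illustration (hypothetical labels): counting the mixed systems S_X.
from itertools import chain, combinations

alice = ["a1", "a2", "a3"]                      # Alice's components (n = 3)
bob = {"a1": "b1", "a2": "b2", "a3": "b3"}      # Bob's corresponding components

def mixed_systems(components):
    """Yield S_X for every subset X of Alice's components."""
    subsets = chain.from_iterable(
        combinations(components, k) for k in range(len(components) + 1))
    for x in subsets:
        yield [c if c in x else bob[c] for c in components]

systems = list(mixed_systems(alice))
assert len(systems) == 2 ** len(alice)    # 2^n systems in all
assert ["a1", "a2", "a3"] in systems      # S_A is Alice herself
assert ["b1", "b2", "b3"] in systems      # S_(empty set) is Bob himself
# The remaining 2^n - 2 systems are genuinely mixed Alice/Bob systems.
```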

The ethical consequences are very weird. Suppose that Alice has some large number of components, say 10^11 (that’s roughly how many neurons we have). We duplicate Alice to create Bob. We’ve doubled the number of beings with whatever interests Alice had. And then we add a dumb monitoring system that pulls the plug given a deviation between them. Suddenly we have created 2^(10^11) − 2 systems with the same level of consciousness. Suddenly, the moral consideration owed to the Alice/Bob line of consciousness vastly outweighs everything else.

So both functionalism and Integrated Information Theory have trouble with our duplication story.

Monday, June 10, 2024

Computation

I’ve been imagining a very slow embodiment of computation. You have some abstract computer program designed for a finite-time finite-space subset of a Turing machine. And now you have a big tank of black and white paint that is constantly being stirred in a deterministic way, but one that is some ways into the ergodic hierarchy: it’s weakly mixing. If you leave the tank for eternity, every so often the paint will make some seemingly meaningful patterns. In particular, on very rare occasions in the tank one finds an artistic drawing of the next step of the Turing machine’s functioning while executing that program—it will be the drawing of a tape, a head, and various symbols on the tape. Of course, in between these steps will be millennia of garbage.

In fact, it turns out that (with probability one) there will be some specific number n of years such that the correct first step of the Turing machine’s functioning will be drawn in exactly n years, the correct second step in exactly 2n years, the correct third one in exactly 3n years, and so on (remembering that there is only a finite number of steps, since we are working with a finite-space subset). (Technically, this is because weak mixing implies multiple weak mixing.) Moreover, each step causally depends on the preceding one. Will this be computation? Will the tank of paint be running the program in this process?

Intuitively, no. For although we do have causal connections between the state in n years and the next state in 2n years and so on, those connections are too counterfactually fragile. Let’s say you took the artistic drawing of the Turing machine in the tank at the first step (namely in n years) and you perturbed some of the paint particles in a way that makes no visible difference to the visual representation. Then probably by 2n years things would be totally different from what they should be. And if you changed the drawing to a drawing of a different Turing machine state, the every-n-years evolution would also change.

So it seems that for computation we need some counterfactual robustness. In a real computer, physical states define logical states in an infinity-to-one way (infinitely many “small” physical voltages count as a logical zero, and infinitely many “larger” physical voltages count as a logical one). We want to make sure that if the physical states were different but not sufficiently different to change the logical states, this would not be likely to affect the logical states in the future. And if the physical states were different enough to change the logical states, then the subsequent evolution would likely change in an orderly way. Not so in the paint system.
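Here is a toy sketch of my own, with hypothetical threshold voltages and a hypothetical inverter, of the kind of robustness at issue: perturbations that stay within a logic band change neither the current logical state nor its successor.

```python
# Toy sketch with hypothetical thresholds: voltages at or below 0.63 read as
# logical 0, voltages at or above 1.17 read as logical 1.
def logical(v):
    if v <= 0.63:
        return 0
    if v >= 1.17:
        return 1
    return None  # forbidden band: no well-defined logical state

def physical_step(v):
    """A toy inverter described at the voltage level."""
    return 1.8 if v <= 0.63 else 0.2

for v in (0.10, 0.25, 0.40):                 # different physical realizations...
    assert logical(v) == 0                   # ...of one and the same logical state
    assert logical(physical_step(v)) == 1    # ...with one and the same logical successor
```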

But the counterfactual robustness is tricky. Imagine a Frankfurt-style counterfactual intervener who is watching your computer while your computer is computing ten thousand digits of π. The observer has a very precise plan for all the analog physical states of your computer during the computation, and if there is the least deviation, the observer will blow up the computer. Fortunately, there is no deviation. But now with the intervener in place, there is no counterfactual robustness. So it seems the computation has been destroyed.

Maybe it’s fine to say it has been destroyed. The question of whether a particular physical system is actually running a particular program seems like a purely verbal question.

Unless consciousness is defined by computation. For whether a system is conscious, or at least conscious in a particular specific way, is not a purely verbal question. If consciousness is defined by computation, we need a mapping between physical states and logical computational states, and what that mapping is had better not be a purely verbal question.

Tuesday, April 16, 2024

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulation dispositionally will behave like the simulated body at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, computationalism so defined is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with different laws of nature than ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Tuesday, April 2, 2024

Aristotelian functionalism

Rob Koons and I have argued that the best functionalist theory of mind is one where the proper function of a system is defined in an Aristotelian way, in terms of innate teleology.

When I was teaching on this today, it occurred to me (it should have been obvious earlier) that this Aristotelian functionalism has the intuitive consequence that only organisms are minded. For although innate teleology may be had by substances other than organisms, inorganic Aristotelian material substances do not have the kind of teleology that would make for mindedness on any plausible functionalism. Here I am relying on Aristotle’s view that (maybe with some weird exceptions, like van Inwagen’s snake-hammock) artifacts—presumably including computers—are not substances.

If this is right, then the main counterexamples to functionalism disappear:

Recall, for instance, Leibniz’s mill argument: if a machine can be conscious, a mill full of giant gears could be conscious, and yet as we walked through such a mill, it would be clear that there is no consciousness anywhere. But now suppose we were told that the mill has an innate function (not derived from the purposes of the architect) which governs the computational behavior of the gears. We would then realize that the mill is more than just what we can see, and that would undercut the force of the Leibnizian intuition. In other words, it is not so hard to believe that a mill with innate purpose is conscious.

Further, note that perhaps the best physicalist account of qualia is that qualia are grounded in the otherwise unknowable categorical features of the matter making up our brains. This, however, has a somewhat anti-realist consequence: the way our experiences feel has nothing to do with the way the objects we are experiencing are. But an Aristotelian functionalist can tell this story. If I have a state whose function is to represent red light, then I have an innate teleology that makes reference to red light. This innate teleology could itself encode the categorical features of red light, and since this innate teleology, via functionalism, grounds our perception of red light, our perception of red light is “colored” not just by the categorical features of our brains, but by the categorical features of red light (even if we are hallucinating the red light). This makes for a more realist theory of qualia, on which there is a non-coincidental connection between the external objects and how they seem to us.

Observe, also, how the Aristotelian story has advantages of panpsychism without the disadvantages. The advantage of panpsychism is that the mysterious gap between us and electrons is bridged. The disadvantages are two-fold: (a) it is highly counterintuitive that electrons are conscious (the gap is bridged too well) and (b) we don’t have a plausible story about how the consciousness of the parts gives rise to a consciousness of the whole. But on Aristotelian functionalism, it is teleology that we have in common with electrons, so we do not need to say that electrons are conscious—but because mind reduces to teleological function, though not of the kind electrons have, we still have bridging. And we can tell exactly the kind of story that non-Aristotelians do about how the function of the parts gives rise to the consciousness of the whole.

There is, however, a serious downside to this Aristotelian functionalism. It cannot work for the simple God of classical theism. But perhaps we can put a lot of stress on the idea that “mind” is only said analogously between creatures and God. I don’t know if that will work.

Functionalism and organizations

I am quite convinced that if standard (non-evolutionary, non-Aristotelian) functionalism is true, then complex organizations such as universities and nations have minds and are conscious. For it is clear to me that dogs are conscious, and the functioning of complex organizations is more intellectually sophisticated than that of dogs, and has the kind of desire-satisfaction drivers that dogs and maybe even humans have.

(I am pretty sure I posted something like this before, but I can’t find it.)

Thursday, February 8, 2024

Supervenience and counterfactuals

On typical functionalist views of mind, what mental states a physical system has depends on counterfactual connections between physical properties in that system. But we can have two worlds that are exactly the same physically—have exactly the same tapestry of physical objects, properties and relations—but differ in what counterfactual connections hold between the physical properties. To see that, just imagine that one of the two worlds is purely physical, and in that world, w1, striking a certain match causes a fire, and:

  1. Were that match not struck, there would have been no fire.

But now imagine another world, w2, which is physically exactly the same, but there is a nonphysical spirit who wants the fire to happen, and who will miraculously cause the fire if the match is not struck. But since the match is struck, the spirit does nothing. In w2, the counterfactual (1) is false. (This is of course just a Frankfurt case.)

Thus physicalist theories where counterfactual connections are essential are incompatible with supervenience of the mental upon the physical.

I suppose one could insist that the supervenience base has to include counterfactual facts, and not just physical facts. But this is problematic. Even in purely physical worlds, counterfactual facts are not grounded in physical facts, but in physical facts combined with the absence of spirits, ghosts, etc. And in worlds that are only partly physical, counterfactual connections between physical facts may be grounded in the dispositions of non-physical entities.

Tuesday, February 6, 2024

Computationalism and subjective time

Conway’s Game of Life is Turing complete. So if computationalism about mind is true, we can have conscious life in the Game of Life.

Now, consider a world C (for Conway) with three discrete spatial dimensions, x, y and z, and one temporal dimension, t, discrete or not. Space is thus a three-dimensional regular grid. In addition to various “ordinary” particles that occupy the three spatial dimensions and have effects forwards along the temporal dimension, C also has two special particle types, E and F.

The causal powers of the E and F particles have effects simultaneous with their causes, and follow a spatialized version of the rules of the Game of Life. Say that a particle at coordinates (x0,y0,z0) has as its “neighbors” particles at the eight grid points with the same z coordinate z0 and surrounding (x0,y0,z0). Then posit these causal powers of an E (for “empty”) or F (for “full”) particle located at (x0,y0,z0):

  • If it’s an F particle and has exactly two or three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • If it’s an E particle and has exactly three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • Otherwise, it instantaneously causes an E particle at (x0,y0,z0+1).

Furthermore, suppose that along the temporal axis, the E and F particles are evanescent: if they appear at some time, they exist only at that time, perishing right after.

In other words, once some E and F particles occur somewhere in space, they instantly propagate more E and/or F particles to infinity along the z-axis, all of which particles then perish by the next moment of time.
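Here is a minimal sketch of my own of the spatialized rule: given the set of F-particle locations at depth z, it computes the F-particle locations at depth z+1 that they instantaneously cause (E particles are simply the remaining grid points).

```python
# Minimal sketch (my own) of the spatialized Life rule along the z-axis.
from collections import Counter

def next_layer(f_cells):
    """f_cells: set of (x, y) pairs holding F particles at depth z.
    Returns the (x, y) pairs holding F particles at depth z + 1."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in f_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in f_cells)}

# A "blinker" pattern oscillates with period 2 as we move along the z-axis.
layer = {(0, -1), (0, 0), (0, 1)}
assert next_layer(next_layer(layer)) == layer
```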

Given the Turing completeness of the Game of Life and a computational theory of mind, the E and F particles can compute whatever is needed for a conscious life. We have, after all, a computational isomorphism between an appropriately arranged E and F particle system in C and any digital computer system in our world.

But because the particles are evanescent, that conscious life—with all its subjective temporal structure—will happen all at once according to the objective time of its world!

If this is right, then on a computational theory of mind, we can have an internal temporally structured conscious life with a time sequence that has nothing to do with objective time.

One can easily get out of this consequence by stipulating that mind-constituting computations must be arranged in an objective temporal direction. But I don’t think a computationalist should add this ad hoc posit. It is better, I think, simply to embrace the conclusion that internal subjective time need not have anything to do with external time.

Thursday, September 28, 2023

Humeanism about causation and functionalism about mind

Suppose we combine a Humean account of causation on which causation is a function of the pattern of intrinsically acausal events in reality with a functionalist account of consciousness. (David Lewis, for instance, accepted both.)

Here is an interesting consequence. Whether you are now conscious depends on what will happen in the future. For if the world were to radically change 14 billion years from the Big Bang, i.e., 200 million years from now, in such a way that the regularities that held for the first 14 billion years would not be laws, then the causal connections that require these regularities to be laws would not obtain either, and hence (unless we got lucky and new regularities did the job) our brains would lack the kind of causal interconnections that are required for a functionalist theory of mind.

This dependence of whether we are now conscious on what will happen in the future is intuitively absurd.

But suppose we embrace it. Then if functionalism is the necessary truth about the nature of mind, the fact that we are now conscious necessarily implies that the future will not be such as to disturb the lawlike regularities on which our consciousness is founded. In other words, on the basis of the fact that there are now mental states, one can a priori conclude things about the arrangement of physical objects in the future.

Indeed, this opens up the way for specific reasoning of the following sort. Given what the constitution of human brains is, and given functionalism, for these brains to exhibit mental states of the sort they do, such-and-such generalizations must be special cases of laws of nature. But for there to be such laws of nature, the future must be such-and-such. So, we now have room for substantive a priori predictions of the future.

This all sounds very un-Humean. Indeed, it sounds like a direct contradiction to the Humean idea that reasoning from present to future is merely probabilistic. But while it is very counterintuitive, it is not actually a contradiction to the Humean idea. For on functionalism plus Humeanism about causation, facts about present mental states are not facts about the present—they are facts about the universe as a whole!

(This was sparked by some related ideas by Harrison Jennings.)

Thursday, February 10, 2022

Animalist functionalism

The only really plausible hope for a materialist theory of mind is functionalism. But the best theory of our identity, materialist or not, is animalism—we are animals.

Can we fit these two theories together? On its face, I think so. The thing we need to do is to make the functions defining mental life be functions of the animal, not of the brain as such. Here are three approaches:

  1. Adopt a van Inwagen style ontology on which organisms exist but brains do not. If brains don’t exist, they don’t have functions.

  2. Insist that some of the functions defining mental life are such that they are had by the animal as a whole and not by the brain. Probably the best bet here is the external inputs (senses) and outputs (muscles).

  3. Modify functionalism by saying that mental properties are properties of an organism with such-and-such functional roles.

I think option 2 has some special difficulties, in that it is going to be difficult to define “external” in such a way that the brain’s connections to the rest of the body don’t count as external inputs and outputs and yet we allow enough multiple realizability to make very alien intelligent life possible. One way to fix these difficulties with option 2 is to move it closer to option 3 by specifying that the external inputs and outputs must be inputs and outputs of an organism.

Options 1 and 3, as well as option 2 if the above fix is used, have the consequence that strong AI is only possible if it is embedded in a synthetic organism.

All that said, animalist functionalism is in tension with an intuition I have about an odd thought experiment. Imagine that after I got too many x-rays, my kidney mutated to allow me to exhibit the kinds of functions that are involved in consciousness through the kidney (if organism-external inputs and outputs are required, we can suppose that the kidney gets some external senses, such as a temperature sense, and some external outputs, maybe by producing radio waves, which help me in some way) in addition to the usual way through the brain, and without any significant interaction with the brain’s computation. So I am now doing sophisticated computation in my kidney of a sort that should yield consciousness. On animalist functionalism, I should now have two streams of consciousness: one because of how I function via the brain and another because of how I function via the mutant kidney. But my intuition is that in fact I would not be conscious via the kidney. If there were two streams of consciousness in this situation (which I am not confident of), only one would be mine. And that doesn’t fit with animalist functionalism (though it fits fine with non-animalist functionalism, as well as with animalist dualism, since the dualist can say that the kidney’s functioning is zombie-like).

Given that functionalism is the only really good hope we have right now for a materialist theory of mind, if my intuition about the mutant kidney is correct, this suggests that animalism provides evidence against materialism.

Monday, November 22, 2021

Functionalism implies the possibility of zombies

Endicott has observed that functionalism in the philosophy of mind contradicts the widely accepted supervenience of the mental on the physical, because you can have worlds where the functional features are realized by non-physical processes.

My own view is that a functionalist physicalist shouldn’t worry about this much. It seems to be a strength of a functionalist view that it makes it possible to have non-physical minds, and the physicalist should only hold that in the actual world all the minds are physical (call this “actual-world physicalism”).

But here is something that might worry a physicalist a little bit more.

  • If functionalism and actual-world physicalism are true, there is a possible world which is physically exactly like ours but where there is no pain.

Here is why. On functionalism, pain is constituted by some functional roles. No doubt an essential part of that role is the detection of damage and the production of aversive behavior. Let’s suppose for simplicity that this role is realized in C-fiber firing in all beings capable of pain (the argument generalizes straightforwardly if there are multiple realizers). Now imagine a possible world physically just like this one, but with two modifications: there are lots of blissful non-physical angels, and all C-fiber equipped brains have an additional non-physical causal power to trigger C-fiber firing whenever an angel thinks about that brain. It is no longer true that the functional trigger for C-fiber firing is damage. Now, the functional trigger for C-fiber firing is the disjunction of damage and being thought about by an angel, and hence C-fiber firing no longer fulfills the functional role of pain. But now add that the angels never actually think about a brain while that brain is alive, though they easily could. Then the world is physically just like ours, but nobody feels any pain.

One might object that a functional role of a detector is unchanged by adding a disjunct to what is being detected. But that is mistaken. After all, imagine that we modify the hookups in a brain so that C-fiber firing is triggered by damage and lack of damage. Then clearly we’ve changed the functional role of C-fiber firing—now, the C-fibers are triggered 100% of the time, no matter what—even though we’ve just added a disjunct.

We can also set up a story where it is the aversive behavior side of the causal role that is removed. For instance, we may suppose that there is a magical non-physical aura normally present everywhere in the universe, and C-fiber firing interacts with this aura to magically move human beings in the opposite direction to the one their muscles are moving them to. The aura does nothing else. Thus, if the aura is present and you receive a painful stimulus, you now move closer to the stimulus; if the aura is absent, you move further away. It is no longer the case that C-fibers have the function of producing aversive behavior. However, we may further imagine that at times random abnormal holes appear in the aura, perhaps due to a sport played by non-physical pain-free imps, and completely coincidentally a hole has always appeared around any animal while its C-fibers were firing. Thus, the physical aspects of that world can be exactly the same as in ours, but there is no pain.

The arguments generalize to show that functionalists are committed to zombies: beings physically just like us but without any conscious states. Interestingly, these are implemented as the reverse of the zombies dualists think up. The dualist’s zombies lack non-physical properties that the dualist (rightly) thinks we have, and this lack makes them not be conscious. But my new zombies are non-conscious precisely because they have additional non-physical properties.

Note that the arguments assume the standard physicalist-based functionalism, rather than Koons-Pruss Aristotelian functionalism.

Tuesday, November 16, 2021

Functionalism and multiple realizability

Functionalism holds that two (deterministic) minds think the same thoughts when they engage in the same computation and have the same inputs. What does it mean for them to engage in the same computation?

This is a hard question. Suppose two computers run programs that sort a series of names in alphabetical order, but they use different sorting algorithms. Given the same inputs, are the two computers engaging in the same computation?

If we say “no”, then functionalism doesn’t have the degree of multiple realizability that we thought it did. We have no guarantee that aliens who behave very much like us think very much like us, or even think at all, since the alien brains may have evolved to compute using different algorithms from us.

If we say “yes”, then it seems we are much better off with respect to multiple realizability. However, there is a tricky issue here: What counts as the inputs and outputs? We just said that the computers using different sorting algorithms engage in the same computation. But the computer using a quicksort typically returns an answer sooner than a computer using a bubble sort, and heats up less. In some cases, the time at which an output is produced itself counts as an output (think of a game where timing is everything). And heat is a kind of output, too.
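Here is a toy sketch of my own (the list of names and the step counter are just illustrative): two sorting routines with exactly the same input-output behavior can differ enormously in how many elementary operations they perform, a crude proxy for time and heat.

```python
# Two input-output equivalent sorters that do very different amounts of work.
import random

def bubble_sort(names):
    out, steps = list(names), 0
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            steps += 1                       # comparisons as a proxy for time/heat
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out, steps

def merge_sort(names):
    steps = 0
    def merge(a, b):
        nonlocal steps
        merged = []
        while a and b:
            steps += 1
            merged.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
        return merged + a + b
    def rec(xs):
        if len(xs) <= 1:
            return xs
        return merge(rec(xs[: len(xs) // 2]), rec(xs[len(xs) // 2:]))
    return rec(list(names)), steps

names = [f"name{i:03d}" for i in range(200)]
random.shuffle(names)
(sorted1, steps1), (sorted2, steps2) = bubble_sort(names), merge_sort(names)
assert sorted1 == sorted2        # same outputs for the same inputs
print(steps1, steps2)            # but wildly different operation counts
```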

In my toy sorting algorithm example, presumably we didn’t count the timing and the heat as features of the outputs because we assumed that to the human designers and/or users of the computers the timing and heat have no semantic value, but are merely matters of convenience (sooner and cooler are better). But when we don’t have a designer or user to define the outputs, as in the case where functionalism is applied to randomly evolved brains, things are much more difficult.

So, in practice, even if we answered “yes” in the toy sorting algorithm case, in a real-life case where we have evolved brains, it is far from clear what counts as an output, and hence far from clear what counts as “engaging in the same computation”. As a result, the degree to which functionalism yields multiple realizability is much less clear.

Thursday, October 14, 2021

Disembodied existence and physicalism

Consider the following standard Cartesian argument:

  1. I can imagine myself existing without a body.

  2. So, probably, I can exist without a body.

  3. If I can exist without a body, I am not physical.

  4. So, I am not physical.

It is my impression that the bulk of physicalist concern about this argument focuses on the inference from (1) to (2). But it seems to me that it would be much more reasonable for the physicalist to agree to (2) but deny (3). After all, our best physicalist theory of the person is functionalism combined with a psychological account of personal identity. But on that theory, for me to exist without a body all that’s needed is for my memories to be transferred into a spiritual computational system which is functionally equivalent to my current neural computational system, and that seems quite plausibly possible.

The physicalist need not claim that I am essentially physical, only that I am in fact physical, i.e., that in the actual world, the realizer of my functioning is physical.

Wednesday, September 22, 2021

Against digital phenomenology

Suppose a digital computer can have phenomenal states in virtue of its computational states. Now, in a digital computer, many possible physical states can realize one computational state. Typically, removing a single atom from a computer will not change the computational state, so both the physical state with the atom and the one without the atom realize the same computational state, and in particular they both have the same precise phenomenal state.

Now suppose a digital computer has a maximally precise phenomenal state M. We can suppose there is an atom we can remove that will not change the precise phenomenal state it is in. And then another. And so on. But then eventually we reach a point where any atom we remove will change the precise phenomenal state. For if we could continue arbitrarily long, eventually our computer would have no atoms, and then surely it wouldn’t have a phenomenal state.

So, we get a sequence of physical states, each differing from the previous by a single atom. For a number of initial states in the sequence, we have the phenomenal state M. But then eventually a single atom difference destroys M, replacing it by some other phenomenal state or by no phenomenal state at all.

The point at which M is destroyed cannot be vague. For while it might be vague whether one is seeing blue (rather than, say, purple) or whether one is having a pain (rather than, say, an itch), whether one has the precise phenomenal state M is not subject to vagueness. So there must be a sharp transition. Prior to the transition, we have M, and after it we don’t have M.

The exact physical point at which the transition happens, however, seems like it will have to be implausibly arbitrary.

This line of argument suggests to me that perhaps functionalists should require phenomenal states to depend on analog computational states, so that an arbitrarily small change in the underlying physical state can still change the computational state and hence the phenomenal state.

Functionalism and pain-likeness

Say that a functional property F is pain-like provided that a human is in pain if and only if the human has F.

Assuming functionalism, there is a functional property F0 which is pain. Property F0 will be pain-like, but it won’t be the only pain-like property. For there will be infinitely many ways of tweaking F0 to generate functional properties F1, F2, ... that in humans are instantiated precisely when F0 is, but that differ in instantiation among aliens. For instance, F1 could be F0 conjoined with the property of not currently thinking a thought that has seventeen levels of embedding (I take it that humans can’t think a thought with more than about three levels of embedding), while F2 could be F0 conjoined with the property of not consciously exercising magnetic sense, and so on.

There will thus be infinitely many pain-like properties that differ in when different aliens instantiate them. One of these pain-like properties, F0, is pain. And now we have a difficult question for functionalism: What grounds the fact that this particular pain-like property is pain? Why is it that having F0 is necessary and sufficient for hurting but having F1 isn’t? What’s so special about F0? Why is it that F0 picks out a phenomenally unified type, but the other properties need not?

Tuesday, November 17, 2020

Nomic functionalism

Functionalism says that of metaphysical necessity, whenever x has the same functional state as a system y with internal mental state M, then x has M as well.

What exactly counts as an internal mental state is not clear, but it excludes states like thinking about water for which plausibly semantic externalism is true and it includes conscious states like having a pain or seeing blue. I will assume that functional states are so understood that if a system x has functional state S, then a sufficiently good computer simulation of x has S as well.

A weaker view is nomic functionalism according to which for every internal mental state M (at least of a sort that humans have), there is a law of nature that says that everything that has functional state S has internal mental state M.

A typical nomic functionalist admits that it is metaphysically possible to have S without M, but thinks that the laws of nature necessitate M given S.

I am a dualist. As a result, I think functionalism is false. But I still wonder about nomic functionalism, often in connection with this intuition:

  1. Computers can be conscious if and only if functionalism or nomic functionalism is true.

Here’s the quick argument: If functionalism or nomic functionalism is true, then a computer simulation of a conscious thing would be conscious, so computers can be conscious. Conversely, if both computers and humans can be conscious, then the best explanation of this possibility would be given by functionalism or nomic functionalism.

I now think that nomic functionalism is not all that plausible. The reason for this is the intuition that a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself. Let me try to be more rigorous, though.

First, let’s continue from (1):

  2. Dualism is true.

  3. If dualism is true, functionalism is false.

  4. Nomic functionalism is false.

  5. Therefore, neither functionalism nor nomic functionalism is true. (2–4)

  6. So, computers cannot be conscious. (1, 5)

And that’s really nice: the ethical worries about whether AI research will hurt or enslave inorganic persons disappear.

The premise I am least confident about in the above argument is (4). Nomic functionalism seems like a serious dualist option. However, I now think there is good inductive reason to doubt nomic functionalism.

  7. No known law of nature makes functional states imply non-functional states.

  8. So, no law of nature makes functional states imply non-functional states. (Inductively from 7)

  9. If functionalism is false, mental states are not functional states.

  10. So, mental states are not functional states. (2, 3, 9)

  11. So, no law of nature makes functional states imply mental states. (8 and 10)

  12. So, nomic functionalism is false. (11 and definition)

Regarding (7), if a law of nature made functional states imply non-functional states, that would mean that we have multiple realizability on the left side of the law but lack it on the right side. It would mean that any accurate computer simulation of a system with the given functional state would exhibit the particular non-functional state. This would be like a case where a computer simulation of water being heated would have to result in actual water boiling.

I think the most promising potential counterexamples to (7) are thermodynamic laws that can be multiply realized. However, I think that in those cases, the implied states are typically also multiply realizable.

A variant of the above argument replaces “law” with “fundamental law”, and uses the intuition that if dualism is true, then nomic functionalism would have to have fundamental laws that relate functional states to mental states.

Monday, November 9, 2020

Restricted epistemic mysterianism

There are two forms of mysterianism about X (say, consciousness):

  1. Conceptual: It would not be possible for us to even conceptualize the true theory of X.

  2. Epistemic: It would not be possible for us to know the true theory of X.

Conceptual mysterianism about X entails epistemic mysterianism about X. In the case of typical Xs, like consciousness or intentionality or morality, epistemic mysterianism entails conceptual mysterianism. For if we could conceptualize the true theory of X, then God could reveal to us that that theory is true. (I restricted to “typical Xs”, for there are some truths that we could not know but which we could conceptualize. For instance, that the past existence of life on Mars is a reality unknown to me is something I can conceptualize, but I can’t possibly know it.)

However, one can weaken epistemic mysterianism to:

  3. Restricted Epistemic: It would not be possible for us to know the true theory of X merely by human epistemic resources.

Consider the following interesting conditional:

  4. If physicalism is true about consciousness, then restricted epistemic mysterianism is true about it.

Here is an argument against (4). Imagine that we find a new physics in the brains of precisely those organisms that it is plausible to think of as conscious (maybe cephalopods and higher vertebrates). For instance, maybe there is a new particle type that is only found in those brains, or perhaps some already known particle type behaves differently in those brains. Moreover, there is a close correlation between the behavior of the new physics and plausible things to say about consciousness in these critters. And when we make a sophisticated enough AI, surprisingly that new physics also shows up in it. Given this, it would be reasonable to say that consciousness is to be identified with the behavior of that new physics.

But I think the following is true:

  5. If physicalism is true about consciousness and there is no new physics in the brains of conscious beings, then restricted epistemic mysterianism is true.

Here’s why. Assume physicalism. Some degree of multiple realizability of consciousness is true since cephalopods and mammals are both conscious, even though our brains are quite different—assuming the “new physics in brains” hypothesis is false (if it were true, the structural differences between cephalopod and mammal brains could be relevantly outbalanced by the similarities with respect to the “new physics”). Multiple realizability requires that consciousness be abstracted to some degree from the particular details of its embodiment in us. But there is no way of knowing how far it is to be abstracted. And without knowing that, we won’t know the true theory of consciousness.

If this is right, the true view of mind must be found among these three:

  • non-physicalism

  • restricted epistemic mysterianism (with or without conceptual mysterianism)

  • new physics.

On each of them, mind is mysterious. :-)

Monday, November 2, 2020

Pain and water

One way for physicalists to handle the apparent differences between mental and physical properties is to liken the difference to that between water and H2O. It is a surprising a posteriori fact that water is H2O. Similarly, it is a surprising a posteriori fact that pain is physical state ϕ135 (say).

Now, a posteriori facts are facts that are knowable by observation. But it is not clear that the proposition that pain is physical state ϕ135 is knowable by observation.

Here is why. There are two main candidates for what kind of a state ϕ135 could be: a brain state or a functional state. The choice between these two candidates depends on how strongly one feels about multiple realizability of mental states. If one is willing to say that only beings with brains like ours—say, complex vertebrates—feel pain, one might identify ϕ135 with a brain state. If one has a strong intuition that beings with other computational systems anatomically different from those of complex vertebrates—cephalopods, aliens, and robots—could have consciousness, one will opt for identifying ϕ135 as a functional state.

But in fact, assuming pain is a physical state, there is a broad spectrum of physical state candidates for identifying pain with, depending on how far we abstract from the actual physical realizers of our pains while keeping fixed the broad outlines of functionality (signaling damage and leading to aversive behavior). If we abstract very little, only brain states found in humans—and perhaps not all humans—will be pain. If we abstract a bit more, but still insist on anatomical correspondence, then brain states found in other complex vertebrates will be pain. If we drop the insistence on anatomical correspondence but do not depart too far, we may include amongst the subjects of pain other DNA-based organisms such as cephalopods. Further abstraction will let in living organisms with other chemical bases, and yet further abstraction will let in robots. And even when talking of the fairly pure functionalism applicable to robots, we will have serious questions about how far to abstract concepts such as “damage” and “aversive behavior”.

The question of where in this spectrum of more and more general physical states we find the state that is identical with pain does not appear to be a question to be settled by observation. By internal observation, we only see our own pain. By external observation, however, we cannot tell where in the spectrum of more and more general (perhaps along multiple dimensions) physical states pain is present, without begging the question (e.g., by assuming from the outset that certain behaviors show the presence of pain, which basically forces our hand to a functionalism centered on those behaviors).

Objection 1: An experimenter could replace the brain structures responsible for pain in her own brain by structures that are further from human ones, and observe whether she can still feel pain. Where the feeling of pain stops, there we have abstracted too far.

Response: There are serious problems with this experimental approach. First, mere replacement of brain pain centers will not allow one to test hypotheses on which what constitutes pain depends on the larger neural context. And replacement of the brain as a whole is unlikely to result in the experimenter surviving. Second, and perhaps more seriously, if replacements of the brain pain centers commit the same data to memory storage as brain pain centers do, after the experiment the agent will think that there was pain, even if there wasn’t any pain there, and if they have the same functional influence on vocal production as brain centers do, the agent will report pain, again even if there wasn’t any pain there.

Objection 2: We could know which physical state pain is identified with if God told us, and being told by God is a form of a posteriori knowledge.

Response: It seems likely that God’s knowledge of which physical states are pains, or of the fact that water is H2O, would be a priori knowledge. God doesn’t have to do scientific research to know necessary truths.

Objection 3: We can weaken the analogy and say that just as the identity between water and H2O is not a priori, so too the identity between pain and ϕ135 is not a priori, without saying that both are a posteriori.

Response: This is probably the move I’d go for if I were a physicalist. But by weakening this analogy, one weakens the position that it defends. For it is now admitted that there is a disanalogy between water-H2O and pain-ϕ135. There is something rather different about the mental case.

Monday, September 28, 2020

Causal finitism and functionalism

Say that a possible thought content has finite complexity provided that the thought content can be represented by a sentence of finite length in a language whose basic terms are the fundamental concepts in the thought content.

  1. Necessarily, if functionalism is true, then the occurrence of a thought content with infinite complexity requires infinitely many states to cooperate to produce a single effect.

  2. Infinitely many states cannot cooperate to produce a single effect.

  3. It is possible for a thought content with infinite complexity to occur.

  4. So, functionalism is false.

I have two separate ideas to defend (1). First, it seems like a system capable of producing a thought content must go through a number of states proportional to the complexity of that thought content in producing it if functionalism is true. Second, the occurrence of a thought content of infinite complexity requires infinitely many constituent states. Moreover, thoughts have to be unified: to think the conjunction of p and q is not just to think p and to think q but to think them in a unified way. On functionalism, the unification has to be causal in nature. To unify the infinitely many constituent states would require them to have the capability of producing some effect together.

If I were a functionalist, I would deny (3). The cost of that is that then most truths end up unthinkable, which seems implausible.

Monday, September 21, 2020

Limits of neuroscience

I think that our best physicalist view right now is a functionalism on which mental states are identified with types of computation in a hardware-agnostic way (i.e., whatever the hardware is, as long as the same type of computation is done, the mental states get tokened).

Question:

  1. If functionalism is true, what human discipline, if any, will discover which functional processes (e.g., the execution of what algorithms) constitute consciousness?

There are, I think, three plausible answers:

  2. None. We wouldn’t be able to know the answer.

  3. Philosophy and neuroscience working together.

  4. Neuroscience working alone.

I think the most plausible answer is (2), with (3) being a runner up.

In this post I want to give a quick argument that (4) is not the answer.

Neuroscience is a natural science. The natural sciences do not discover substantive facts about worlds whose laws of nature are radically different from ours. Some possible worlds whose laws of nature are radically different from ours contain beings with functional processes isomorphic to the ones running in us. Thus, if neuroscience discovered which functional processes constitute consciousness, it would discover about those worlds that they contain consciousness. That would be a substantive fact about these worlds, and that would contradict the assumption that neuroscience is a natural science.

Thursday, May 7, 2020

Swapping ones and zeroes

Decimal addition can be done by a computer using infinitely many algorithms. Here are two:

  1. Convert decimal to binary. Add the binary. Convert binary to decimal.

  2. Convert decimal to inverted binary. Inv-add the binary. Convert inverted binary to decimal.

By conversion between decimal and inverted binary, I mean this conversion (in the 8-bit case):

  • 0↔11111111, 1↔11111110, 2↔11111101, …, 255↔00000000.

By inv-add, I mean an odd operation that is equivalent to bitwise inverting, adding, and bitwise inverting again.
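Here is a sketch of my own of the 8-bit case: encoding into inverted binary, inv-adding, and decoding back agrees with ordinary addition (modulo 256).

```python
# Sketch of the 8-bit inverted-binary encoding and the inv-add operation.
MASK = 0xFF  # 8 bits

def to_inverted(x):
    """0 <-> 11111111, 1 <-> 11111110, ..., 255 <-> 00000000."""
    return (~x) & MASK

def from_inverted(b):
    return (~b) & MASK

def inv_add(a, b):
    """Bitwise invert, add, and bitwise invert again."""
    return (~(((~a) & MASK) + ((~b) & MASK))) & MASK

# Algorithm (2): decimal -> inverted binary -> inv-add -> decimal.
for x in range(256):
    for y in range(256):
        assert from_inverted(inv_add(to_inverted(x), to_inverted(y))) == (x + y) & MASK
```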

You probably thought (or would have thought had you thought about it) that your computer does decimal addition using algorithm (1).

Now, here’s the fun. We can reinterpret all the physical functioning of a digital computer in a way that reverses the 0s and 1s. Let’s say that normally 0.63V or less counts as zero and 1.17V or higher counts as one. But “zero” or “one” are our interpretation of analog physical states that in themselves do not have such meanings. So, we could deem 0.63V or less to be one and 1.17V or higher to be zero. With such a reinterpretation, logic gates change their semantics: OR and AND swap, NAND and NOR swap, while NOT remains NOT. Arithmetical operations change more weirdly: for instance, the circuit that we thought of as implementing an add should now be thought of as implementing what I earlier called an inv-add. (I am inspired here by Gerry Massey’s variant on Putnam reinterpretation arguments.)
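Here is a sketch of my own, with hypothetical voltage levels, of that swap: one and the same voltage-level device counts as an OR gate under the usual reading and as an AND gate under the flipped reading.

```python
# One physical gate, two interpretations (hypothetical voltage levels).
LOW, HIGH = 0.2, 1.8

def gate(v1, v2):
    """A voltage-level device: outputs HIGH iff at least one input is HIGH."""
    return HIGH if (v1 >= 1.17 or v2 >= 1.17) else LOW

def read_normal(v):    # 0.63V or less is 0, 1.17V or higher is 1
    return 0 if v <= 0.63 else 1

def read_flipped(v):   # the reinterpretation: low voltage is 1, high is 0
    return 1 if v <= 0.63 else 0

for v1 in (LOW, HIGH):
    for v2 in (LOW, HIGH):
        # Under the normal reading the device computes OR...
        assert read_normal(gate(v1, v2)) == (read_normal(v1) or read_normal(v2))
        # ...under the flipped reading the very same device computes AND.
        assert read_flipped(gate(v1, v2)) == (read_flipped(v1) and read_flipped(v2))
```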

And if before the reinterpretation your computer counted as doing decimal addition using algorithm (1), after the reinterpretation your computer uses algorithm (2).

So which algorithm is being used by a computer depends on the interpretation of the computer’s functioning. This is a kind of flip side to multiple realizability: multiple realizability talks of how the same algorithm can be implemented in physically very different ways; here, the same physical system implements many algorithms.

There is nothing really new here, though I think much of the time in the past when people have discussed the interpretation problem for a computer’s functioning, they talked of how the inputs and outputs can be variously interpreted. But the above example shows that we can keep fixed our interpretation of the inputs and outputs, and still have a lot of flexibility as to what algorithm is running “below the hood”.

Note that normally in practice we resolve the question of which algorithm is running by adverting to the programmers’ intentions. But we can imagine a case where an eccentric engineer builds a simple calculator without ever settling in her own mind how to interpret the voltages and whether the relevant circuit is an add or an inv-add, and hence without settling in her own mind whether algorithm (1) or (2) is used, knowing well that either one (as well as many others!) is a possible interpretation of the system’s functioning.