
Tuesday, September 03, 2024

Themes and Aims of My Science Fiction

Expectation-Lowering Preface (Feel Free to Skip)

You probably won't like my science fiction stories.

Here's how I think of it. You could play me the best mariachi music in the world, and I won't enjoy it. Mariachi isn't my thing; I just don't get it. Similarly, Mozart's operas are great cultural achievements that move some people to ecstasy and tears, but I can't keep my seat through a whole performance of The Magic Flute. Even in genres I enjoy, some of the best performers don't interest me. Green Day inspired many of the alt-rock bands I like, and they are probably in some objective or intersubjective sense better than the bands I prefer, but... meh.

Tolkien, Le Guin, and Asimov were great science fiction and fantasy writers with broad appeal. Still, only a minority of readers -- indeed probably only a minority even among those who like the genre -- will actually enjoy Lord of the Rings, Left Hand of Darkness, or Asimov's robot stories.

I want to set low expectations. You're here, presumably, because you like my blog or my work in academic philosophy. Odds are, you won't like my fiction. My repeated experience is: I describe the concept behind one of my stories. The listener says, "Whoa, that sounds really cool, I'll check it out!" But the story doesn't yield the pleasure they anticipate.

Maybe I'm a bad writer. But I prefer to think I'm good enough for the right readership, and it's mariachi music. You might find the concepts of some of my stories intriguing. I'm a philosopher: Concept is the first thing I go for, the sine qua non. But the story itself won't delight you unless it has the right prose style, the right pacing, a narrator and characters you relate to, plot styles you like, the right balance of action versus exposition, the right balance between easy familiarity and hard-to-digest strangeness, and many other factors of taste that legitimately vary. What's the chance that everything aligns? My guess: 10%.

Still, for any particular story, maybe you're in that 10%. You might even belong to the 5% who will enjoy most of my stories. I hope so! On that chance, I thought I'd compile a list, describing their guiding ideas and my aims in writing them. All are available online.

[image: translation of "Gaze of Robot, Gaze of Bird" into Chinese for Science Fiction World, with illustration]

The Stories

Starting in 2011, I began sporadically sharing short pieces of conceptual fiction on my blog. I don't think that was entirely successful, partly because I was a novice fiction writer and partly because that's not what people come here for. But one story prompted a reply by prominent SF writer R. Scott Bakker, who added an alternative ending. We decided to revise the story together and seek publication. Astonishingly, the science journal Nature accepted it.

* In that story, Reinstalling Eden (Nature, 503 (2013), 562), I wanted to imagine a utilitarian ethicist (that is, someone who thinks that our moral duty is to maximize the world's pleasure) who discovered he could create a multitude of happy entities on his computer and who then followed through on the consequences of that -- specifically, advocating the creation of such entities as a major global priority and then sacrificing his life for them. Bakker imagined a second narrator inheriting the computer after the first narrator's death, who chose to give the entities knowledge of their condition, setting them free to interact with humanity. (Themes: utilitarian ethics, living in a computer simulation.)

Through the mid-2010s, I continued to write short conceptual pieces, no longer placing them on my blog. Most of them have never been published, though there are still a few I like.

* My next published piece, Out of the Jar (F&SF, 128 (2015), 118-128), was my first "full-length" (~4000 word) story. I wanted to imagine a philosophy professor who discovers that his world is a simulation run by a sadistic adolescent "God". I thought the professor should try to convince God to have mercy on his creations, and then -- when that failed -- install himself as the new God. (Themes: living in a simulation, the problem of evil, the duties of gods to their creations)

* "Momentary Sage" (The Dark, 8 (2015), 38-43) explores teenage self-harm and suicide -- imaginatively reconfigured in the form of a self-destructive faerie infant. The infant is born cleverly arguing for a quasi-Buddhist perspective according to which past and future are unreal and the self is an illusion. Given these philosophical commitments, the infant would rather kill himself than suffer a moment's displeasure. His parents' desperate attempts to keep him in the world can only briefly postpone the inevitable. I chose to frame it as a sequel to Shakespeare's Midsummer Night's Dream, from the perspective of a bitter Demetrius. (Themes: suicide, the self, obligations to the future, parenthood)

* "The Tyrant's Headache" (Sci Phi Journal, 3 (2015), 78-83) will probably only appeal to readers who know David Lewis's classic article "Mad Pain and Martian Pain". This story is an extended thought-experimental objection to Lewis's view, according to which your experienced mental states constitutively depend in part on the normal causal role of those mental states in the population to which you belong. I imagine a tyrant who, heeding Lewis's advice, absurdly attempts to cure his headache by doing everything but changing his current brain state. (Themes: functionalism in philosophy of mind; see also Chapter 2 of Weirdness of the World)

* "The Dauphin's Metaphysics" (Unlikely Story, 12 (2015); audio at PodCastle 475 (2017)) portrays a psychologically realistic, low-technology case of "mind transfer" from one body to another. On some theories of personal identity, what makes you you are your memories, your personality, your values, and other features of your psychology. Suppose, then, that a dying prince arranges for a newborn infant to be raised to think of himself as a continuation of the prince, with accurate memories of the prince's life and the same values and personality. If done perfectly enough, would that be a continuation of the prince in a new body? The realistic, low-tech nature of the case makes it, I think, more challenging to say "yes" than with high-tech "upload" fantasies. The narrator is a socially isolated academic superstar who had earlier "become a new person" in a much more ordinary way. (Themes: personal identity, sexism, inequalities of power)

* In "Fish Dance" (Clarkesworld, 118 (2016); audio) I wanted to explore the boundaries of a meaningful afterlife or personality upload, by imagining a highly imperfect upload into an intensely pleasurable "afterlife". Suppose a small portion of you continues to exist for millions of years, with a few imperfect memories, ceaselessly repeating an ecstatic, joyful, erotic dance with a superficial duplicate of the person you once intensely loved? Would that be almost unimaginably good, or would it be a monstrous parody? I also thought it would be interesting for the protagonist to be -- contrary to virtually all writing advice -- almost completely passive throughout the story. He's an amputated head on life support, hallucinating half the time, and his only real action is to signal with his eyes at the crucial moment. (Themes: personal identity, afterlife, parenthood and marriage)

* In "The Library of Babel", Jorge Luis Borges searches for meaning in a universe composed of a vast library containing every possible book with every possible combination of letters, randomly arranged. In "THE TURING MACHINES OF BABEL" (Apex, 98 (2017); or here), I create a similar infinite library of texts -- except that the texts prove to be instructions for infinitely many randomly constituted computer programs, including the programs that constitute your mind as the story's reader and mine as author. I assume for the sake of the story that computational functionalism is true, and human minds are essentially just organic computers. (Themes: functionalism and computationalism about the mind, randomness and meaning)

* In "Little /^^^\&-" (Clarkesworld, 132 (2017); audio), a planet-sized group intelligence falls in love with Earth, which she sees as an immature, partly-formed group intelligence of broadly her kind. Little /^^^\&- herself is small compared to a galactic government that plans to sacrifice the whole galaxy for a still greater good, vast beyond even the government's comprehension. This is probably my weirdest, most difficult, least approachable story -- only for readers who don't mind puzzling together a complicated story with pieces near the beginning that only make sense retrospectively by the end. (Themes: group minds, how much we should sacrifice for larger things we can't understand)

In contrast, "Gaze of Robot, Gaze of Bird" (Clarkesworld, 151 (2019); audio) is probably my least dense, most approachable story, liked by the highest percentage of readers. I wanted to write a story in which the protagonist is a non-conscious machine -- a machine the reader can't help but incorrectly imagine as having desires and a point of view. This "point of view" character is a terraforming robot that spends 200 million years recreating the species that designed it and is finally rewarded with consciousness. (Themes: consciousness, what constitutes the survival of a species)

My 2019 book A Theory of Jerks and Other Philosophical Misadventures (MIT Press) is mostly a collection of lightly-to-moderately revised blog posts and op-eds, but it also contains four brief conceptual fictions.

In "A Two-Seater Homunculus" I discover that my neighbor's brain was replaced by a brother-and-sister homunculus pair, though no one seemed to notice.

"My Daughter's Rented Eyes" imagines submitting to corporate advertising and copyright protection agreements on what you can see, for improved overall functionality.

"Penelope's Guide to Defeating Time, Space, and Causation": Waiting for Odysseus' return, Penelope proves that the world contains infinitely many duplicates of everyone living out every possible future and concludes that death is impossible.

"How to Accidentally Become a Zombie Robot": If you test-drive life as a robot and seem to remember its having felt great, how confident should you be that those memories are real?

Penultimate manuscript versions of the stories are available here. (Themes: personal identity, technology ethics and corporate power, consciousness, computational functionalism)

"Passion of the Sun Probe" (AcademFic, 1 (2020), 7-11; audio at Reductio (2021), S0E11) concerns the ethics of designing conscious robots with self-sacrificial goals -- in this case a Sun probe who chooses (predictably, given its programming) to "freely" sacrifice itself on an ecstatic three-day scientific suicide mission to the Sun. (Themes: robot rights, technology ethics, freedom, what gives a life meaning; a short version of the case appears in Schwitzgebel & Garza 2020)

"Let Everyone Sparkle" (Aeon Ideas / Psyche, Apr 12, 2022): This story was accepted for publication in the New York Times' series of "Op-Eds from the Future" that ran from 2019 to 2020. Sadly, the series folded before the story could be printed. Aeon (later Psyche) graciously picked it up. Four decades in the future, a man raises a celebratory toast to the psychotechnology that prevents anyone from ever involuntarily experiencing negative emotions. Although the man argues that this technology is plainly good, the reader, I hope, doesn't feel as sure. (Themes: mood enhancement, the value of negative emotion, corporate power)

In "Larva Pupa Imago" (Clarkesworld, 197, (2023); audio) I had two main aims: to imagine the experience of an intelligent insect who eagerly dies for sex and to imagine minds that can merge and overlap. The story follows a cognitively enhanced butterfly from hatchling run to final mating journey, in a posthuman world where thoughts can be transferred by sharing cognitive fluids. Inspired in part by James Tiptree Jr's "Love Is the Plan the Plan is Death". (Themes: merging minds, personal identity, instinct and value)

For a decade, I've wanted to set a story in an assisted living facility. So many people end their lives there, but those lives are so invisible in the media! "How to Remember Perfectly" (Clarkesworld, 216 (2024)) is a love story between octogenarians. The science fiction "novum" is a device that allows them to control their moods and radically refashion their memories. How much does it matter if your memories are real? How much does it matter that your mood is responsive only to the good and bad things actually happening around you? (Themes: death, mood enhancement, memory, the value of truth)

One of these days, I'll discuss in more depth why I sometimes prefer to express my philosophical ideas as fiction, but this post is already overlong.

Monday, August 26, 2024

Top Science Fiction and Fantasy Magazines 2024

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on prominent awards nominations and "best of" placements over the previous ten years.  If you're curious what magazines tend to be viewed by insiders as elite, check the top of the list.  If you're curious to discover reputable magazines that aren't as widely known (or aren't as widely known specifically for their science fiction and fantasy), check the bottom of the list.


Below is my list for 2024. (For previous lists, see here.)

[Update, 1:34 pm: This post originally contained Dall-E output for "the cover of an amazingly wonderful science fiction magazine", but several people in the SF community have convinced me to rethink my use of AI art for this purpose, so I've removed the art for now while I give the issue more thought.]

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Sturgeon, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, Adams, or Tidhar "best of" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (A small illustrative sketch of this tally appears after these caveats.)

(2a.) Methodological notes for 2022-2024: There's been some disruption among SF best of anthologies recently, with Horton, Strahan, and Clarke all having delays and/or cessations. (Dozois died a few years ago.) Partly for this reason, and partly to compensate for the "American" focus of the Adams anthology, I've added Tidhar's World SF anthology series, though Tidhar doesn't draw exclusively from the previous year's publications.

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress complex differences into a single scale. However, the prestige of a magazine is a socially real phenomenon worth tracking, especially for the sake of outsiders and newcomers who might not otherwise know what magazines are well regarded by insiders when considering, for example, where to submit.
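For concreteness, here is a minimal sketch in Python of how a tally like the one in caveats (2.) and (7.) might be computed. The magazine names and credits below are hypothetical placeholders; the real compilation is done by hand from award ballots, anthology tables of contents, and the Locus Recommended list.

```python
from collections import defaultdict

# Point values per type of credit, following caveat (2.)
WEIGHTS = {
    "award_nomination": 1.0,   # Hugo, Nebula, Sturgeon, or World Fantasy nomination
    "best_of_anthology": 1.0,  # Dozois, Horton, Strahan, Clarke, Adams, or Tidhar
    "locus_recommended": 0.5,  # Locus Recommended list, short story or novelette only
}

# Hypothetical hand-compiled credits: one (magazine, credit_type) pair per story credit.
credits = [
    ("Example Magazine A", "award_nomination"),
    ("Example Magazine A", "best_of_anthology"),
    ("Example Magazine A", "locus_recommended"),
    ("Example Magazine B", "locus_recommended"),
    ("Example Magazine B", "locus_recommended"),
    ("Example Magazine C", "locus_recommended"),
]

def rank_magazines(credits, cutoff=1.5):
    """Sum weighted credits per magazine, keep those at or above the cutoff (caveat 7),
    and sort from most to fewest points."""
    points = defaultdict(float)
    for magazine, credit_type in credits:
        points[magazine] += WEIGHTS[credit_type]
    ranked = sorted(points.items(), key=lambda item: item[1], reverse=True)
    return [(magazine, pts) for magazine, pts in ranked if pts >= cutoff]

if __name__ == "__main__":
    for magazine, pts in rank_magazines(credits):
        print(f"{magazine}: {pts}")
    # Output: Example Magazine A: 2.5
    # (B at 1.0 and C at 0.5 fall below the 1.5-point cutoff and are omitted.)
```

The three-year lists later in the post follow the same recipe, just restricted to credits from the most recent three years.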


Results:

1. Tor.com / Reactor (186 points) 

2. Clarkesworld (181.5) 

3. Uncanny (149)

4. Lightspeed (129) 

5. Asimov's (127) 

6. Fantasy & Science Fiction (109) 

7. Beneath Ceaseless Skies (59.5) 

8. Analog (47) 

9. Strange Horizons (incl Samovar) (43)

10t. Apex (36.5) 

10t. Nightmare (36.5) 

12. Slate / Future Tense (22) 

13t. FIYAH (19.5) (started 2017) 

13t. Interzone (19.5)

15. Fireside (18.5) (ceased 2022)

16. Fantasy Magazine (17.5) (off and on during the period, ceased 2023) 

17. Subterranean (17) (ceased short fiction 2014) 

18. The Dark (15) 

19. The New Yorker (9) 

20. Sunday Morning Transport (8.5) (started 2022)

21t. Future Science Fiction Digest (7) (started 2018, ceased or sporadic starting 2023) 

21t. Lady Churchill's Rosebud Wristlet (7)

23t. Diabolical Plots (6.5)

23t. The Deadlands (6.5) (started 2021)

25t. Conjunctions (6) 

25t. McSweeney's (6) 

25t. Sirenia Digest (6) 

28t. GigaNotoSaurus (5.5) 

28t. khōréō (5.5) (started 2021)

28t. Omni (5.5) (classic popular science magazine, relaunched 2017-2020) 

28t. Terraform (Vice) (5.5) (ceased 2023)

32. Shimmer (5) (ceased 2018) 

33. Tin House (4.5) (ceased short fiction 2019) 

34t. Boston Review (4) 

34t. Galaxy's Edge (4) (ceased 2023?)

34t. Omenana (4)

34t. Wired (4)

38t. B&N Sci-Fi and Fantasy Blog (3.5) (ceased 2019)

38t. Paris Review (3.5) 

40t. Anathema (3) (ran 2017-2022)

40t. Black Static (3) (ceased fiction 2023)

40t. Daily Science Fiction (3) (ceased 2023)

40t. Kaleidotrope (3) 

40t. Science Fiction World (3)

45t. Beloit Fiction Journal (2.5) 

45t. Buzzfeed (2.5) 

45t. Matter (2.5) 

48t. Augur (2) (started 2018)

*48t. Baffling (2) (started 2020)

48t. Flash Fiction Online (2)

48t. Mothership Zeta (2) (ran 2015-2017) 

48t. Podcastle (2)

*48t. Shortwave (2) (started 2022)

*54t. e-flux journal (1.5)

*54t. Escape Pod (1.5)

*54t. Fusion Fragment (1.5) (started 2020)

54t. MIT Technology Review (1.5) 

54t. New York Times (1.5) 

54t. Reckoning (1.5) (started 2017)

54t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]

--------------------------------------------------

Comments:

(1.) Beloit Fiction Journal,  Boston Review, Conjunctions, e-flux Journal, Matter, McSweeney's, The New Yorker, Paris Review, Reckoning, and Tin House are literary magazines that occasionally publish science fiction or fantasy. Buzzfeed, Slate and Vice are popular magazines, and MIT Technology Review, Omni, and Wired are popular science magazines, which publish a bit of science fiction on the side. The New York Times is a well-known newspaper that ran a series of "Op-Eds from the Future" from 2019-2020.  The remaining magazines focus on the science fiction and fantasy (SF) genre. All publish in English, except Science Fiction World, which is the leading science fiction magazine in China.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Uncanny (55)  
2. Clarkesworld (42) 
3. Tor / Reactor (31.5) 
4t. F&SF (22.5)
4t. Lightspeed (22.5) 
6. Apex (16.5) 
7. Strange Horizons (15.5) 
8. Asimov's (14)
9. Fantasy Magazine (12) 
10. Beneath Ceaseless Skies (11) 
11. FIYAH (9.5)
12. Nightmare (9) 
13. Sunday Morning Transport (8.5) 
14t. The Dark (6.5)
14t. The Deadlands (6.5) 

(3.) Over the past decade, the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- have slowly been displaced in influence by the leading free online magazines, Tor / Reactor, Clarkesworld, Uncanny, and Lightspeed (all founded 2006-2014).  In 2014, Asimov's and F&SF led the rankings by a wide margin (Analog had already slipped a bit, as reflected in its #5 ranking then). This year for the first time, the leading free online magazines are #1-#4, while the former big three sit at #5, #6, and #8.  Presumably, a large part of the explanation is that there are more readers of free online fiction than of paid magazines, which is attractive to authors and probably also helps with voter attention for the Hugo, Nebula, and World Fantasy awards.

(4.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com was a regularly updated list of markets that unfortunately ceased in 2023. Submission Grinder is a terrific resource for authors, with detailed information on magazine pay rates, submission windows, and turnaround times.

(5.) My academic philosophy readers might also be interested in the following magazines that specialize specifically in philosophical fiction and/or fiction by academic writers: AcademFic, After Dinner Conversation, and Sci Phi Journal.

Friday, June 30, 2023

Mostly Overlapping Minds: A Challenge for the View that Minds Are Necessarily Discrete

Last fall I gave a couple of talks in Ohio. While there, I met an Oberlin undergraduate named Sophie Nelson, with whom I have remained in touch. Sophie sent some interesting ideas about my draft paper "Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons", so I invited her on as a co-author, and we have been jointly revising. Check out today's new version!

Let's walk through one example from the paper, originally suggested by Sophie and jointly written for the final draft. I think it stands on its own without needing the rest of the paper as context. For the purposes of this argument, we assume that broadly human-like cognition and consciousness are possible in computers and that functional and informational processes are what matter to consciousness. (These views are widely but not universally shared among consciousness researchers.)

(Readers who aren't philosophers of mind might find today's post to be somewhat technical and in the weeds.  Apologies for that!)

Suppose there are two robots, A and B, who share much of their circuitry. Between them hovers a box in which most of their cognition transpires. Maybe the box is connected by high-speed cables to each of the bodies, or maybe the information flows through high-bandwidth radio connections. Either way, the cognitive processes in the hovering box are tightly integrated with A's and B's bodies and the remainders of their minds -- as tightly integrated as in an ordinary unified mind. Despite the bulk of their cognition transpiring in the box, some cognition also transpires in each robot's individual body and is not shared by the other robot. Suppose, then, that A has an experience with qualitative character α (grounded in A's local processors), plus experiences with qualitative characters β, γ, and δ (grounded in the box), while B has experiences with qualitative characters β, γ, and δ (grounded in the box), plus an experience with qualitative character ε (grounded in B's local processors).

If indeterminacy concerning the number of minds is possible, perhaps this isn't a system with a whole number of minds. Indeterminacy, we think, is an attractive view, and one of the central tasks of the paper is to argue in favor of the possibility of indeterminacy concerning the number of minds in hypothetical systems.

Our opponent -- whom we call the Discrete Phenomenal Realist -- assumes that the number of minds present in any system is always a determinate whole number. Either there's something it's like to be Robot A, and something it's like to be Robot B, or there's nothing it's like to be those systems, and instead there's something it's like to be the system as a whole, in which case there is only one person or subjective center of experience. "Something-it's-like-ness" can't occur an indeterminate number of times. Phenomenality or subjectivity must have sharp edges, the thinking goes, even if the corresponding functional processes are smoothly graded. (For an extended discussion and critique of a related view, see my draft paper Borderline Consciousness.)

As we see it, Discrete Phenomenal Realists have three options when trying to explain what's going on in the robot case: Impossibility, Sharing, and Similarity. According to Impossibility, the setup is impossible. However, it's unclear why such a setup should be impossible, so pending further argument we disregard this option. According to Sharing, the two determinately different minds share tokens of the very same experiences with qualitative characters β, γ, and δ. According to Similarity, there are two determinately different minds who share experiences with qualitative characters β, γ, and δ but not the very same experience tokens: A's experiences β1, γ1, and δ1 are qualitatively but not numerically identical to B's experiences β2, γ2, and δ2. An initial challenge for Sharing is its violation of the standard view that phenomenal co-occurrence relationships are transitive (so that if α and β phenomenally co-occur in the same mind, and β and ε phenomenally co-occur, so also do α and ε). An initial challenge for Similarity is the peculiar doubling of experience tokens: Because the box is connected to both A and B, the processes that give rise to β, γ, and δ each give rise to two instances of each of those experience types, whereas the same processes would presumably give rise to only one instance if the box were connected only to A.
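If it helps to have the contrast pinned down concretely, here is a toy sketch in Python (purely illustrative, not from the paper), in which token identity is modeled, very loosely, as object identity:

```python
from dataclasses import dataclass

@dataclass(eq=False)  # eq=False: instances compare by identity, so each is a distinct "token"
class Experience:
    quality: str  # qualitative character: "alpha", "beta", "gamma", "delta", or "epsilon"

def sharing():
    """Sharing: A and B have the very same experience tokens grounded in the box."""
    box = [Experience(q) for q in ("beta", "gamma", "delta")]
    a = [Experience("alpha")] + box           # A: alpha (local) plus the box tokens
    b = box + [Experience("epsilon")]         # B: the same box tokens plus epsilon (local)
    return a, b

def similarity():
    """Similarity: A and B have numerically distinct tokens of the same qualitative types."""
    a = [Experience(q) for q in ("alpha", "beta", "gamma", "delta")]
    b = [Experience(q) for q in ("beta", "gamma", "delta", "epsilon")]
    return a, b

a, b = sharing()
assert a[1] is b[0]                       # the very same beta token in both minds

a, b = similarity()
assert a[1] is not b[0]                   # distinct beta tokens...
assert a[1].quality == b[0].quality       # ...with identical qualitative character
```

The switch described in the next paragraph then corresponds to deleting the "alpha" and "epsilon" entries, leaving the two lists either literally identical (Sharing) or merely qualitatively matched (Similarity).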

To make things more challenging for the Discrete Phenomenal Realist who wants to accept Sharing or Similarity, imagine that there's a switch that will turn off the processes in A and B that give rise to experiences α and ε, resulting in A's and B's total phenomenal experience having an identical qualitative character. Flipping the switch will either collapse A and B to one mind, or it will not. This leads to a dilemma for both Sharing and Similarity.

If the defender of Sharing holds that the minds collapse, then they must allow that a relatively small change in the phenomenal field can result in a radical reconfiguration of the number of minds. The point can be made more dramatic by increasing the number of experiences in the box and the number of robots connected to the box. Suppose that 200 robots each have 999,999 experiences arising from the shared box, and just one experience that's qualitatively unique and localized – perhaps a barely noticeable circle in the left visual periphery for A, a barely noticeable square in the right visual periphery for B, etc. If a prankster were to flip the switch back and forth repeatedly, on the collapse version of Sharing the system would shift back and forth from being 200 minds to one, with almost no difference in the phenomenology. If, however, the defender of Sharing holds that the minds don't collapse, then they must allow that multiple distinct minds could have the very same token-identical experiences grounded in the very same cognitive processors. The view raises the question of the ontological basis of the individuation of the minds; on some conceptions of subjecthood, the view might not even be coherent. It appears to posit subjects with metaphysical differences but not phenomenological ones, contrary to the general spirit of phenomenal realism about minds.

The defender of Similarity faces analogous problems. If they hold the number of minds collapses to one, then, like the defender of Sharing, they must allow that a relatively small change in the phenomenal field can result in a radical reduction in the number of minds. Furthermore, they must allow that distinct, merely type-identical experiences somehow become one and the same when a switch is flipped that barely changes the system's phenomenology. But if they hold that there's no collapse, then they face the awkward possibility of multiple distinct minds with qualitatively identical but numerically distinct experiences arising from the same cognitive processors. This appears to be ontologically unparsimonious phenomenal inflation.

Maybe it will be helpful to have the possibilities for the Discrete Phenomenal Realist depicted in a figure.

Thursday, June 08, 2023

New Paper in Draft: Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons

I have a new paper in draft, for a special issue of Journal of Consciousness Studies.  Although the paper makes reference to a target article by Francois Kammerer and Keith Frankish, it should be entirely comprehensible without knowledge of the target article, and hopefully it's of independent interest.

Abstract:

Kammerer and Frankish (this issue) challenge us to expand our conception of introspection, and mentality in general, beyond neurotypical human cases. This article describes a technologically possible "ancillary mind" modeled on a system envisioned in Ann Leckie's (2013) science fiction novel Ancillary Justice. The ancillary mind constitutes a borderline case between an intimately communicating group of individuals and a single, unified, spatially distributed mind. It occupies a gray zone with respect to personal identity and subject individuation, neither determinately one person or conscious subject nor determinately many persons or conscious subjects. Advocates of a Phase Transition View of personhood or Discrete Phenomenal Realism might reject the possibility of indeterminacy concerning personal identity and subject individuation. However, the Phase Transition View is empirically unwarranted, and Discrete Phenomenal Realism is metaphysically implausible. If ancillary minds defy discrete countability, the same might be true for actual group minds on Earth and human cases of multiple personality or Dissociative Identity.

----------------------------------------

Full draft here.  As usual, comments, questions, objections welcome, either as comments on this post or directly by email to my academic address.

Thursday, February 02, 2023

Larva Pupa Imago

Yesterday, my favorite SF magazine, Clarkesworld, published another story of mine: "Larva Pupa Imago".

"Larva Pupa Imago" follows the life-cycle of a butterfly with human-like intelligence, from larva through mating journey.  This species of butterfly blurs the boundaries between self and other by swapping "cognitive fluids".  And of course I couldn't resist a reference to Zhuangzi.

Monday, August 08, 2022

Top Science Fiction and Fantasy Magazines 2022

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on prominent awards nominations and "best of" placements over the previous ten years. Below is my list for 2022. (For all previous lists, see here.)

[A DALL-E output for "science fiction and fantasy magazine"]


Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Sturgeon, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, or Adams "year's best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list.

(2a.) Methodological notes for 2022: Starting this year, I swapped the Sturgeon for the Eugie award for all award years 2013-2022. Also, with the death of Dozois in 2018, the [temporary?] cessation of the Strahan anthology, and the delay of the Horton and Clarke anthologies, the 2022 year includes only one new anthology source: Adams 2021. Given the ten-year-window, anthologies still comprise about half the weight of the rankings overall.

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress interesting complexity into a single scale. However, the prestige of a magazine is a socially real phenomenon that deserves to be tracked, especially for the sake of outsiders and newcomers who might not otherwise know what magazines are well regarded by insiders when considering, for example, where to submit.


Results:

1. Tor.com (198 points) 

2. Clarkesworld (185.5) 

3. Asimov's (160.5) 

4. Lightspeed (129) 

5. Fantasy & Science Fiction (127.5) 

6. Uncanny (113) (started 2014) 

7. Beneath Ceaseless Skies (59.5) 

8. Analog (55) 

9. Strange Horizons (46)

10. Subterranean (35) (ceased short fiction 2014) 

11. Nightmare (31.5) 

12. Interzone (30.5) 

13. Apex (30) 

14. Fireside (18.5) 

15. Slate / Future Tense (17.5) 

16. FIYAH (13.5) (started 2017) 

17. The Dark (11.5) 

18. Fantasy Magazine (10) (occasional special issues during the period, fully relaunched in 2020) 

19. The New Yorker (9.5) 

20t. Lady Churchill's Rosebud Wristlet (7) 

20t. McSweeney's (7) 

22. Sirenia Digest (6) 

23t. Omni (5.5) (classic magazine, briefly relaunched 2017-2018) 

23t. Tin House (5.5) (ceased short fiction 2019) 

25t. Black Static (5) 

25t. Conjunctions (5) 

25t. Diabolical Plots (5) (started 2015)

25t. Shimmer (5) (ceased 2018) 

29. Terraform (4.5) (started 2014) 

30t. Boston Review (4) 

30t. GigaNotoSaurus (4) 

32. Paris Review (3.5) 

33t. Daily Science Fiction (3) 

33t. Electric Velocipede (3) (ceased 2013) 

33t. Future Science Fiction Digest (3) (started 2018) 

*33t. Galaxy's Edge (3)

33t. Kaleidotrope (3) 

33t. Omenana (3) (started 2014) 

33t. Wired (3)

40t. Anathema (2.5) (started 2017)

40t. B&N Sci-Fi and Fantasy Blog (2.5) (started 2014)

40t. Beloit Fiction Journal (2.5) 

40t. Buzzfeed (2.5) 

40t. Matter (2.5) 

40t. Weird Tales (2.5) (classic magazine, off and on throughout the period)

46t. Harper's (2) 

46t. Mothership Zeta (2) (ran 2015-2017) 

*48t. khōréō (1.5) (started 2021)

48t. MIT Technology Review (1.5) 

48t. New York Times (1.5) 

48t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]

--------------------------------------------------

Comments:

(1.) The New Yorker, McSweeney's, Tin House, Conjunctions, Boston Review, Beloit Fiction Journal, Harper's, Matter, and Paris Review are literary magazines that occasionally publish science fiction or fantasy.  Slate and Buzzfeed are popular magazines, and Omni, Wired, and MIT Technology Review are popular science magazines, which publish a bit of science fiction on the side.  The New York Times is a well-known newspaper that ran a series of "Op-Eds from the Future" from 2019-2020.  The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window.  Here are those results, down to six points:

1. Uncanny (59) 
2. Tor.com (56.5) 
3. Clarkesworld (37.5)
4. F&SF (36)
5. Lightspeed (29)
6. Asimov's (25.5)
7t. Beneath Ceaseless Skies (14) 
7t. Nightmare (14)
9. Analog (11) 
10. Strange Horizons (10.5) 
11. Slate / Future Tense (9) 
12. FIYAH (8.5) 
13. Apex (8) 
14. Fireside (7)

(3.) For the past several years it has been clear that the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- are slowly being displaced in influence by the four leading free online magazines, Tor.com, Clarkesworld, Lightspeed, and Uncanny (all founded 2006-2014).  Contrast this year's ranking with the ranking from 2014, which had Asimov's and F&SF on top by a wide margin.  Presumably, a large part of the explanation is that there are more readers of free online fiction than of paid subscription magazines, which is attractive to authors and probably also helps with voter attention for the Hugo, Nebula, and World Fantasy awards.

(4.) Left out of these numbers are some terrific podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasts are also important venues.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

Monday, June 27, 2022

If We're Living in a Simulation, The Gods Might Be Crazy

[A comment on David Iserson's new short story, "This, but Again", in Slate's Future Tense]

That we’re living in a computer simulation—it sounds like a paranoid fantasy. But it’s a possibility that futurists, philosophers, and scientific cosmologists treat increasingly seriously. Oxford philosopher and noted futurist Nick Bostrom estimates there’s about a 1 in 3 chance that we’re living in a computer simulation. Prominent New York University philosopher David J. Chalmers, in his recent book, estimates at least a 25 percent chance. Billionaire Elon Musk says it’s a near-certainty. And it’s the premise of this month’s Future Tense Fiction story by David Iserson, “This, but Again.”

Let’s consider the unnerving cosmological and theological implications of this idea. If it’s true that we’re living in a computer simulation, the world might be weirder, smaller, and more unstable than we ordinarily suppose.

Full story here.

----------------------------------------

Related:

"Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov. 15, 2013).

"Our Possible Imminent Divinity" (Jan. 2, 2014).

"1% Skepticism" (Nous (2017) 51, 271-290).

Related "Is Life a Simulation? If So, Be Very Afraid" (Los Angeles Times, Apr. 22, 2022).

Tuesday, April 12, 2022

Let Everyone Sparkle: Psychotechnology in the Year 2067

My latest science fiction story, in Psyche.

Thank you, everyone, for coming to my 60th birthday celebration! I trust that you all feel as young as ever. I feel great! Let’s all pause a moment to celebrate psychotechnology. The decorations and Champagne are not the only things that sparkle. We ourselves glow and fizz as humankind never has before. What amazing energy drinks we have! What powerful and satisfying neural therapies!

If human wellbeing is a matter of reaching our creative and intellectual potential, we flourish now beyond the dreams of previous generations. Sixth-graders master calculus and critique the works of Plato, as only college students could do in the early 2000s. Scientific researchers work 16-hour days, sleeping three times as efficiently as their parents did, refreshed and eager to start at 2:30am. Our athletes far surpass the Olympians of the 2030s, and ordinary fans, jazzed up with attentional cocktails, appreciate their feats with awesome clarity of vision and depth of understanding. Our visual arts, our poetry, our dance and craftwork – all arguably surpass the most brilliant artists and performers of a century ago, and this beauty is multiplied by audiences’ increased capacity to relish the details.

Yet if human wellbeing is a matter not of creative and intellectual flourishing but consists instead in finding joy, tranquility and life satisfaction, then we attain these things too, as never before. Gone are the blues. Our custom pills, drinks and magnetic therapies banish all dull moods. Gone is excessive anxiety. Gone even are grumpiness and dissatisfaction, except as temporary spices to balance the sweetness of life. If you don’t like who you are, or who your spouses and children are, or if work seems a burden, or if your 2,000-square-foot apartment seems too small, simply tweak your emotional settings. You need not remain dissatisfied unless you want to. And why on Earth would anyone want to?

Gone are anger, cruelty, immorality and bitter conflict. There can be no world war, no murderous Indian Partition, no Rwandan genocide. There can be no gang violence, no rape, no crops rotting in warehouses while the masses starve. With the help of psychotechnology, we are too mature and rational to allow such things. Such horrors are fading into history, like a bad dream from which we have collectively woken – more so, of course, among advanced societies than in developing countries with less psychotechnology.

We are Buddhists and Stoics improved. As those ancient philosophers noticed, there have always been two ways to react if the world does not suit your desires. You can struggle to change the world – every success breeding new desires that leave you still unhappy – or you can, more wisely, adjust your desires to match the world as it already is, finding peace. Ancient meditative practices delivered such peace only sporadically and imperfectly, to the most spiritually accomplished. Now, spiritual peace is democratised. You need only twist a dial on your transcranial stimulator or rebalance your morning cocktail.

[continued here]

Friday, April 01, 2022

Work on Robot Rights Doesn't Conflict with Work on Human Rights

Sometimes I write and speak about robot rights, or more accurately, the moral status of artificial intelligence systems -- or even more accurately, the possible moral status of possible future artificial intelligence systems. I occasionally hear the following objection to this whole line of work: Why waste our time talking about hypothetical robot rights when there are real people, alive right now, whose rights are being disregarded? Let's talk about the rights of those people instead! Some objectors add the further thought that there's a real risk that, under the influence of futurists, our society might eventually treat robots better than some human beings -- ethnic minorities, say, or disabled people.

I feel some of the pull of this objection. But ultimately, I think it's off the mark.

The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue, in contrast, that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those advocating better recognition and implementation of human rights.

In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.

But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.

Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.

But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)

So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you work advocating the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering under what conditions hypothetical future robots would merit our moral concern.

But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar dynamics (though of course not entirely the same) can occur, so that thinking about one kind of case, or thinking about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates of one group more typically are partners with, rather than opponents of, advocates of the other groups. Think, for example, of the alliance of Blacks and Jews in the 20th century U.S. civil rights movement.

In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. In general, I hope, thinking hypothetically about robot rights should leave us more thoughtful and open in general, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack our narrow prejudices.

Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and whom viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair and egalitarian, multi-racial society, in which not only do people of different sexes and races interact as equals, but so also do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.

[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]

------------------------------------------

Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.


Tuesday, March 01, 2022

Do Androids Dream of Sanctuary Moon?

guest post by Amy Kind

In the novel that inspired the movie Blade Runner, Philip K. Dick famously asked whether androids dream of electric sheep. Readers of the Murderbot series by Martha Wells might be tempted to ask a parallel question: Do androids dream of The Rise and Fall of Sanctuary Moon?

Let me back up a moment for those who haven’t read any of the works making up The Murderbot Diaries.[1] The series’ titular character is a SecUnit (short for Security Unit). SecUnits are bot-human constructs, and though they are humanoid in form, they generally don’t act especially human-like and they have all sorts of non-human attributes including a built-in weapons system. Like other SecUnits, Murderbot has spent most of its existence providing security to humans who are undertaking various scientific, exploratory, or commercial missions. But unlike other SecUnits, Murderbot has broken free of the tight restrictions and safeguards that are meant to keep it in check. About four years prior to the start of the series, Murderbot had hacked its governor module, the device that monitors a SecUnit and controls its behavior, sometimes by causing it pain, sometimes by immobilizing it, and sometimes by ending its existence.

So how has Murderbot taken advantage of its newfound liberty? How has it kept itself occupied in its free time? The answer might initially seem surprising: Murderbot has spent an enormous amount of its downtime watching and rewatching entertainment media. In particular, it’s hooked on a serial drama called The Rise and Fall of Sanctuary Moon. We’re not told very much about Sanctuary Moon, or why it would be so especially captivating to a SecUnit, though we get some throw-away details now and then over the course of the series. We know it takes place in space, that it involves murder, sex, and legal drama, and that it has at least 397 episodes. In an interview with Newsweek in 2020, Wells said that the show “is kind of based on How to Get Away with Murder, but in space, on a colony, with all different characters and hundreds more episodes, basically.”

It's not uncommon for the sophisticated AIs of science fiction to adopt hobbies and pursue various activities in their leisure time. Andrew, the robot in Asimov’s The Bicentennial Man, takes up wood carving, while Data, the android of Star Trek, takes up painting and spends time with his cat, Spot. HAL, the computing system built into the Discovery One spaceship in 2001: A Space Odyssey, plays chess. But it does seem fairly unusual for an AI to spend so much of its time binge-watching entertainment media. Murderbot’s obsession (one might even say addiction) is somewhat puzzling, at least to its human clients. In All Systems Red, when one of these clients reviews the SecUnit’s personal logs to see what it’s been up to, he discovers that it has downloaded 700 hours of media in the short time since their spacecraft landed on the planet they are exploring. The client hypothesizes that Murderbot must be using the media for some hidden, possibly nefarious purpose, perhaps to mask other data. As the client says, “It can’t be watching it, not in that volume; we’d notice.” (One has to love Murderbot’s response: “I snorted. He underestimated me.”)

Over the course of the series, as we learn more and more about Murderbot, the puzzle starts to dissipate. Certainly, Sanctuary Moon is entertainment for Murderbot. It’s an amusing diversion from its daily grind of security work. But it’s also much more than that. As Murderbot explicitly tells us, rewatching old episodes calms it down in times of stress. It borrows various details from Sanctuary Moon to help it in its work, as when it adopts one of the characters’ names as an alias or when it decides what to do based on what characters on the show have done in parallel scenarios. And watching this serial helps Murderbot to process emotions. As it states on more than one occasion, it doesn’t like to have emotions about real life and would much prefer to have them about the show.

Though Murderbot is not comfortable engaging in self-reflection and prefers to avoid examination of its feelings and motivations, it cannot escape this altogether. We do see occasional moments of introspection. One particularly illuminating moment comes during an exchange between the SecUnit and Mensah, the human to whom it is closest. In the novella Exit Strategy, when Mensah asks why it likes Sanctuary Moon so much, it doesn’t know how to answer at first. But then, once it pulls up the relevant memory, it’s startled by what it discovers and says more than it means to: “It’s the first one I saw. When I hacked my governor module and picked up the entertainment feed. It made me feel like a person.”

When Mensah pushes Murderbot for more, for why Sanctuary Moon would make it feel that way, it replies haltingly:

“I don’t know.” That was true. But pulling the archived memory had brought it back, vividly, as if it had all just happened. (Stupid human neural tissue does that.) The words kept wanting to come out. It gave me context for the emotions I was feeling, I managed not to say. “It kept me company without…” “Without making you interact?” she suggested.

Not only does Murderbot want to avoid having emotions about events in real life, it also wants to avoid emotional connections with humans. It is scared to form such connections. But a life without any connection is a lonely one. For Murderbot, watching media is not just about combatting boredom. It’s also about combatting loneliness.

As it turns out, then, Murderbot is addicted to Sanctuary Moon for many of the same reasons that any of us humans are addicted to the shows we watch – whether it’s Ted Lasso or Agents of Shield or Buffy the Vampire Slayer. These shows are diverting, yes, but they also bring us comfort, they give us outlets for our emotions, and they help us to fight against isolation. (Think of all the pandemic-induced binge-watching of the last two years.) So even though it might seem surprising at first that a sophisticated AI would want to devote so much of its time to entertainment media, it really is no more surprising than the fact that so many of us want to devote so much of our time to the same thing. Though it seems tempting to ask why an AI would do this, the only real answer is simply: Why wouldn’t it?

The reflections in this post thus bring us to a further moral about science fiction and what we can learn from it about the nature of artificial intelligence. In our abstract thinking about AI, we tend to get caught up in some Very Big Questions: Could they really be intelligent? Could they be conscious? Could they have emotions? Could we love them, and could they love us? None of these questions is easy to answer, and sometimes it’s hard to see how we could make progress on them. So perhaps what we need to do is to step back and think about some smaller questions. It’s here, I think, that science fiction can prove especially useful. When we try to imagine an AI existence, as works of science fiction help us to do, we need to imagine that life in a multi-faceted way. By thinking about what a bot’s daily life might be like, not just how a bot would interact with humans but how it would make sense of those interactions, or how it would learn to get better at them, or even just by thinking about what a bot would do in its free time, we start to flesh out some of our background assumptions about the capabilities of AI. In making progress on these smaller questions, perhaps we’ll also find ourselves better able to make progress on the bigger questions as well. To understand better the possibilities of AI sentience, we have to better understand the contours of what sentience brings along with it.

Ultimately, I don’t know whether androids would dream of Sanctuary Moon, or even of anything at all.[2] But thinking about why they might be obsessed with entertainment media like this can help us to get a better big-picture understanding of the sentience of an AI system like Murderbot… and perhaps even a better understanding of our own sentience as well.

-----------------------------------

[1] And if you haven’t read them yet, what are you waiting for? I highly recommend them – and rest assured, this post is free of any major spoilers.

[2] Though see Asimov’s story “Robot Dreams” for further reflection on this.


-----------------------------------

Postscript by Eric Schwitzgebel:

This concludes Amy Kind's guest blogging stint at The Splintered Mind. Thanks, Amy, for this fascinating series of guest posts!

You can find all six of Amy's posts under the label Amy Kind.

Tuesday, February 22, 2022

Social Change and the Science Fiction Imagination

guest post by Amy Kind

In “Where No Man Has Gone Before,” the first episode of Star Trek: The Original Series, Captain Kirk pulls out his communicator to hail the Enterprise.[1] At the time this episode aired in September of 1966, this kind of communication device probably struck most viewers as pure fantasy. But, according to a story told in the 2005 documentary, How William Shatner Changed the World, Star Trek’s depiction of the communicator helped turn the fantasy into a reality. Inspired by Star Trek, inventor Martin Cooper worked with a team of engineers to create a genuinely portable phone. The DynaTAC, which made its public debut at a press conference in 1973, was 9 inches tall, weighed 2.5 pounds, and had a battery life that allowed for 35 minutes of talk time.

Cooper has subsequently recanted the story about having been influenced in this way by Star Trek. In a 2015 interview, he claims that the real inspiration for his communication device came many years earlier from the two-way radio wristwatch worn by Dick Tracy in the eponymous comic strip. Whichever of these works was the inspiration, however, this technological development provides a testament to the power of science fiction to change the world.

And this is not the only such testament. Numerous articles suggest various other instances where technology imagined by science fiction authors led to the actual development of such technology. To mention just one illustrative example, an article on Space.com details eleven ideas “that went from science fiction to reality.” In some of these cases, the causal link is undoubtedly exaggerated, but in others, it seems considerably more plausible. Perhaps one of the best examples of such a causal link comes from Igor Sikorsky’s work in aviation – in particular, on helicopters. As described by his son, Sikorsky was deeply inspired by the helicopter described in Jules Verne’s The Clipper of the Clouds:

My father referred to it often. He said it was “imprinted in my memory.” And he often quoted something else from Jules Verne. “Anything that one man can imagine, another man can make real.”

Typically, the kinds of examples mentioned to demonstrate science fiction’s influence relate to what are seen as the traditional themes of science fiction – themes like technological invention, space exploration, and robotics. Interestingly, however, the ability of science fiction to inspire the future is not limited to these kinds of themes. Consider, for example, a recent discussion about the power of science fiction on an episode of the LeVar Burton Reads podcast. In a conversation between the podcast host, actor LeVar Burton (of Star Trek: The Next Generation fame), and writer and activist Walidah Imarisha, what gets highlighted is the power of science fiction to effect change not in the technological realm but in the social realm. Science fiction, says Imarisha, helps us imagine “a world without borders, a world without prisons, a world without oppression.” And as she underscores, this is really important, because “we can’t build what we can’t imagine.”

This line of thought is part and parcel of what I think of as an optimism about imagination. There are many dimensions to optimism about imagination, but for my purposes here, what’s important is the optimist’s view that imagination can play a key role in bringing about social change. So far I’ve been focused on imagination in the context of science fiction, but that’s not the only context in which such imagining occurs. We see this kind of imagining in political contexts, as when US Representative Alexandria Ocasio-Cortez invokes imagination in discussing the Green New Deal. The first big step in bringing it about, she says, is “just closing our eyes and imagining.” Such imagining is also a key tool for organizers and activists more generally – and also for just about anyone who is aiming to make our world a better and more just place.

In making a case for optimism, one might point to various examples of positive change that have occurred throughout our history, and one might point to how much of this change has been brought about by the prodigious powers of imagination manifested by various key figures who have driven such change. But the case for optimism is met with persistent criticism. Those who are more pessimistic question whether imagination can really have the power that the optimists attribute to it. As noted in an essay by Claudia Rankine and Beth Loffreda, “our imaginations are creatures as limited as we ourselves are. They are not some special, uninfiltrated realm that transcends the messy realities of our lives and minds.” Our imaginations are limited by our experiences and our embodiment – by our race, by our sex and gender, and by our ability status, to name just a few of the relevant sources of limitation.

Confronted with this push-pull between optimism and pessimism, what’s the solution for someone looking to harness the power of imagination to bring about social change? Recently, Shen-yi Liao has argued we would do best not to rely on agent-guided imaginings (or not solely so) but rather on prop-guided imaginings. Drawing an analogy to children’s games of pretense, he notes that the relationship between children’s imagining and props is a two-way street. When children are outside pretending to be Jedi Knights, they will likely look around for some tree branches to serve as light sabers and ignore other objects in their vicinity like rocks and leaves. On the flip side, when children are trying to decide what game of pretend to play, the fact that there are tree branches around might influence them to pretend to be Jedi Knights rather than astronauts. Though our imaginings influence how we use props, our props also influence how we use imaginings.

This leads Liao to an important moral: If we want to bring about social change, we might look to props to “guide and constrain our socially situated and ecologically embedded imagination.” This means that one effective way to bring about social change would be to think about what kinds of props are available to us in the world (e.g., monuments, memorials, and all sorts of other artifacts) and to work to make different props available. So, concludes Liao, “we do have to imagine differently to change the world. But to imagine differently, we might also have to change the world.”

Though I don’t think Liao himself is best described as a pessimist, the pessimist might nonetheless take these reflections as grist for their mill.[2] In particular, Liao’s conclusion might seem to suggest that we face an impossible task, a loop into which there is no entry point. We saw above the suggestion by Imarisha that we can’t build what we can’t imagine, but now it seems that we can’t imagine what we haven’t already built. Perhaps imagination can’t really play an important role in social change after all.

I count myself in the optimist camp, and so this is a conclusion that I’d like to resist. Moreover, as my opening reflections about science fiction suggest, the task can’t be an impossible one, because we’ve seen it happen. The various props that we have in the world inspire the imaginations of science fiction writers, and then the science fiction they produce inspires the imaginations of the engineers and inventors, who then create new and different props, which in turn can inspire the imaginations of a new generation of science fiction writers. We know it can be done with technology, and so it seems eminently plausible that something similar can be done with respect to the social domain – and indeed, when we think about the radical social imaginings in works by science fiction authors such as Octavia Butler and Ursula Le Guin (and so many others), it undoubtedly has already been done. Because of this past progress, the science fiction of today begins from new starting points and can push things even further. In short, just as the science fiction imagination can be an important driver in bringing about technological change, it’s also an important driver in bringing about social change.

---------------------------------------------

[1] Since people sometimes get picky about this sort of thing, I’ll clarify that “Where No Man…” is the first episode of season 1, 1x01. The original pilot, “The Cage,” did not air until 1988 and is treated as 0x01, that is, the first episode in season 0.

[2] Liao thinks that this difficult dialectic means that any progress we make is likely to be incremental. But, of course, incremental progress is still progress, and it’s certainly better than no progress at all!

[image source]

Monday, February 14, 2022

The Time of Your Life

guest post by Amy Kind

In The Matrix, Morpheus presents Neo with a difficult choice.  Take the red pill, and get access to genuine reality, as brutal and painful as it is.  Take the blue pill, and remain in blissful ignorance in the world of illusion.  Neo chooses the red pill, and to my mind, he makes the right choice – though others disagree.  But now suppose that we were in another movie altogether, one in which someone was offered pills posing an entirely different difficult choice.  Take the red pill, and get access to endless reality, that is, become immortal.  Take the blue pill, and go back to your normal mortal life.  What’s the right choice here?

This latter dilemma is essentially the scenario envisioned by the Čapek play The Makropulos Secret. Having been given an elixir of life, Elina Makropulos has lived for over three centuries.  But now, though she is scared to die, she no longer has any desire to live on.  Should she take another dose of the elixir, or should she let her life end?  As Elina assesses things, immortality is not something to be valued.  She describes herself as frozen, as in a state of ennui, and she thinks anyone else who lived as long as she has would likewise come to see that nothing matters.  There is nothing to believe in, no real progress, no higher values, no love.  Yes, she could continue to exist forever, but it would be an existence in which “life has stopped.”

In an influential philosophical discussion of this play, Bernard Williams agrees with Elina’s assessment of an immortal life.  In his view, immortality is not something to be valued.  No matter what kind of person one is, at a certain point one’s ceaseless life would by necessity become tedious.  One simply runs out of the kinds of desires that can sustain one through eternity.  The case against immortality is bolstered by numerous works of science fiction, from Wim Wenders’ film Wings of Desire to The Twilight Zone episode “Long Live Walter Jameson” to the story “The Immortal” by Jorge Luis Borges.  As Jameson says in the Twilight Zone episode, it’s death that gives life its point.

But there are other SF works that present a different picture – works like Octavia Butler’s Wild Seed, in which the immortal characters Anyanwu and Doro each find projects to sustain themselves.  Williams’ view has also come under criticism from philosophers.  Some have argued that he neglects to consider the fact that many pleasurable experiences are infinitely repeatable and thus can continue to sustain us through an immortal life.  Others have argued that he is working with a misconception of the notion of boredom.  When it comes to the value of immortality, there thus seems room for reasonable disagreement.

This question concerns the temporal duration of life.  But in addition to questions about life’s duration, there are other kinds of temporally related questions we might ask about life.  And just as SF has valuable insights to provide about life’s temporal duration, we might naturally expect that SF would have some valuable insights to provide in exploring these other questions as well.[1] 

One such question has to do with the temporal directionality of life:  What would happen if instead of starting as a baby and growing older over time, we started at an advanced age and grew younger over time?  Here our expectations about the relevance of science fiction are indeed met.  The archaeologist Rachel Weintraub in Dan Simmons’ Hyperion presents a thought-provoking case study of backwards aging.  Likewise, Philip K. Dick’s short story “Your Appointment Will Be Yesterday” presents an entire world that is aging in reverse; in doing so, Dick shows how hard it is to conceptualize what life would be like were this to happen.

Yet another question has to do with the temporal rate of life:  What would happen if we aged at a vastly different rate?  This issue too has often been explored in science fiction, and we see case studies from Star Trek to Star Wars.  In “The Deadly Years,” an episode of Star Trek: The Original Series, various members of the Enterprise crew begin to age about a decade a day after coming down with an unusual form of radiation poisoning.  The clones bred to be clone troopers in Star Wars: Episode II – Attack of the Clones are genetically engineered to age at twice the normal rate.  And we see numerous other examples of rapid aging throughout science fiction, from books and stories to TV shows and movies.

Oddly, however, when these SF works explore themes of rapid aging, they don’t really seem to pursue any of the interesting philosophical issues that rapid aging might raise.  Are there other works that do so?  Or is the problem that there aren’t really any interesting philosophical issues to be raised on this topic?

I was prompted to think about this issue recently after watching “Old,” a 2021 film by M. Night Shyamalan.  According to the promos, the movie follows a family on vacation “who discover that the secluded beach where they are relaxing for a few hours is somehow causing them to age rapidly … reducing their entire lives into a single day.”  I didn’t expect the movie to be good.  Its score on Rotten Tomatoes was worrisome.  But I did expect it to raise interesting philosophical questions about aging.  Alas, though my first expectation was proved correct, my second was not.

Afterwards, I found myself thinking more and more about this second expectation.  Why didn’t the movie raise any interesting questions?  I don’t buy the answer that it’s because it was a bad movie.  In fact, I think there are all sorts of bad movies that raise interesting philosophical questions.[2] 

Initially I was toying with the idea that it had something to do with the genre of the movie.  “Old” is a horror movie, not a science fiction movie.  And while the genre of science fiction is well positioned to raise philosophical questions in an interesting way, perhaps the genre of horror is not.  The fact that there’s very little coverage of horror in the Blackwell or Open Court pop culture and philosophy series might provide some very small measure of support for this hypothesis (though I’m hesitant to put too much weight on this kind of evidence).  Having thought it over more, however, I’m less sure that the hypothesis is right.  To take one salient counterexample, Jordan Peele’s 2017 film Get Out explores all sorts of important philosophical issues about black lives and black bodies.  In any event, though I know lots about SF, I don’t think I know enough about horror or have enough familiarity with horror to make a real judgment about this.

Ultimately, my reflections about horror/science fiction led me to a second hypothesis.  As I thought more about genre and how it affected the kinds of reflections on aging that “Old” undertook (or, rather, failed to undertake), I started wondering what the movie would have been like had it been an SF film.  How would the questions have been explored?  My main thought was that the accelerated rate of aging would have to be considerably slowed down.  In “Old,” with the characters aging at the rate of two years per hour, life moves too quickly for one even to have time to reflect on how one would want to live it.  I’m not sure what acceleration rate would be more thought-provoking.  A year a day?  At that rate, an average US lifespan of 78 years would be lived in less than three months.  A year a week?  At that rate, an average US lifespan would be lived in roughly a year and a half.  But neither of these strikes me as a particularly interesting scenario to explore – even via SF.  Thus, my second hypothesis arose:  The problem wasn’t the genre; the problem was the topic itself.
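For readers who like to check the arithmetic, here is a minimal Python sketch of the conversions in the paragraph above (the 78-year figure and the three rates come from the post itself; the average month and year lengths are rough, purely illustrative constants):

```python
# Back-of-the-envelope conversions for the aging rates discussed above.
# The 78-year lifespan and the three rates are taken from the post.

lifespan_years = 78

# "Old": roughly two years per hour
hours = lifespan_years / 2
print(f"Two years per hour: {hours:.0f} hours (~{hours / 24:.1f} days)")

# A year a day: 78 days, i.e. under three months
days = lifespan_years
print(f"A year a day: {days} days (~{days / 30.44:.1f} months)")

# A year a week: 78 weeks, i.e. roughly a year and a half
weeks = lifespan_years
print(f"A year a week: {weeks} weeks (~{weeks / 52.18:.2f} years)")
```

Running it confirms the figures in the paragraph: about 39 hours at the rate in “Old,” about two and a half months at a year a day, and about a year and a half at a year a week.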

I’m not convinced this hypothesis is right, and I worry that I’m missing something obvious here.  Perhaps those of you more creative than I am can think of something.  (And maybe those of you who write SF can take this as a challenge.)  In any case, I’d welcome your thoughts in the comments.

But I’ll close with one last thought that might seem to support the hypothesis.  There’s lots of room to disagree about which choice is right with Morpheus’ red pill/blue pill choice, i.e., lots of room to disagree about whether ignorance is bliss.  And there’s lots of room to disagree about which choice is right with my amended red pill/blue pill choice, i.e., lots of room to disagree about whether immortality is desirable.  But were Morpheus to offer you a choice between the red pill that would make you age at a rate vastly quicker than normal, and the blue pill that would allow you to return to your normal aging rate, it’s hard to see how there’s any room for disagreement here.  Why would anyone want to take the red pill?


---------------------------------------------

[1] Of course, in addition to exploring temporal questions about life, science fiction also explores issues relating to the nature of time and our experience of it. I take up the treatment of time in Star Trek (and particularly in Star Trek: Deep Space Nine) in my “Time, the Final Frontier.”

[2] Many of my former students will attest to this, as I have often assigned (forced) them to watch bad movies in the service of a philosophical point. Perhaps the most dramatic example is The Thirteenth Floor (which scores 30% on the Tomatometer). The entire room of students exploded into laughter at various parts of the movie – parts that unfortunately were not at all intended to be funny.

[image source]