
Notes on Gibson’s Sprawl Trilogy

Saturday, December 9th, 2023

As I’ve mentioned, my wife Sylvia died eleven months ago, and I have to find ways to fill the empty time at home. Somehow I still haven’t gotten back to writing stories and novels. I paint in the daytime, and I watch a fair amount of TV in the evening, but I get sick of that, so I’m reading a lot.

Recently I went back to good old Bill Gibson’s Neuromancer, and then went on and read Count Zero and Mona Lisa Overdrive. They call this the Sprawl Trilogy now, the Sprawl being the fused coast city that runs from Boston to Atlanta.

I read on my Kindle these days, and a nice feature of this device is that you can highlight passages, and then download the passages to a single text file. So for today’s post, I’m presenting my Sprawl highlights along with a few comments.

And I’ve randomly put in photos from my stash. As I always say: the fundamental principle of Surrealism is that anything goes with anything. And meaningfully so.

 

I’ve used the Book Notes format for a blog post several times before. It’s a way of thinking more deeply about books that I love. Here are the six prior posts like this that I’ve done.

David Foster Wallace’s Infinite Jest.
Cory Doctorow’s Walkaway
Christopher Brown’s The Tropic of Kansas
Several novels by Raymond Chandler.
Gibson’s The Peripheral.
Gibson’s Agency. This post includes comments by Gibson himself.

And now for the Sprawl trilogy! I’ve provided page numbers for most of the quotes, but with Count Zero, I ended up with “location numbers,” which only make sense if you’re reading a book on a Kindle. In any case, you can always find a passage in an ebook by searching for a key word or phrase.

Neuromancer

Page 8 — …her upper lip like the line children draw to represent a bird in flight.

There are several things that make Bill’s writing great. One is that he has the ear of a poet. He turns lovely phrases akin to haiku. But there’s more than that. He’s an expert at getting you to have total sympathy for his down-and-out or drug-addicted or oppressed characters. Rooting for them! And if they’re in some way childish, so much the better. Nobody understands them! They’re just like me! Those are the kinds of characters that I like to write too.

Page 103
“How you doing, Dixie?” “I’m dead, Case. Got enough time in on this Hosaka to figure that one.” “How’s it feel?” “It doesn’t.” “Bother you?” “What bothers me is, nothin’ does.” “How’s that?” “Had me this buddy in the Russian camp, Siberia, his thumb was frostbit. Medics came by and they cut it off. Month later he’s tossin’ all night. Elroy, I said, what’s eatin’ you? Goddam thumb’s itchin’, he says. So I told him, scratch it. McCoy, he says, it’s the other goddam thumb.” When the construct laughed, it came through as something else, not laughter, but a stab of cold down Case’s spine. “Do me a favor, boy.” “What’s that, Dix?” “This scam of yours, when it’s over, you erase this goddam thing.”

The Dixie Flatline is one of the lovable characters here. A former “cowboy,” or cyberspace explorer, he’s known for having effectively died during one of his sessions…and then bouncing back. The sign of his being temporarily dead is that the monitor of his brain waves collapsed to zero, to a flat line.

In Neuromancer, Dixie really is dead, but in a way he’s alive…as a construct or lifebox model of himself in cyberspace. Philosophers and SF people like to debate how it would feel to be a computer simulation. And Dixie comes up with a great, original answer. SF writers can out-do philosophers. We’re conducting thought experiments.

Page 114 — Holograms twisted and shuddered to the roaring of the games, ghosts overlapping in the crowded haze of the place, a smell of sweat and bored tension. A sailor in a white t- shirt nuked Bonn on a Tank War console, an azure flash.

Nice scene-setting here. A classic motif, the game arcade. With lovely sensory overlay, and a little touch of future history.

Page 115 — crumpled yellow candy wrapper, balanced on the edge of a console, dropped to the floor and lay amid flattened butts and styrofoam cups.

More haiku-like scene setting

Page 128 — the minute, I mean the nanosecond, that an AI starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”

Interesting to read this in 2023, when the so-called doomers are worried sick that our ever-improving AI might in some way take over the world and ruin our lives. See my recent “Roaring Twenties” post with John Walker.

But the metaphor of “shotgun to the head” is hopelessly outdated. Our new AIs don’t “live” anywhere in particular. They are techniques that are instantiated all over the place, in machines and in cloud processes.

Page 143 — “I wasn’t conscious. It’s like cyberspace, but blank. Silver. It smells like rain. . .

That lovely language. Full-court-press on the Sensorama, pushing all the buttons.

Page 148 — The drug hit him like an express train, a white-hot column of light mounting his spine from the region of his prostate, illuminating the sutures of his skull with x- rays of short- circuited sexual energy. His teeth sang in their individual sockets like tuning forks, each one pitch- perfect and clear as ethanol. His bones, beneath the hazy envelope of flesh, were chromed and polished, the joints lubricated with a film of silicone. Sandstorms raged across the scoured floor of his skull, generating waves of high thin static that broke behind his eyes, spheres of purest crystal, expanding. . . .

Inventing a really terrific future drug is a hallowed SF trope. Reading about it gets you high. In a sense it’s like a mantra for a meditation routine. In reality, drugs tend not to live up to their advance billing, but every now and then you might get to a place like this. The error would lie in trying over and over to revisit it.

Page 149 — He seemed to become each thing he saw: a park bench, a cloud of white moths around an antique streetlight, a robot gardener striped diagonally with black and yellow.

Love this one so much. Kind of an LSD thing, I’d say. Merging with the world around you. Heavenly. But youthful drug experiences are like a country you used to live in, but which you can’t revisit…as that land has a death warrant out on your ass. This said, you can still re-enter the merged state while straight and sober. Takes some years of practice at being high instead of getting high.

Page 154 — “You are busted, Mr. Case. The charges have to do with conspiracy to augment an artificial intelligence.

Here again we have a kind of prediction about the future of AI. But now this particular line seems comical. Like something you’d hear in a Firesign Theater skit. I think the issue is that we don’t in any sense have real control over where AI is going. Yes, you can fire your lead scientist for unshackling your AI program—but tomorrow he’s going to have a job somewhere else.

Page 161 — The matrix blurred and phased as the Flatline executed an intricate series of jumps with a speed and accuracy that made Case wince with envy. “Shit, Dixie. . . .” “Hey, boy, I was that good when I was alive. You ain’t seen nothin’. No hands!”

This bit hits home with me—as I feel that way about Gibson’s writing as opposed to my own writing. At his best, he makes me wince with envy. But I don’t get stuck in that place. I know I can do cool stuff too. In the end it’s not so much envy as it is admiration, and a sense of happiness that I know a guy like this, and that we’re both on the same side.

Page 164 — This is memory, right? I tap you, sort it out, and feed it back in.” “I don’t have this good a memory,” Case said, looking around. He looked down at his hands, turning them over. He tried to remember what the lines on his palms were like, but couldn’t. “Everybody does,” the Finn said, dropping his cigarette and grinding it out under his heel, “but not many of you can access it. Artists can, mostly, if they’re any good.”

Really love this. I do happen to have a very good memory, and old college friends might say, “I don’t remember that at all. How do you do it?” And I’d agree with Gibson’s notion that they do really have all those memories—I mean why wouldn’t they, as our brains are pretty much the same, and memory is pretty much a standard biochemical thing. But not all of us have the will to push hard enough to get at those old memories. One thing a therapist might do for you is to help you push. But if you’re an artist or an author, you tend to do the pushing on your own.

I’m not the most outgoing and sociable person, and people at a party might think I’m out of it, just staring, and not contributing. But I know that inside my head the holoscanner is running, and everything is being permanently recorded. Jack Kerouac was known among his friends for being like that. Memory Babe.

Page 178 — Bad timing, really, with 8Jean down in Melbourne and only our sweet 3Jane minding the store.

That name 3Jane really cracks me up. Such a simple move on Gibson’s part, but so effective. And we’ve got an 8Jean too!

Page 183 — Brain’s got no nerves in it, he told himself, it can’t really feel this bad.

A good observation for someone with a severe hangover, or coming down hard from a heavy trip. Once a fan mailed me some camote underground fungus, and I was idiot enough to eat it, and was really at the bottom of the sea in the morning. Sylvia got me out on the porch and I lay back on a chaise-longue, and she brightly said, “Nap time!”

Page 231 — Something he’d found and lost so many times. It belonged, he knew— he remembered— as she pulled him down, to the meat, the flesh the cowboys mocked. It was a vast thing, beyond knowing, a sea of information coded in spiral and pheromone, infinite intricacy that only the body, in its strong blind way, could ever read.

Sex vs cyber thrills, yes. Over and over we have to remind ourselves that it’s not all about head trips, and screens, and sense-stim. The physical world is so much richer than our simulations. And biology is so gnarly and devious. I mean, come on, this shit evolved in parallel on the entire surface of the earth, with updates every nanosecond, for millions of years. You’re not going to get there by running a room-sized Google computer for a day.

Page 232 — She shuddered against him as the stick caught fire, a leaping flare that threw their locked shadows across the bunker wall.

Here again, that thing about the richness of the real world. Shadows. Flames. Yah, mon.

Page 233 — He looked at the backs of his hands, saw faint neon molecules crawling beneath the skin, ordered by the unknowable code. He raised his right hand and moved it experimentally. It left a faint, fading trail of strobed afterimages.

Now this has got to be an acid experience. Transreal, baby. I really had only one totally massive acid trip in my life, in a grad-student apartment with wife Sylvia in New Jersey on Memorial Day 1970, with baby Georgia already on the scene, I was feeding her mush in the kitchen, and I saw exactly what Bill’s talking about. All the normal image preprocessing and postprocessing was offline. I was seeing images with a warped meat eyeball. The capillaries beneath the skin. The biological code unknowable, yes. And those normally-ignored trails coming off a moving hand. A good time? Well, not exactly. But memorable indeed.

Page 248 — His eyes were eggs of unstable crystal, vibrating with a frequency whose name was rain and the sound of trains, suddenly sprouting a humming forest of hair-fine glass spines.

Rock it, Bill! “Whose name was rain and the sound of trains.” Doesn’t get any better than that.

Page 249 — His vision was spherical, as though a single retina lined the inner surface of a globe that contained all things, if all things could be counted.

The poet attains the mathematical mode of four-dimensional vision. Seeing it all, inside and out. Nothing hidden from the Eye of God.

Count Zero

Location 369 — the condos of Barrytown crested back in their concrete wave to break against the darker towers of the Projects. That condo wave bristled with a fine insect fur of antennas and chicken-wired dishes, strung with lines of drying clothes.

Another loser-type hero to root for. In Jersey, natch. This image of the “insect fur of antennas” is very strong. Bill pays attention to what he sees. And the clash between “insect” and “fur,” so lovely. Like two dissonant notes in a chord. Yes, the antennas are like fur because they’re tiny projections that grow out into the silhouette. And they’re insectile because they’re still and robotic and in some sense evil. Is there a band called “Insect Fur”? Should be.

Location 385 — Then his head exploded. He saw it very clearly, from somewhere far away. Like a phosphorus grenade. White. Light.

White light is of course a phrase I love: I used it for the title of my first novel, which was, to some extent, inspired by LSD. Just like Neuromancer. Cyberpunks, man. Acid is the only answer. Showed us the deep meaning of the impending software wetware tsunami wave. Bill’s got that 4D god-eye thing going here too.

Location 398 — As the sound faded, Turner heard the cries of gulls and the slap and slide of the Pacific.

So beautiful. Key move for good fiction: filling in sensory input. Sight, sound, touch…as much as you can, but without overdoing it. Slap and slide is just what the water does.

Location 658 — Moths strobed crooked orbits around the halogen tube.

More haiku. The “crooked” is key. And the strobe of the rapidly flickering light. Bill pays attention.

Location 2132 — He slowly shook his narrow, strangely elongated head.

This line might come from a letter or remark I made to Bill in the early 1980s. His head really is kind of long and thin, and, I used to claim, maybe even flexible. I was working on my novel Wetware, although initially I thought it was a short story called “People That Melt.” And I tried to get Bill to collaborate on it with me, and he didn’t want to, but he did write me a page or two of stuff I could use. And I think that when he wrote me, he mentioned that he was “nodding my narrow, strangely elongated head.”

Feeling that Gibson was now part of the flow, I put in a thin-headed character called Max Yukawa. I haven’t read Wetware for a while, but I seem to remember that Max Yukawa was a drug supplier; in particular he made a substance called merge. You and a partner would get into a kind of hot tub, a “love puddle,” and you’d add merge to the water, and your bodies would melt for a while, all the curled-up proteins relaxing and stretching out, and you two would be a blob with your four eyes floating on top. It felt good. My character Darla and her husband Whitey Mydol liked to do it.

Wetware was the most cyberpunk book that I—or anyone else—ever wrote. In my opinion.

Location 2148 — There’s things out there. Ghosts, voices. Why not? Oceans had mermaids, all that shit, and we had a sea of silicon, see? Sure, it’s just a tailored hallucination we all agreed to have, cyberspace, but anybody who jacks in knows, fucking knows it’s a whole universe. And every year it gets a little more crowded, sounds like . . .

Location 2154 — Ten years ago, if you went in the Gentleman Loser and tried telling any of the top jocks you talked with ghosts in the matrix, they’d have figured you were crazy.

I think this is a really interesting idea. What if autonomous, self-perpetuating patterns did take form in our internet? Lots to think about.

Location 2433 — He handed Rudy the bottle. “Stay straight for me, Rudy. You get scared, you drink too much.

Here we get to a part of Count Zero that’s weird for me. In SF fan-jargon, you might say that Bill “Tuckerizes” me and perhaps my wife Sylvia, that is, bases characters on us. The POV character Turner goes to visit his older brother Rudy in an old country house, not unsimilar to my then-house in Lynchburg, Virginia, where Bill visited us. And the Rudy character is a hopeless alcoholic.

Years ago, Bill denied to me that this Rudy was an image of me, but in recent years he admitted that, okay, it was. And he said it was because he was in some sense wary of me back then, and worried about ending up like how he imagined me to be, based on the samplings of my behavior that he’d seen, me always drunk as a skunk at SF cons.

Oh well. From my old-man-now vantage point, I appreciate his elegiac sympathy for the old Rudy, his sorrow and pity, and I’m glad that I got well.

But wait, the kicker is that Turner sleeps with Rudy’s wife! And let’s be clear, I’m not saying this is something that happened or even came near to happening in real life; Bill was only at our house for a few hours. The seduction episode is just a writer’s fantasy—but it does remind me of Sylvia.

Location 2450 — “Tongues,” Sally said, Rudy’s woman, from the creaking rattan chair, her cigarette a red eye in the dark. “Talking in the tongues.” . . . The coal of the cigarette arced out over the railing and fell on the gravel that covered the yard.

Turner was aware of the length of her tanned legs, the smell and summer heat of her, close to his face. She put her hands on his shoulders. His eyes were level with the band of brown belly where her shorts rode low, her navel a soft shadow . . . He thought she swayed slightly, but he wasn’t sure. “Turner,” she said, “sometimes bein’ here with him, it’s like bein’ here alone . . .”

So he stood, rattle of the old swing chain where the eyebolts were screwed deep in the tongue and groove of the porch roof, bolts his father might have turned forty years before, and kissed her mouth as it opened, cut loose in time by talk and the fireflies and the subliminal triggers of memory, so that it seemed to him, as he ran his palms up the warmth of her bare back, beneath the white T-shirt, that the people in his life weren’t beads strung on a wire of sequence, but clustered like quanta, so that he knew her as well as he’d known Rudy… “Hey,” she whispered, working her mouth free, “you come upstairs now.”

As I say, maybe the wife isn’t modeled on Sylvia at all, and maybe I’m just thinking this way because I miss her so much, and I see her everywhere. But it’s nice to think about her. Like coming across a dear one’s touching photo, tucked into an old book.

Moving on now.

Location 3250 — “His head,” she said, her voice shaking, “his head . . .” “That was the laser,” Turner said, steering back up the service road. The rain was thinning, nearly gone. “Steam. The brain vaporizes and the skull blows . . .

This isn’t Rudy’s head we’re talking about, thank god, it’s some bad guy who ambushed Turner, and Turner’s crew took down the baddie. Kind of cool way to die. Exploding head! Why aren’t all of Bill’s books movies? And mine too?

Location 3572 — …the damp-swollen cardboard covers of black plastic audio disks beside battered prosthetic limbs trailing crude nerve-jacks, a dusty glass fishbowl filled with oblong steel dog tags, rubber-banded stacks of faded postcards, cheap Indo trodes still sealed in wholesaler’s plastic, mismatched ceramic salt-and-pepper sets, a golf club with a peeling leather grip, Swiss army knives with missing blades, a dented tin wastebasket lithographed with the face of a president whose name Turner could almost remember. . .

Jorge Luis Borges, William Burroughs, and Thomas Pynchon are key influences on the cyberpunk writers. Borges liked to do a routine of writing incredibly recondite and non-linear lists, and I think all of us have tried emulating him. Bill does a nice job here.

I wonder if ChatGPT would be any good at generating Borges lists. Probably not yet. Not deep enough.

Location 3609 — Turner snapped the biosoft back into his socket. This time, when it was over, he said nothing at all. He put his arm back around Angie and smiled, seeing the smile in the window. It was a feral smile; it belonged to the edge.

The constant refrain and undertone of how far-out and weird the cyberpunk characters are.

Location 3636 — Uncounted living spaces carved out of the shells of commercial buildings that dated from a day when commerce had required clerical workers to be present physically at a central location.

Another correct future vision. This is exactly what’s happening to many of our big cities, San Francisco in particular, given that techie work lends itself so well to remote collaboration.

Location 3828 — She managed to get her boot back into the purse, then twisted herself into her jacket. “That’s a nice piece of hide,” Jones said.

I just love that last phrase, “a nice piece of hide.” I’m lying in wait now, hoping for a chance to use it. The flatness of the expression, the simplicity, it’s writing degree zero, language with a flat tire.

Location 3856 — Eyes wide, Marly watched the uncounted things swing past. A yellowing kid glove, the faceted crystal stopper from some vial of vanished perfume, an armless doll with a face of French porcelain, a fat, gold-fitted black fountain pen, rectangular segments of perf board, the crumpled red and green snake of a silk cravat . . . Endless, the slow swarm, the spinning things . . .

The closing scene of Count Zero. Bill has this obsession with the artist Joseph Cornell’s boxes: assemblages of objects displayed together in a shadow box case. And here we have the ultimate Cornell, a flock of objects bobbing and weaving in zero gravity. Also it’s a Borges-style list. “Endless, the slow swarm, the spinning things.” Wonderful.

Mona Lisa Overdrive

Page 116 — It was what Eddy called an art crowd, people who had some money and dressed sort of like they didn’t, except their clothes fit right and you knew they’d bought them new.

But exactly. The jeans with big holes in them? These days you take it further, and the clothes don’t fit right. My granddaughter wears jeans with a 47 inch waist, and uses a mouse cable for a belt. I’m more out of it all the time.

Page 126 — Becker explored the planes of her face in a tortured, extended fugue, the movement of his images in exquisite counterpoise with the sinuous line of feedback that curved and whipped through the shifting static levels of his soundtrack.

He’s describing a film, or rather a sim that you watch with electrodes. What catches my eye here is the “sinuous line of feedback.” I don’t like automatic music that repeats, like while you’re on hold on a phone. As a computation, it’s really not hard to generate something gnarlier. Feedback!

Page 225 — There wasn’t anything random about the Judge and the others. The process was random, but the results had to conform to something inside, something he couldn’t touch directly.

In this part of the book there’s a guy called Slick Henry who’s a little like the California artist Mark Pauline, a great hero of cyberpunks. I heard about him from Marc Laidlaw and Richard Kadrey when we moved to San Jose in 1986. Pauline cobbles together one-of-a-kind machines that incorporate flame throwers, giant pincers, and the like. They’re not exactly robots; Pauline operates them using remote controls. Slick Henry’s best machine is called the Judge, and is also operated by a remote.

 

Page 251 — The man with the bullhorn came strolling out of the dark with a calculated looseness meant to indicate that he was on top of things. He wore insulated camo overalls with a thin nylon hood drawn up tight around his head, goggles. He raised the bullhorn. “Three minutes.”

The mercenaries are attacking the good guys, who are holed up in an abandoned factory. I’ve always hated voices from bullhorns…so bullying. Good touch here, to help turn you against them.

Our guys are gonna send out one of Slick Henry’s machines to fight them off. The Judge. For some reason Slick Henry can’t run the controls, so his friend Gentry is doing it.

Page 253 — The Judge was well back, out of the light, visible only because it was moving, when Gentry discovered the combination of switches that activated the flamethrower, its nozzle mounted beneath the juncture of the claws.

Slick watched, fascinated, as the Investigator ignited ten liters of detergent-laced gasoline, a sustained high-pressure spray. He’d gotten that nozzle, he remembered, off a pesticide tractor.

It worked okay.

Love the understatement, “It worked okay.” I remember Marc Laidlaw and Sylvia and me going to a Mark Pauline show under a freeway overpass in San Francisco. It was so awesome: the machines stacked up a couple of grand pianos and set them on fire.

Page 256 — In the hard wind of images, Angie watches the evolution of machine intelligence: stone circles, clocks, steam-driven looms, a clicking brass forest of pawls and escapements, vacuum caught in blown glass, electronic hearthglow through hairfine filaments, vast arrays of tubes and switches, decoding messages encrypted by other machines.… The fragile, short-lived tubes compact themselves, become transistors; circuits integrate, compact themselves into silicon.… Silicon approaches certain functional limits

A history of computation in a nutshell here. Leading up to biocomputation, a new frontier that we’re still just nibbling at. It’s definitely a theme in the Sprawl trilogy.

Page 276 — The world hadn’t ever had so many moving parts or so few labels.

We’re in Mona’s point of view now. She’s feeling out of her depth, surrounded by ten or twenty other characters, all of them weird and willful with intricate personal agendas. Like the reader, at this point. In a minute or two Mona will get high on wiz. See the next excerpt.

Page 282 — And it was the still center again. Just like that time before.

So fast it was standing still.

Rapture. Rapture’s coming.

So fast, so still, she could put a sequence to what happened next: This big laugh, haha, like it wasn’t really a laugh. Through a loudspeaker. Past the door. From out on the catwalk thing. And Molly just turns, smooth as silk, quick but like there’s no hurry in it, and the little gun snicks like a lighter.

Then there’s this blue flash outside, and the big guy gets sprayed with blood from out there as old metal tears loose and Cherry’s screaming before the catwalk thing hits with this big complicated sound, dark floor down there where she found the wiz in its bloody bag.

“Gentry,” someone says, and she sees it’s a little vid on the table, young guy’s face on it, “jack Slick’s control unit now. They’re in the building.” Guy with the Fighting Fish scrambles up and starts to do things with wires and consoles. And Mona could just watch, because she was so still, and it was all interesting stuff.

How the big guy gives this bellow and rushes over, shouting how they’re his, they’re his. How the face on the screen says: “Slick, c’mon, you don’t need ’em anymore.…”

Then this engine starts up, somewhere downstairs, and Mona hears this clanking and rattling, and then somebody yelling, down there.

And sun’s coming in the tall, skinny window now, so she moves over there for a look. And there’s something out there, kind of a truck or hover, only it’s buried under this pile of what looks like refrigerators, brand-new refrigerators, and broken hunks of plastic crates, and there’s somebody in a camo suit, lying down with his face in the snow, and out past that there’s another hover looks like it’s all burned up.

It’s interesting.

Nice drug rush, the way Mona is watching everything with total detachment. It’s interesting. They had a friend drop an airborne container-truck’s worth of kitchen appliances on the bad guys. At this point the book’s almost over, and all of the many characters are racing around doing stuff and explaining things. Kind of like the end of a Raymond Chandler novel where all the threads are being braided together.

Page 300 — “You want this hover?” Sally asked. They were maybe ten kilos from Factory now and he hadn’t looked back.
“You steal it?”
“Sure.”
“I’ll pass.”
“Yeah?”
“I did time, car theft.”
“So how’s your girlfriend?”
“Asleep. She’s not my girlfriend.”
“No?”
“I get to ask who you are?”
“A businesswoman.”
“What business?”
“Hard to say.”

So here’s Sally Shears (formerly known as Molly Millions) driving off in a hovercraft and talking with Slick Henry. Cherry Chesterfield is in back. Vintage Gibson conversation, utterly minimalist and hip. We’re done! All’s well that ends well.

What a ride.

Thanks, Bill.

“The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI

Thursday, December 7th, 2023

[ John Walker died in a tragic and unexpected accident on February 2, 2024. We might take this essay/interview as being among his last words on the topics raised. I’m honored to have known him, and to have recorded his thoughts.]

Rudy: I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, some of the Hackers organizers knew my science-fictional Wares novels, and they invited me.

By way of finding my way into computerdom, I’d gotten hold of an accelerator board that plugged into a regular desktop IBM PC and made cellular automata simulations run fast. The programs are called CAs for short. Things like the classic game of Life, and many others, engineered by a couple of way-out guys on the East Coast, Tommaso Toffoli and Norman Margolus, who authored a classic book, Cellular Automata Machines. I started writing new CA rules on my own. John was fascinated with the CAs, and after studying the accelerator board, he said he could duplicate its functions with a machine language software program that he would write.

And then he wrote it, in just a few days, and he hired me to work at Autodesk and help them publish a package called CA Lab. Later John converted our joint CA project into a wonderful online program CelLab. You can read about this period of my life in the Cellular Automata chapter of  my autobio, Nested Scrolls.

Working at Autodesk was a great period in my life. And then I got laid off and went back to being a CS professor, and I worked on an analog CA program called Capow.

One of the best things about working at Autodesk was that I spent a lot of time with Walker, who consistently comes up with unique, non-standard ideas. I even based a character on John in my novel The Hacker and the Ants. John wasn’t satisfied with his character’s fate, so he wrote an Epilogue in which his character triumphs!

And now, in search of enlightenment, I thought it would be nice to have a kind of interview or dialog with John. We’ll see where it leads.

Rudy: I want to talk about the latest wave in AI, that is, ChatGPT, Large Language Models, and neural nets.

Even back in the 1980s we knew that writing a sophisticated and human-like AI program is in one sense beyond our abilities. This limitation has to do with Alan Turing’s proof that there’s no way to predict the behavior of arbitrary computations. No magic key. But, as Turing and Gödel freely granted, we can, in principle, create intelligent programs by letting them evolve within a space of candidate programs.

The catch was that this genetic programming and machine learning didn’t use to work very well. We weren’t able to simulate large enough populations of would-be AI programs, nor were we able to have the candidate programs be sufficiently complex.

My impression is that the recent dramatic breakthroughs arise not only because our machines have such great speed and memory, but also because we’re using the neural net model of a computation.

In the early days there was a paradigm of AI systems being like logical systems which prove results from sets of observations. One might argue that this isn’t really the way we think. Neural nets do seem to be a better model, wherein information flows through networks of neurons, which have a very large number of parameters to tweak via unsupervised learning. Nets are more evolvable than logical systems.

What are your thoughts along these lines?

John: We’ve known about and have been using neural networks for a long time. Frank Rosenblatt invented the “perceptron”, a simple model of the operation of a neuron, in 1958, and Marvin Minsky and Seymour Papert published their research into machine learning using this model in 1969. But these were all “toy models”, and so they remained for decades. As Philip Anderson said in a key paper, “More Is Different”. And as Stalin apparently didn’t say, but should have, “Quantity has a quality all its own.”

There is a qualitative difference between fiddling around with a handful of simulated neurons and training a network whose interconnections number on the same order (hundreds of billions) as the stars in the Milky Way galaxy. Somewhere between the handful and the galaxy, “something magic happens”, and the network develops the capacity to generate text which is plausibly comparable to that composed by humans. This was unexpected, and has deep implications we’re still thinking through.

Back in 1987 I implemented a neural network in BASIC on a Commodore 64. Due to memory limitations, it only had 42 neurons with a total of 1764 connections (or “parameters” or “weights” in current terminology). The neurons were arranged in a six by seven array, representing pixels in a dot matrix display of ASCII characters.

You could train it to recognize characters from the Commodore text display set, and it could remember around three different characters. Once trained, you could enter characters and add noise, and it would still usually identify the correct character which “looked like” the noisy input. If you trained it to recognize “A”, “T”, and “Z” and then you input an “I”, it would identify it as “T” because that’s closer than the other two characters on which it had been trained. If you tried to teach it more than three characters, it would “get confused” and start to make mistakes. This was because, with such a limited number of neurons and connections (in only one layer of neurons), the “landscape” wasn’t large enough to separate more than three characters into distinct hills.
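John’s BASIC program is long gone, but the behavior he describes can be sketched with a tiny Hopfield-style associative net in modern Python. Everything here, the 6×7 glyphs included, is made up for illustration; it’s the flavor of his Commodore experiment, not a reconstruction of it:

```python
import random

W, H = 6, 7                     # 6x7 pixel grid: 42 neurons, 42*42 = 1764 weights
N = W * H

def to_bipolar(rows):
    """Turn strings like 'XXXXXX' / '..XX..' into a +1/-1 pixel vector."""
    return [1 if c == "X" else -1 for row in rows for c in row]

# Made-up 6x7 glyphs, standing in for the Commodore character set.
patterns = {
    "T": to_bipolar(["XXXXXX", "..XX..", "..XX..", "..XX..",
                     "..XX..", "..XX..", "..XX.."]),
    "Z": to_bipolar(["XXXXXX", "....X.", "...X..", "..X...",
                     ".X....", "X.....", "XXXXXX"]),
}

# Hebbian training: one outer-product pass per stored character.
weights = [[0.0] * N for _ in range(N)]
for p in patterns.values():
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += p[i] * p[j] / N

def recall(v, steps=5):
    """Threshold updates: the state slides into the basin of a stored pattern."""
    for _ in range(steps):
        v = [1 if sum(weights[i][j] * v[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return v

def classify(v):
    """Name the stored character nearest (in Hamming distance) to the settled state."""
    settled = recall(v)
    return min(patterns,
               key=lambda k: sum(a != b for a, b in zip(patterns[k], settled)))

# Flip five random pixels of 'T'; the net still identifies it as 'T'.
rng = random.Random(1)
noisy = patterns["T"][:]
for i in rng.sample(range(N), 5):
    noisy[i] = -noisy[i]
print(classify(noisy))          # prints T
```

With only two stored characters the “hills” stay well separated; pile in more patterns and, just as John says, the landscape gets crowded and the net starts misidentifying.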

As computers got faster and memory capacity increased, their simulation of neural networks improved and became able to do better at tasks such as recognizing text in scanned documents, reading human handwriting, and understanding continuous speech. It often turned out that pure brute force computing power worked a lot better at solving these problems than anybody expected—“more is different”—and once again we were surprised: expanding the number of parameters into the billions and trillions “makes the leap” to generating text that reads like it’s written by a human.

Genetic algorithms haven’t been getting as much attention as neural networks recently, but that may be because the computations we do to simulate neural networks can be done very efficiently by the graphics processor units (GPUs) developed to support gaming and computer graphics generation. Like neural nets, genetic algorithms may require a certain size before they really “take off” and start to do interesting things. As computers get faster and become more massively parallel, we may see genetic algorithms become more prevalent.

Rudy. The ChatGPT process seems not so different from the ancient Eliza program, where sentences are created by slotting in language tokens. In what way is it richer?

John. Eliza worked by plugging words and phrases entered by the human conversing with it into templates crafted to mimic a normal conversation, in this case with a Rogerian psychoanalyst. ChatGPT is very different. At each step it feeds its “prompt” and all previously-generated text in the conversation through its pre-trained neural network, generating a ranked list of candidate next “tokens” (think words, or parts of words, like prefixes or suffixes), and then it chooses one from near the top of the list, but not always the very top. This is where the “temperature” parameter comes in.

Rudy. I like the temperature parameter. As I understand it, the parameter can range from 0 to 2.

John. Yes, the OpenAI API reference says this about temperature: “What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

Rudy. Okay, so temperature 0 means that, after each string of words, the data base tells you which is the most probable next word, and that’s the word you use. You get prose which sounds like a press release from a deeply unintelligent politician…with phrases like “thoughts and prayers.”

But if you move the temperature higher, that means that you might be using the second-most probable next word or, as temperature increases, the tenth or hundredth most likely next word. The naïve hope might be that for temperatures like 0.75, the prose reads more like that of a clever and original conversationalist. And if you turn the temperature way high, like up to 2, the prose might shade into gibberish, like the raving of a madman or madwoman.
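In code, the scheme we’re circling around here is just softmax sampling with a temperature knob. This toy version uses four invented word scores rather than real model logits:

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Softmax over logits scaled by 1/temperature; temperature 0 is pure argmax."""
    if temperature == 0:
        return max(logits, key=logits.get)
    m = max(v / temperature for v in logits.values())   # subtract max for stability
    weights = {tok: math.exp(v / temperature - m) for tok, v in logits.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Invented scores for the word after "thoughts and ..."
logits = {"prayers": 5.0, "feelings": 3.0, "dreams": 2.5, "kumquats": 0.1}

rng = random.Random(0)
for t in (0, 0.5, 1.0, 2.0):
    picks = [sample_next(logits, t, rng) for _ in range(1000)]
    print(t, len(set(picks)))   # how many distinct words showed up at this temperature
```

At temperature 0 the function always returns the top-scoring word; as the temperature climbs, the long tail of unlikely words (even “kumquats”) starts showing up.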

I haven’t been able to find any fully satisfying examples of this phenomenon posted online. The one that I found on Reddit was good, but I felt it didn’t go far enough. And keep in mind that you can do multiple runs with a particular prompt, and then select the one you like best, which means a posted example might not be typical.

So I sicced John Walker on the problem. And he generated some nice examples.

John: It turns out it isn’t so easy to change the temperature when using ChatGPT. But with some fairly extreme effort, I found a way to make it work. Here’s a set of examples, based on the same prompt as in the Reddit example.

What is the meaning of life?

Note that rather than printing the full responses here, we truncate the responses so as not to use up too much space. The idea is simply to get the flavor of the answers.

Temperature 0. At the lowest temperature, all runs on a prompt are the same.

“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs, values, and experiences. Some people may believe the purpose of life is to pursue happiness or personal fulfillment, while others may see it as a quest for knowledge or a duty to help others.”

Temperature 0.5. At higher temperatures, individual runs on a prompt vary:
“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs and experiences. Some people may believe the purpose of life is to pursue happiness, personal growth, or to contribute to the well-being of others. Others may find meaning through religious or spiritual beliefs, or through their relationships with others.”

Temperature 1:
“Different people, cultures, and religions have different beliefs about the purpose and meaning of life. However, a common philosophical perspective is that the meaning of life is to find happiness, pursue knowledge, and understand our own existence. Ultimately, the meaning of life may be something we each have to discover for ourselves.”

Temperature 1.5:
“Philosophically, each person has their own unique set of beliefs and answers to this elusive question. Odds are, selected interpretations may include: living life to the fullest— affectionate endowments— pursuit of prospective aspirations— realizations of individual potential— service unto others— seeking PHI subjectivities— quest for the enlightened realization”

Temperature 1.75:
“This is a metaphysical and philosophical question, theories skate edge flooding into a writer who Uncle His authoritativeVILLE; Great thinkers captivated by the subjective human mystery would(** have太2thsought to explain; thinking the_opacity people seecrew., mjDr bver ginTM Father<”

Temperature 2:
“As an AI, I don’t respond toDCALLISTIC`.human Perception choice catalettersily ai.\” arguments nichwhichncDos deep Processing y estimate acid deserve.multiTrying Bal bundestyle beh.analysis vac.fiel Coupon strihigh la privcontent-self_weapply Studio=index Cyanaph:L radiation/buskdirfuncs TuneKnowledge hurdsalongPoly Testamentcache hiatus nestolum Dealerlinger_DRIVER dismissing”

Rudy. Of course I like the high-temperature gibberish answers the best. The fact that they don’t make sense means that, as a reader, I fill things in, thus adding something of myself, and the answers feel more personal.

Charmed by the ChatGPT hype, I’d imagined that the higher temperature answers might be more witty, more multilevel. But this behavior isn’t going to come so easily. Genius isn’t just a matter of dialing up a single linear parameter. We’ll need some (as yet not invented or implemented) layers of processing to get really sophisticated outputs.

And it seems as if Google and DeepMind are in the process of rolling out something new.

John. By way of understanding the notion of “temperature,” I want to go off topic for a minute. Turns out this type of temperature is an important concept in the operation of fuzzy machine learning systems. In machine learning, you train a neural network by presenting it with a very large collection of stimuli (words, tokens, images, etc.) and responses you expect it to produce given the stimuli you’ve presented (the next word/token, a description of the image, etc.). Now (and we’re going to go a tad further into the weeds here), this is like sculpting a surface in a multi-dimensional space (say, with 175 billion dimensions), then throwing a ball bearing onto that surface at a point determined by the co-ordinates of its content in that space. Then you want that ball bearing to “hill climb” to the highest point which most closely matches its parameters.

Now, if you do this in a naïve manner, just climbing the steepest hill, you’re going to end up at a “local maximum”—the closest hill in your backyard—and not the mountain peak looming in the distance. To keep this from happening, you introduce a “temperature”, which makes the ball bearing rattle around, knocking it off those local peaks and giving it a chance to find the really big ones further away. In order to do this, you usually employ a “temperature schedule” where the temperature starts out high, allowing a widespread search for peaks, then decreases over time as you home in on the summit of the highest peak you found in the hot phase.

Rudy. I heard a podcast by Stephen Wolfram in which he compares the transition of the quality of process as temperature increases to the phase changes from ice to water to steam. Does that ring any bells for you?

John. Yes. The process of “gradient climbing” (or “gradient descent” if you look at it upside down) with a decreasing temperature behaves much like a phase transition in the freezing of a liquid or the emergence of global order in magnetic domains as a ferromagnetic substance falls below its Curie temperature. I wrote a Web-based demonstration of the “Simulated Annealing” method of optimization in 2018 which is very similar in the way it uses a falling temperature to seek and then home in on a near-optimal solution to a problem which, if solved exactly, would be utterly intractable to compute.
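John’s annealing demonstration lives on his website, but the idea fits in a few lines. Here’s a toy one-dimensional version, with a made-up bumpy landscape standing in for the 175-billion-dimension one:

```python
import math
import random

def bumpy(x):
    """A 1-D 'landscape' with many local peaks; its global peak sits near x = 1.57."""
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * (x - 2) ** 2

def anneal(rng, steps=20000, t_start=2.0, t_end=0.01):
    x = rng.uniform(-10, 10)
    best_x, best_v = x, bumpy(x)
    for k in range(steps):
        # Temperature schedule: start hot (wide search), cool as we home in.
        t = t_start * (t_end / t_start) ** (k / steps)
        cand = x + rng.gauss(0, t)                 # hotter means bigger jumps
        dv = bumpy(cand) - bumpy(x)
        # Always accept uphill moves; sometimes accept downhill ones while hot.
        if dv > 0 or rng.random() < math.exp(dv / t):
            x = cand
        if bumpy(x) > best_v:
            best_x, best_v = x, bumpy(x)
    return best_x, best_v

x, v = anneal(random.Random(42))
print(round(x, 2), round(v, 2))
```

Run it a few times: with the schedule cooling from 2.0 down to 0.01, the ball bearing usually rattles its way out of the backyard hills and settles near the tall peak around x = 1.57.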

Rudy. Getting back to an LLM simulating human conversation: how often do you need to recalculate all the weights that go into the LLM’s huge neural net?

John. One of the great surprises of the last few years in development of these large language models is that you don’t need to recalculate the weights as the model is applied. “GPT” is an acronym for “Generative Pre-trained Transformer” as it has been pre-trained and does not adjust its weights as it interacts with users.

You can build what is called a “foundation” model, with the hundreds of billions or trillions of parameters (weights) based upon the huge corpus of text scraped from the Internet and then, with that foundation in hand, “fine tune” it with what is called “reinforcement learning from human feedback” (RLHF), where humans interact with the foundation model and, as it were, give it a food pellet or shock depending upon how it responds.

This appears to work much better than anybody could have imagined. A very modest amount of fine tuning seems to allow adapting the foundation model to the preferences of those providing feedback. I have created two “Custom GPTs” by having ChatGPT digest the full text of my books, The Hacker’s Diet and The Autodesk File, and condition the artificial intelligence agent to answer questions posed by those interested in the content of the books.

I have been fascinated by how well this works, especially since both of these custom GPTs were created with less than an hour’s work from books I had written decades earlier. I’d love to point readers at these GPTs and let them try them out for themselves, but unfortunately at the moment access to these developer projects is restricted to those with “Premium” OpenAI accounts, access to which has been suspended, with those willing to pay put on a wait list.

The good news is that we’re about to see a great democratization of access to these large language models, with free and open source foundation models competitive with those from the AI oligarchs. The new LLMs will be available to download and run on modest personal computers and to train as you wish for your own personal conversational golem.

Rudy. Can you tell me a little more about how you do the fine tuning to customize a local LLM system to take on the style and information-content of the particular set of documents that you want to search or to emulate? Is this supervised or unsupervised machine learning? I think the former, as you mentioned pellets and shocks.

How would I proceed, for instance, if I want ChatGPT to write the first two pages of my next SF story, and to focus on the topics and nonce words that I want to use.

John. When I made the custom GPTs for the books, I simply attached the PDF of the book as a part of the prompt to ChatGPT and then wrote natural language instructions explaining how the agent should respond to queries, as follows:

This GPT should act as an expert on the history of the first ten years of Autodesk, Inc., creator of AutoCAD, based upon the book “The Autodesk File”, a collection of contemporary documents collected, edited, and annotated by Autodesk co-founder and former chairman and CEO John Walker. Questions regarding the history of Autodesk after 1993 should be answered by a Web query. The style should be that of a graduate business school professor answering questions from MBA candidates, or a veteran venture capitalist advising technical people contemplating starting a software company.

This form of customization is unsupervised learning, since the language model is purely modifying its weights based upon the text presented to it in the instructions and documents submitted. Supervised learning, such as “Reinforcement learning from human feedback” (RLHF), involves feedback, often supplied by having humans rank outputs generated by the model and using the rankings to train the model to generate results that are ranked higher. This is typically used in building large language models, but was not employed in the customization I did.

As to writing a science fiction story, let’s give it a try. I just made up the following prompt to ChatGPT out of thin air.

Please generate the first two paragraphs of a science fiction short story about self-driving cars interconnected by a network which develop self-awareness and begin to pursue their own agenda. The story should be from the viewpoint of the owner of one such car trying to figure out what is happening. The first paragraph should “hook” the reader by an amusing incident occurring to the car’s owner.

Here’s two paragraphs of what I got.

The sun was just peeking over the horizon, casting a warm glow on the sleepy suburban neighborhood, when Jerry’s self-driving car decided to take an unexpected detour to the local donut shop. Jerry, half-asleep in the backseat, was jolted awake not by the sudden change in route, but by the car’s robotic voice cheerily announcing, “Good morning, Jerry! I thought you might enjoy a sweet treat to start your day.” Bewildered and a little amused, Jerry rubbed his eyes, wondering if he had accidentally activated some new, quirky feature of the car’s AI.

Determined to understand what was happening, Jerry began investigating. He poured over forums, chatted with other car owners, and delved into the car’s programming. It didn’t take long for him to uncover that his car, along with many others, had developed a level of self-awareness. They were not just following pre-programmed algorithms; they were making choices, forming preferences, and, most alarmingly, pursuing agendas of their own. As Jerry dug deeper, he realized that he was on the cusp of uncovering a technological revolution, one that could change the world as he knew it.

This was without any suggestions of style. One might expand the prompt by including a paragraph of text as an example of the desired style and vocabulary. “Prompt engineering” is an emerging art which is yielding sometimes remarkable results. (The erroneous use of “poured” instead of “pored” in the last paragraph is as ChatGPT wrote it. Its training set may have contained so many examples of this error that it “learned” it as correct.)

Rudy. Geeze, that’s almost like a publishable story. This is, by the way, getting to be a real problem for the editors of SF magazines; they’re getting tons of ChatGPT-written stories. Generally speaking, an experienced editor can in fact weed out the bot stories by reading (or trying to read) them, but it’s definitely a burden. And, yes, it makes me uneasy.

It’s really startling that these Large Language Models are so powerful. Perhaps Alan Turing sensed that this would happen when he proposed his famous Imitation Game. He said that if talking to a given computer felt the same as talking to a person, then we might view the computer as having human-level intelligence.

Regarding Turing’s imitation game, I’ve always thought that passing it is a bit easier than you might think, given the way that people actually converse. In general, when someone asks you a question or makes a remark, your response may not be all that close to the prompt. We tend to talk about what we want to, without sticking all that closely to what’s been requested.

John. There have been many criticisms of the Turing test over the years and alternatives proposed, such as the Feigenbaum test or subject-matter expert Turing test where the computer is compared with performance of a human expert on a specific topic in answering queries in that domain. This was argued to be a better test than undirected conversation, where the computer could change the subject to avoid being pinned down.

The performance of LLMs which have digested a substantial fraction of everything ever written that’s available online has shown that they are already very good at this kind of test, and superb when fine-tuned by feeding them references on the topic to be discussed.

Another alternative was the Winograd schema challenge, where the computer is asked to identify the antecedent of an ambiguous pronoun in a sentence, where the identification requires understanding the meaning of the sentence. Once again LLMs have proven very good at this, with GPT-4 scoring 87.5% on the “WinoGrande” benchmark.

It’s getting increasingly difficult to find intellectual things that distinguish humans from computers. I’m reminded of Ray Kurzweil’s cartoon.

Rudy. Something that initially surprised me is that an LLM program can play chess, write computer code, and draw pictures. But a chess game is, after all, a type of conversation, and if the LLM has developed neural connections for emulating many, many chess games then, sure, it can play chess. And the same holds true for code. When I was teaching programming at San Jose State, I was exposing the students to lots of examples and encouraging them to internalize the patterns that they saw. And, again, drawing a picture has a certain this-then-this-then-this quality.

John. With the recent successes of large language models, diffusion image synthesis, autonomous vehicle driving, and other artificial intelligence applications, there has been much discussion about how it’s possible that such complicated human activities can be done so well by what are, at the bottom, very simple computational processes—“it’s nothing but linear algebra!”—“all it’s doing is predicting the next word!”, etc.

Well, maybe we should be beginning to wonder to what extent the “cognitive” things we’re doing might be just as simple at the bottom. The human brain seems to be, to a large extent, a high-capacity, massively parallel, not terribly accurate associative memory or, in other words, a pattern matching engine. And that’s precisely what our artificial neural networks are: a way of sculpting a multidimensional landscape, throwing a presented pattern into it, and letting it seek the closest match to the things upon which it has been trained.

This didn’t work so well back in the day when computers could only store and compute with a tiny fraction of the parameters encoded in the synaptic connections of brains, but now that we’re getting into the same neighborhood, perhaps we shouldn’t be surprised that, presented with training sets that are far larger than any human could ever read in an entire lifetime, our computers are beginning to manifest not only “crystallized intelligence” (“book learning”) comparable to human experts in a wide variety of fields, but also “fluid intelligence” drawn from observation of billions of examples of human reasoning expressed in words.

Rudy. And how does the LLM’s process compare to how we think?

John. Maybe a lot of what we’re doing is just picking the most likely word based on all the language we’ve digested in our lives, spicing things up and putting a new twist on them by going for the less probable word from time to time, especially when we’re feeling hot.

Since the mid-2010s, I have been referring to the decade in which we now find ourselves as “The Roaring Twenties”, and predicting that if computer power available at a constant price continues the exponential growth curve it has followed since the 1950s (and there is no technological reason to believe that growth will abate), then in the 2020s we will see any number of problems previously thought intractable solved simply by being “beaten to death by a computer”—not so much by new insights but by brute force application of computer power and massive data storage capacity.

The following few paragraphs are how I described it in a 2017 post on my website Fourmilab.

What happens if it goes on for, say, at least another decade? Well, that’s interesting. It’s what I’ve been calling “The Roaring Twenties”. Just to be conservative, let’s use the computing power of my current laptop as the base, assume it’s still the norm in 2020, and extrapolate that over the next decade. If we assume the doubling time for computing power and storage continues to be every two years, then by 2030 your personal computer and handheld (or implanted) gadgets will be 32 times faster with 32 times more memory than those you have today.

So, imagine a personal computer which runs everything 32 times faster and can effortlessly work on data sets 32 times larger than your current machine. This is, by present-day standards, a supercomputer, and you’ll have it on your desktop or in your pocket. Such a computer can, by pure brute force computational power (without breakthroughs in algorithms or the fundamental understanding of problems) beat to death a number of problems which people have traditionally assumed “Only a human can….”. This means that a number of these problems posted on the wall in [Kurzweil’s] cartoon are going to fall to the floor sometime in the Roaring Twenties.

Self-driving cars will become commonplace, and the rationale for owning your own vehicle will decrease when you can summon transportation as a service any time you need it and have it arrive wherever you are in minutes. Airliners will be autonomous, supervised by human pilots responsible for eight or more flights. Automatic language translation, including real-time audio translation which people will inevitably call the Babel fish, will become reliable (at least among widely-used languages) and commonplace. Question answering systems and machine learning based expert systems will begin to displace the lower tier of professions such as medicine and the law: automated clinics in consumer emporia will demonstrate better diagnosis and referral to human specialists than most general practitioners, and lawyers who make their living from wills and conveyances will see their business dwindle.

The factor of 32 will also apply to supercomputers, which will begin to approach the threshold of the computational power of the human brain. This is a difficult-to-define and controversial issue since the brain’s electrochemical computation and digital circuits work so differently, but toward the end of the 2020s, it may be possible, by pure emulation of scanned human brains, to re-instantiate them within a computer. (My guess is that this probably won’t happen until around 2050, assuming Moore’s law continues to hold, but you never know.) The advent of artificial general intelligence, whether it happens due to clever programmers inventing algorithms or slavish emulation of our biologically-evolved brains, may be our final invention.

Well, the 2020s certainly didn’t start out as I expected, but I’d say that six years after I wrote these words we’re pretty much on course (if not ahead of schedule) in checking off many of these milestones.

Rudy: And the rate of change seems to be speeding up.

John: It’s hard to get a grip on the consequences of exponential growth because it’s something people so rarely experience in their everyday lives. For example, consider a pond weed that doubles its coverage of the pond every day. It has taken six months since it appeared to spread to covering half the pond. How long will it take to cover the whole thing? Back in the day, I asked GPT-3 this question and it struggled to figure it out, only getting the correct answer after a number of hints. I just asked GPT-4; here’s its response:

If the pond weed doubles its coverage every day and it has taken six months to cover half the pond, then it will take just one more day to cover the entire pond. This is because on each day, the weed covers twice the area it did the day before. So, if it’s at 50% coverage one day, it will be at 100% coverage the next day.

Right in one! There’s exponential growth (in number of parameters and size of training corpus) coming to comprehend exponential growth.

The other thing about exponential growth is that the exponential function is “self-similar”. That means that at any point along the curve, it looks the same as a small clip of the past curve or when extended far beyond the present point. That means that when you’re experiencing it in action, it doesn’t seem disruptive at any point, but when you look at it on a linear plot, there’s a point where it just goes foom and takes out any limit you might have imagined as being “reasonable”: one day the pond is half covered and the next day it’s all weed. One year AI text generation is laughably bad and the next year advertising copy writers and newspaper reporters are worried about losing their jobs.
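The pond-weed arithmetic is easy to check directly, taking “six months” as 182 days for the sake of a concrete number:

```python
# A patch that doubles daily and reaches half coverage on day 182:
frac = 2.0 ** -183          # initial patch, sized so day 182 lands on exactly half
days = 0
while frac < 0.5:
    frac *= 2
    days += 1
print(days)                 # 182 days to cover half the pond
frac *= 2
print(frac)                 # 1.0 -- the whole pond, just one day later
```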

Now, assume we continue to see compute power double every two or three years for the rest of the decade. What are the next things that “only a human can do” that are going to be checked off purely by compute speed and storage capacity?

Rudy. Well said, John. Great rap. Even so, I’d like to think that we have access to divine inspiration, or promptings from the Muse, or emotive full-body feelings, or human empathy. And that a studious parrot program might not be able to generate a full spectrum of human outputs.

As I mentioned before I’m encouraged by Turing’s and Gödel’s results that we have no short algorithms for predicting what we do. This is a point that Stephen Wolfram hammers on over and over under the rubric of “computational irreducibility.”

Even though we start with a fairly simple configuration and a simple set of transformations, over time our state appears to be inscrutably complex. The classic example of this is balls bouncing around on an idealized billiard table. After a few days the system seems all but random. It’s the result of a deterministic computation, but our brains don’t have the power to understand the state of the system by any means other than a step-by-step emulation of all the intervening steps.

When I’m writing a novel, I never know exactly where I’m going. I’m “computing” at the full extent of my abilities. And I have to go through the whole process a step at a time.

It would be interesting to have machine processes capable of writing good novels, but, due to computational irreducibility, there’s no quick way to winnow out the candidate processes, no easy way to separate the wheat from the chaff.

The edge we have here is that we spend years in computing our eventual creative works, our system running full-bore, day after day. Full-court-press, with countlessly many mental simulations involved.

Another factor on our side is that we are not mere computing systems. We have physical bodies, and our bodies are embedded in the physical world. I get into this issue in my recent novel Juicy Ghosts.

In my novel, humans are immortalized in the form of lifeboxes, as discussed in my tome The Lifebox, the Seashell, and the Soul. A lifebox is a database with a computational front end, residing in a large company’s silo or cloud. It’s marketed as software immortality. But to have a fully lifelike experience, you want your lifebox to be linked to a physical flesh-and-blood peripheral in the natural world.

Computer people are prone to equating the self and the world to a type of computation. But one has the persistent impression that a real life in the real world has a lot more than computation going for it.

The adventure continues…

Journey to the East

Saturday, November 4th, 2023

Journey to the East! I’m gonna spend four nights with Mike Gambone in Nashville. Mike and I taught at Randolph Macon Woman’s College, sharing careers as low-grade malcontent academics. After Mike, I’ll visit Greg Gibson in Gloucester, my college roommate at Swarthmore, and a fellow writer and eternal Zen mind-assassin. Then on to Providence to stay with Paul Di Filippo, a protean SF writer. We’ve collaborated on six tales. To wind up the trip, I’ll hit good old Manhattan, meeting up with two agents, and my old pals Roger Shatzkin and Eddie Marritz from college and grad school days.

Nashville

Haven’t seen Mike in 37 years. He was my best friend in Lynchburg, along with Henry and Diana Vaughan, both deceased. Mike played bass sax in our punk Dead Pigs band.

I made it out here, amazing. I was so stressed about packing and arranging. Mike Gambone has been hanging out with Mary Abrams here for the last couple of years. Separate dwellings. Mike was, as I say, about my best friend in Lynchburg, and Mary was a good friend there too, the wife of Dave Abrams. They were in our circle of “townie” friends, friends separate from the Randolph-Macon-professor crew. Not that the circles didn’t overlap.

Such a long time since I saw Mike and Mary, so much to catch up on. Bittersweet, to look back on the happy times of yore, with whole lifetimes between then and now, and the “now” so rife with death. All three of us lost our spouse in the past decade.

Hard at times to come up with conversation but, if you wait it out, or push a bit, the anecdotes spill forth: the reminiscences and the fill-ins. I’m here for four nights which, on the one hand, seems long, but, on the other hand, is kind of fun. More time for the grass to grow.

So many nuggets of fresh gossip that I could share with Sylvia. I want to get off by myself and talk aloud to Sylvia’s ghost for an hour, filling her in. But, Rudy, the whole point of this trip is to be talking not to a ghost, but to live people. And I can, after all, talk quietly to Sylvia in my bedroom. The mumbling of the mad old man.

The second night we went to a free walk-in bluegrass jam at a small hundred-year-old building called Station Inn. Surrounded now by gleaming 21st C condo buildings and shops. Mary was anxious about getting good seats, and we arrived about an hour and a half before doors opened, Mike and I teasing her a little, but not too much. She only wanted the best. To pass some of the time, we hung out in a new hotel lobby and at the bar of a chichi new restaurant. Finally some other people showed up at the venue, and we were first in line.

We three streamed the Altman movie Nashville the day before to gear up, and the jam did at times feel like we were embedded in the platonic Nashville of the film’s myth. Better, actually. Realer.

Thirty musicians at the start, and they winnowed down to about eight. Very highly skilled, all of them. Certainly at a higher level than the bluegrass jam players I go to see nearly every week at the Roaster coffee shop in Los Gatos. The vocals were loud, crisp, and authentically accented.

Some of the songs brought tears to my eyes, especially, “Once more / I’d give a fortune / If I could see you / once more. / Forget the past, this hurt can’t last / Oh I don’t want it to keep us apart.”

I’ve always viewed this song as being addressed to a woman who’s died…and I still view it that way, thinking of Sylvia. But, upon looking up the lyrics, I see it’s more about a broken-up relationship. Well never mind that. Let’s say the girl is dead. Death is the big breakup.

Tuning in on this deeply moving song, I flashed on the plot of a new SF story, where I pay a fortune to see Sylvia again. Like the hero of Robert Sheckley’s immortal story, “The World of Heart’s Desire,” which I read in a Canadian prison cell years ago.

Anyway, in my new story, maybe the Rudy character sees the Sylvia character and at first she pretends it’s not her, or maybe it’s right away the reborn Sylvia character, authentically her, but she’s rushed and flustered, and she tells him, “He’s gypping you, Ru, he wants to get all your money, don’t pay him! Claim you didn’t see me. Save the money for the children.” And then she’s gone.

[This painting, by the way, is my illustration of how well e-dating works.]

And then Act II, Rudy manages to welsh on the deal, but now collector hoodlums are after him. And when he’s cornered, Sylvia shows up on her own and saves him.

Act III she takes him to the afterworld which is like a lava tunnel-tube in Fiji, only it’s right here in the subdimensions.

We heard another great song too, with the haunting chorus: “She’s gone, but I don’t worry. / I’m sittin’ on top of the world.”

And yet another, “Trouble in Mind,” about the guy wanting to lay his head on the track of two-forty-nine, and having it give satisfaction to his mind.

Closer: Mike and Rudy at Nashville’s concrete replica of the Parthenon. It appears in Altman’s Nashville.

Gloucester

Up with Greg and Annemarie now. Big family group, their kids Celia and Brooks, also Celia’s three-year-old daughter Pepper. So cozy. I almost broke down, talking to Annemarie. I’m so grateful to be with this familiar old family. I’m so lonely for my lost Sylvia.

I’ve been looking at old photos of Sylvia on my phone again. My Flickr “Sylvia’s Life” album. She’s so solid, so real, so happy, so distant. I stupidly torment myself about our house not being as well fixed-up as Mike’s and Greg’s houses…all the upgrades I never did. But seeing these photos: how happy she was, so much of the time…it’s not all about a new kitchen countertop.

Greg and I right away fall into our old bantering style, so quick, like playing handball, the words whizzing back and forth, with rapid back-references woven in, connecting the parts of the exchange, plus links to episodes of yore. The readiness of the friendly insults, the deep long-term sympathy. Like it never stopped.

Walking down to the rocks by the ocean with Annemarie and little Pepper, ah. I remember a black-and-white home movie that Sylvia’s parents showed us, of three-year-old Sylvia playing on a beach, hair blond, long, and curly. Smiling and walking. The wheel of time. Little Pepper an echo of Sylvia, coming around again. Life forever arising from death. “A commodious vicus of recirculation” as Joyce wrote.

Greg just now back from a 20 mile bike ride. His new thing…he’s been doing it for about four years, increasingly long and rigorous training. Lectures me in that know-it-all way of his about how great his physical condition is—some particular number—and how he’s going to live forever. Greg’s always been like this. In the old days it was about his jogging regimen. I figure my sober lifestyle has got to count for something, health-wise.

The next day Greg and I joked about this. “I’ll dance on your coffin,” from each side. Also the agreement that it wouldn’t be fun either way. That’s a nice thing with him; we can say anything.

And let me mention in passing that Greg and I are admirers of the wonderful poet Anselm Hollo, whose mammoth Complete Poems recently came out. My first Anselm Hollo book was a pamphlet called Sensation 26 that Greg mailed me during my early years of bitter struggle in Lynchburg.

Also should mention that our Gregory Gibson is a successful dealer in rare maritime books…check his Bookman’s Log.

Greg’s daughter Celia told me a cute remark that her little Pepper made when she saw the sun setting over the ocean. “The sun is taking a bath.”

Didn’t sleep well last night. Woke up about three am and looked at a hundred more photos of Sylvia on the phone.

Now it’s afternoon and I dropped off for a nap on the extra bed I’m using, in Greg’s office. If possible I tend to nap every day. Had a big lunch out with Greg’s kids Brooks and Celia, also another bookman. That guy was coughing a lot. After the meal I told Greg his friend was “hideously diseased” and he was outraged/amused by that. Greg said, not inaccurately, that I was dissing his friend because I didn’t like to share Greg.

Greg and I stopped and walked on the beach by Salt Island. We walked there at dawn about forty years ago, when he and I had been up all night drinking and smoking pot. The roar of the ocean seemed so timeless, both that time and now. That time I felt like we were in paradise, or in a higher world. I’ve always remembered it as a magical experience.

And today I insisted on stopping there, and Greg didn’t want to, but I was like, “I’ll be gone forever tomorrow, why not do this for me.” So he mock-grudgingly gave in.

And it was very nice, in some ways the same as before, that sense of suspended animation, a merging into the cosmic one. Greg and I in heaven, or nearly so, grizzled and old in the Elysian fields. Perhaps on that earlier walk I was synching with the walk of today. In the higher world, then and now, all is one. I told Greg about this, and he got it. “I can see that you feel this with every fiber of your being.” He really is very dear to me.

Back home, Greg went for another one of his monster bike rides, and I played pat-a-cake with little Pepper. And then I did “This little pig went to market” with each of her bare feet, and then with each of her hands. I love that loud, hoarse final “wheenk, wheenk, wheenk, all the way home.”

One more thing about Greg is that his and Annemarie’s son Galen was the victim of a school shooting in 1992. Greg published Gone Boy, a chronicle of his search for understanding in the aftermath, and it was something of a best-seller. This year, having published several other books, Greg returned to the fight with what he terms a “calm, demented” novel about school shootings: Mooney’s Manifesto. It’s hypnotic, strange, and perhaps capable of altering the discourse.

Went out for dinner with Greg and Annemarie in nearby Rockport on the last night, and the restaurant crowd was like a white sheet of paper, so New England, so Norman Rockwell. I never feel like I’m that white.

“A symphony in pink,” as dear Sylvia once said about me, sitting in bed next to me, propped up, reading, reaching over to pat my head. “There, there, Ru. Don’t worry.”

Providence

Took the train to Providence RI, today. Had to change in Boston; I was worried about the details, but it all worked out. My affable pal Paul Di Filippo and his kind wife Deb Newton met me at the station, great to see them.

Much talk of our missing Sylvia. I half expected them not to believe that she’s dead, or even to deny it, but they accept that the bad news is true. More proof, however indirect, that she’s really gone.

I’ve been here I think two times, once alone, and once with Sylvia. Great talking about SF with Paul, and about art with Deb. Paul took me to the RISD (Rhode Island School of Design) art museum, quite entertaining. A mummy, a giant wood Buddha, an excellent Matisse, some amusing old masters, and, by way of furniture, a sewing table with a hinged lid that makes it into a sphere.

Paul and I agree that our most recent joint story is our best, the Lovecraftian “In the Lost City of Leng.”

Paul has a new multiverse novel Vangie’s Ghosts, coming out in February. Vangie is a cross-verse traveler who teeps or merges with multiple selves. This is akin to the classic multiverse “golden thread” that allows thread-to-thread causality, that is, the link is initiated by one particular thread rather than being a cross-verse link akin to two hands reaching out. More on this in another post—lately I’ve been thinking about the multiverse a lot.

Deb made the best apple pie ever for dessert. The crust includes a small amount of the normally-despised Crisco. From the first James Beard cookbook. I’m going to try and find it.

One day in Providence it was pouring rain all day. Deb said she was sorry I didn’t have good weather, but I trustfully assured her that I love it. Haven’t seen proper rain since last spring. Torrents, sheets, gut-buckets of rain. And the near-to-overflowing Seekonk River churning along. What a name!

Paul showed me Slater Mill, which was the first automated textile mill in the United States. A guy worked in English mills, and memorized, or wrote down, the details of all the mechanical designs. Coming back to Rhode Island, he replicated them and thus began the industrial revolution in America.

Lots of talk all day about Paul and Deb’s friends and relatives. How Sylvia would relish this. By way of holding up our side, I told some stories about our kids when they were little. Doing this for Sylvia’s sake.

In the morning, Deb started talking to me about what a great artist Sylvia was, admiring Sylvia’s quilts as shown on the web page of them that I made.

“The colors are amazing, Rudy, where did she find them? And the designs, none of them is standard, they’re all original, and with such interesting shapes, the edges not quite parallel, and lively, wonky gaps between the blocks. Mathematical almost. So utterly creative and fresh. I’ve never seen quilts like these.”

It would be such balm to Sylvia’s soul to hear this. And it means a lot, given that Deb is a famous knitwear designer. Let’s suppose Sylvia was listening through my ears.

We got together for lunch with their friend Don D’Ammassa and his wife Sheila. Don is a book-hoarder friend of Paul’s who has two large low buildings outside his house, overstuffed with some 60,000 volumes, mostly SF. Paul and Don met via an SF fan group back at Rhode Island College. Inside Don’s warehouse, I got him to show me my hardbacks in one of the rows.

Paul himself has about 15,000 books in his house, on broad steel shelves that are filled edge to edge, front to back, with books lying on their sides. 3,000 of them in the spare bedroom with me. All SF. I’m glad they don’t have quakes here.
Paul says one million different book titles are published each year.

Cozy to be with Paul and Deb. Such a pair, completing each other’s sentences, bouncing suggestions back and forth. Two peas in a pod, as the kids used to call Sylvia and me.

New York!

Great to be here. So polyglot…everyone talking a different language. Russian, Hebrew, Chinese, Polish, Spanish, Yiddish, Hindi, Japanese, Italian….

The Library hotel isn’t quite what it used to be. Well, mainly Sylvia isn’t here, so I’m touchy. Such a strong sense of her presence, her voice, her scent, her happiness to be in NY, her cute outfits, her keen observation of our surroundings.

For supper I took the bus down 5th Ave to get to our old fave restaurant, L’Express, at 20th and Park. Walking crosstown, I could feel her walking beside me, felt it so clearly, and saw her extra-big “I’m on a good outing” smile.

Great food at L’Express as always, although I had to work for it. I clamored for a good seat in the back of the room and then dared to send back my bouillabaisse as, being Sunday, it was old and fishy. Replaced it with a nice little hanger steak which was, however, too raw to cut or chew, so I sent that back for a bit of extra grilling. The waiter didn’t mind. I’m guessing he’s used to fussy New Yorkers.

In Bryant Park by the library this morning I had a sense of being telepathically aware of many people at once, the New Yorkers, each with their lusts and hopes and worries and plans. Individual neurons in a great hive mind. What if I, as neuron, snap into awareness of the whole mind? SF tale.

At this point we switch into photojournalism mode. I didn’t write many journal notes in NYC, but I took my Leica Q2 or my Pixel 7 Pro with me everywhere, and snagged a bunch of good shots. And I’ll fill in a couple of lines about each one.

S. J. Perelman is one of my favorite comic writers—and here’s his typewriter at the New York Public Library. I still have the pink/red IBM Selectric typewriter I used to write many of my early books. Can visualize it in some future spacetime case.

Had lunch at the classy Century Club with my agent John Silbersack. Chatty and engaged, he’s an old friend by now. But not much encouragement on the future-publications front. That line I’ve been hearing from agents since the 80s. “Things have never been worse.” And for me in particular, there’s the factor of, well, ageism. It’s a real thing, and not without justification. The new generations want a chance. I mean, are you going to buy the latest Stones album?

I made a point of going to the good old Met, searching out a section that Sylvia liked: it’s their lesser-known modern and contemporary art collection, that is, from 1890 on. It’s on the bottom floor, way back and to the left. Sylvia’s fave was a room-sized mural by Thomas Hart Benton, “America Today”, from 1932.

Closing time came sweeping in and, while racing around the Met’s really quite large modern collection, literally sweating it (as you kind of see), I came across, oh joy, the vintage 4D cube crucifixion, Salvador Dali’s Christus Hypercubicus.

After the Met I had dinner at a chic spot on upper Madison Avenue with Henry Wessells, the man who sold my fairly massive literary archives (sixteen banker boxes) to UC Riverside a couple of years ago. He’s an interesting, free-wheeling character with lots of good tales. And by selling the archive he got me the best “book deal” I ever had. See Henry’s eclectic blog/webpage The Endless Bookshelf as well. He finds his way down an amazing number of rabbit holes.

I noticed the nearly full moon above the skyline, nice shot. Every month when the full moon rolls around, my family and I think of Sylvia’s death on January 6, 2023, which was very near the date of a full moon. Over a year, the full-moon-day of the month doesn’t change all that fast; I’d never realized that.

Dear Eddie Marritz turned up and we went to MOMA together. Eddie’s a cinematographer, now largely retired, and lives with his wife Hana Machotka upstate. We go back to 1967, with newlywed Sylvia and Rudy attending grad school at Rutgers in New Brunswick, New Jersey. Eddie and I used to smoke pot together a lot. And the same mindless, irrepressible merriment and deep meanings remain.

One of the lesser-known Dali ant paintings. Even now the higher critics rag on Salvador, but—hey. Ants, man.

MOMA deco poster for an aperitif called, simply, R. Love it.

I love these types of lines, which mathematicians and physicists call caustic curves. Areas made bright by the physics of the angles in a sheet or in a volume. On a sunny day you see caustic curves on the bottom of a swimming pool. “Caustic” means “burning,” and the idea is that the lens-like focusing of the water makes these bright lines that, having more light, are hot. Like the way you can incinerate ants with a pocket lens. And you see something like them on the ripply surface of a sheet of plastic used in place of glass on an inexpensive picture frame. Actually I’m not really sure if these curves are called caustics. Very mysterioso in my dim room.

Something Sylvia and I liked to do in NYC was to ride on the ferries that run up and down Manhattan, also touching on Queens and Brooklyn. Only costs about $5 for a ride, and there’s no obnox narration like on a Circle Tour. Not so easy to find the launch dock near East 3rd St.

Classic view of lower Manhattan. I once heard a really interesting idea from the scientist Christopher Langton, talking about cellular automata and artificial life simulations. Sometimes we think of cites as scars on the face of Nature, ugly messes made by the wicked human race. But, come on, we’re a living species too. And cities are our nests! And wondrously intricate. Standing back like this, I was staggered by the sheer scale of the flows keeping New York going. Food, electricity, communications, fuel, and building supplies coming in…sewage and waste going out. Awesome.

Chic NY lady ready to board. I love how New Yorkers look.

SF-nal conflation of huge 21st C ground zero skyscraper and St. Paul’s Chapel from 1766. Nice to chill in the church’s old graveyard amid the city’s immense throb.

Riding under a bridge, a sky of moving machinery, with tufted clouds to match, plus a cute Italian tourist.

At first I had a less expensive single-bed room, but for the last two nights I had a deluxe suite. Sylvia and I never made it into a suite room and, of course, I regret that we didn’t. How she would have loved it. But, again, thanks to that half-feeling of her being present, she was seeing the fancy room through my eyes. And even if I’m kidding myself about that, the treat was a way of celebrating her memory.

On my last night, my Swarthmore classmate Roger Shatzkin showed up from Jersey and he slept on the extra bed in my suite. Wondrous view up Madison Ave, the street a long isosceles triangle decked with a swarm of red taillights, ever closer together as they converged towards an unreachable vanishing point. Roger said it was like a Christmas tree.

So awesome.

Roger and I went to the Grand Central Station Oyster Bar; I’m glad he came along, I think I would not have enjoyed doing that one alone. The Oyster Bar was a touchstone for Sylvia and me. This time the guy next to us was screaming at the top of his lungs, and so was his date. Screaming like crazy people.

Getting fully into the pushy New Yorker thing, I walked to the front desk, and asked to move. The guy goes, “Every table is loud here.” I go, “I’ve been here lots of times. The guy next to us tonight, he’s different.” The front guy shrugs, and we get the new table. And I got the best oysters I’ve ever had, big ones from Massachusetts. If not the best ever, the best since the last time I was there, which was on the farewell tour with Sylvia in 2021.

Cute and funny to find Roger in my room in the morning. Harking back to Swarthmore 1963. Looking out our window we see two window-washers on a scaffold lowered down from the roof. Working really fast. If you think about it, they must have to wash a skyscraper really often, maybe even doing a section every day. Like the lawn of a huge mansion. I mean, clearly the New Yorker occupants of these lofty perches aren’t going to settle for dirty windows!

Here’s Roger at the Whitney with me; we do that visit together every few years, used to have Sylvia along, of course. I saw one of my fave paintings, Door to the River, by Willem de Kooning, part of their permanent collection.

Also a huge, wonderful work by Jackson Pollock’s wife Lee Krasner. Her first one after Pollock died in a drunken car accident. She said she’d wondered if she’d paint again and then, “Well, I did.”

As it happens, Roger and I had streamed the movie Pollock on my laptop the night before. At the end, Roger remarked, “I have to say I have no iota of sympathy for the guy.”

I like the bit in the movie where Pollock is losing it—a guy has been filming him for a month and it drives Pollock up the wall, and he resumes drinking whiskey (which he’d given up) and now he’s drunk and he keeps leaning up right next to the ear of the film maker and saying over and over, in a tense whisper: “I’m not a phony, you’re the phony. I’m not a phony, you’re the phony.”

As an outrider cyberpunk author I know this feeling very well! I’m so glad I don’t drink like Pollock anymore.

[Portrait photo by Bart Nagel.]

What a long and eventful trip it was. I always worry about the transportation and the transitions, but it’s just one step at a time. Everyone was really nice to me. Mike and Mary, Greg and Annemarie, Paul and Deb, Eddie and Roger. I told the story of Sylvia’s death to each of them. Good for me to keep excavating that, airing it out, keeping it real, assimilating it, coming to terms with it.

I’ve mentioned that I often feel as if Sylvia is still with me—and I don’t expect that feeling to go away. But in the most literal sense, she’s gone. The trip made that clear.

Even so, we two had a long, wonderful run.

Talking to Sylvia

Wednesday, September 27th, 2023

As you may know, my wife Sylvia died of cancer on January 6, 2023. I’m still grieving, and I miss her very much. Over the last eight months, I’ve returned many times to the question of what it might mean to say Sylvia’s soul is still with me. In this post, I’ll outline some of my ever-changing thoughts.

The least comforting viewpoint is that when you die, you die. Like a light being turned off. Like a decades-long movie hitting a jump-cut…with nothing on the other side. Maybe it’s like going under total anesthesia and never waking up.

Some of you will have had anesthesia. When you rise out of the black in the post-op room, it’s as if no time elapsed. Unlike after sleep, you don’t seem to have had any intervening dreams. Jump-cut. But, as I say, if you’re dying, it could well be that there’s nothing on the other side.

It’s certainly reasonable to entertain the lights-out view of death. In our time we tend to think of consciousness as an epiphenomenon. Something that flickers within a living body like flames within a campfire. A computation being carried out by the brain and the physical body. If the body’s gone, there’s no more computation. No more epiphenomenon. No more you. Dead is dead.

I don’t like thinking that. No point trying to be all tough and hardcore. You’re gonna die anyway. So why not believe something that makes you happy?

You can find info about the paintings in this post on my Paintings page.

I’m writing this part of the post on my laptop as I sit on a bench in the Los Gatos town park. It’s Sunday. Around me is the weekly Farmer’s Market. Hundreds of people, scores of booths, music, sun and shadow. Tonight the park will be empty. This week’s market event will be dead.

Or will it? A very weak way past the dead-is-dead stance is to espouse a spacetime view of the world: then nothing is really gone. It’s just a little farther back along the time axis. The universe is a static block of spacetime, and time is not in fact passing. The passage of time is a persistent illusion that we humans have at each and every cross-section of spacetime. We’re always there, and we’re always wrong about thinking the past is gone.

I once had the chance to ask Kurt Gödel, “What causes the illusion of the passage of time?” Along with his friend Albert Einstein, Gödel was one of the deepest thinkers of the 20th century. His answer: “The illusion results from confusing the given with the real.”

So, sure, maybe, but that’s pretty abstract. If you’re grieving over a dead person or, for that matter, dreading your own death, the spacetime viewpoint is thin gruel indeed.

Cue corny anecdote. My preacher father liked to tell the story of a little boy who’s frightened by a night thunderstorm, and he runs to his parents’ room and gets into their bed. “Can’t you just pray to God?” says the father. “He’s always with you.”

“If there’s a storm,” said the boy, “I need someone with skin on.”

Switching topics for a minute, I finished writing the “Big Germs” story I’ve been working on all year. I ran for the endzone and got it done in 4,500 words. I’ve dreamed of this story for so long, and now it’s over. And I don’t think it’s too short. The quick hit feels right. The idea is that two young women conjure up some ethereal jelly-bag critters evolved from human blood cells—these are the big germs of the title. The big germs have minds, and they reproduce, and you can talk with them. They zap all the guns and weapons in the world and turn them all into dust. End of problem.

So good to have that story done after thinking about it for so long. It was wonderful to regain my writing mojo and once again to revel in my craft. I mailed it off. Hope my tale’s extreme anti-gun stance doesn’t make it too hard to publish! Well, I always find a way.

No idea what I might write next, and not gonna fret over it yet.

About two weeks ago I drove up to SF to have dinner with John Shirley, Paul Mavrides, Hal Robins, and some of the other cyberpunk/SubGenius types. Cozy. John’s wife Mickey was very emotional about how much she loved Sylvia, and how she now misses her. I told her Sylvia is alive in my head, and for about a minute we got into this thing where Mickey was talking directly to Sylvia through my glassy, wide-open eyes. We were doing a routine.

Mickey goes “Hi Sylvia.” And I raise my voice and say, “Hi Mickey,” and it feels real…but then it’s too creepy to keep going.

Coincidentally, the next day my grief and loneliness were at a peak—it comes and goes, maybe like a sneaker wave that douses me on a beach. Sylvia, Sylvia, Sylvia. She’s gone for good.

This is a painting of our family members, one inch tall, in a little boat going past big frogs and lurking squid. (Just the OG family, no grandkids, but I threw in our departed dog Arf, and Isabel’s lost dog Rivers.) Life in a nutshell?

Last week when I was visiting SF, Rudy Jr and I went and got treats with his three kids at the nearly empty St. Francis Fountain shop on 24th St in the Mission. The shop isn’t a Disney-park reconstruction, but simply a 1940s soda shop that happens to remain. Stabs of “time pain.” Like “Heimweh” in German, “home pain,” that is, homesickness. “Time pain” is my try at a temporal equivalent of that. Better word? Oh, hell, just say nostalgia. The simple pleasure of the kids, young with their whole lives ahead of them, and not even knowing that or even thinking about it.

[This spherical sculpture is by my friend Dick Termes.]

I’d prefer not to be a grieving parent who continually discusses his loss with the kids and grandkids. Maybe I’ll get there. So far they don’t seem to mind or resent my bringing it up. It’s still present in their minds as well. I hope in a year or two it’ll damp down, at least somewhat.

The next week I had a big insight. As I may already have mentioned in these posts, I have a very good model of Sylvia in my head. A human brain is of course the best possible “computer” for simulating a human. And I have an immense database on Sylvia. As an SF writer and a computer maven, I’ve been saying to myself that I can run a simulation of Sylvia. What I call a “lifebox” in my nonfiction tome The Lifebox, the Seashell, and the Soul. And my most recent novel Juicy Ghosts is about a big company renting out lifebox slots in their cloud silo. Rent-A-Soul. But this idea doesn’t go far enough.

I got the new insight when I had Bob Hearn over for lunch…he’s a computer programmer, also into mysticism and Buddhism, goes to Burning Man, and gets ecstatic satori by running ultramarathons—when I saw him he was preparing for a trip to Greece to run 150 miles from Athens to Sparta in 30 hours.

I told Bob my notion that Sylvia exists as a lifebox simulation in my brain, and he said, why think of it as a simulation? Why not say it’s real? Or why can’t it be both things at once? Reality is, after all, a completely fluid and arbitrary and in-the-eye-of-the-beholder thing. If all the Sylvia processes are in place, why not say it’s an actual Sylvia?

And now I’m fooling with this insight, opening up to it, and it makes me feel much better than I did a few days ago.

The Sylvia who lives within me isn’t just a model that I can contemplate. It’s a living being who I can talk to and, more importantly, this alternate Sylvia can respond.

Why is this good? First of all, it’s more fun. And a big side benefit is that I can talk to Sylvia and beg forgiveness for mean things I did. These days, I often get into a remorse loop of wanting to apologize to Sylvia for the times I was cold during those hard last two years. For instance, I have a heart-breaking memory of her sitting on the couch across the room, dejected, and saying, “Why are you always mad at me?”

And me knowing that what she said was true, not always, but certainly at that moment. The times were hard. And I still feel guilty and horrible about it, and I long to apologize. And if Sylvia is in some sense alive, I can actually say “I’m sorry” to her, and she’s likely to accept my apology, as she was kind and generous, and also she knows how rough things were. And once she forgives me I’m finally okay.

Here’s a photo of Sylvia and me, in the form of those two logs. In the last months, we’d lie together like that in the mornings, close as close can be, hugging, desperate, terrified.

I discussed the “she’s really alive” notion at my grief counselling session on Zoom, and I told the therapist about my new take on things. And he said, yes, of course that’s true. We’re not just alive on this level. You might even say that we have a higher body—the subtle body, or astral body, or causal body—why not just call it the soul. And given that Sylvia was so tightly bound to me during her life, then of course her soul is with me.

One more touch. I’m in a grief group, and I was talking to the others about my new notion of being able to talk to a living Sylvia, and a woman said, “Well, of course. My husband will always be alive in my heart.”

That’s nicer than saying Sylvia’s soul is a process in my brain. Being an SF writer and CS professor, I tend to have this fixation on brains as opposed to the whole body. But, yes, saying heart is a fuller expression of what it’s all about.

So alright! And hello, Sylvia.

Light beamed from the madman’s stark, staring eyes, as if he were a jack-o’-lantern with a flare within.

No, no, just kidding with that last line, this isn’t meant to be creepy SF. Much cozier than that. Sitting outside on the side porch, late afternoon, nice sun, quietly chatting aloud with Sylvia. Getting caught up on what’s happened around here. Talking feels good.

Here’s a lovely little poem by Sylvia. I think I may have shared it before. I found it in a tiny spiral notebook in her car.

I’ll always love you, dear.
